BRIAN D. O. ANDERSON
SUMETH VONGPANITLERD
Network Analysis and Synthesis
A MODERN SYSTEMS THEORY APPROACH
PRENTICE-HALL NETWORKS SERIES
Robert W. Newcomb, Editor
PRENTICE-HALL NETWORKS SERIES
ROBERT W. NEWCOMB, editor
ANDERSON AND VONGPANITLERD  Network Analysis and Synthesis: A Modern Systems Theory Approach
ANDERSON AND MOORE  Linear Optimal Control
ANNER  Elementary Nonlinear Electronic Circuits
CARLIN AND GIORDANO  Network Theory: An Introduction to Reciprocal and Nonreciprocal Circuits
CHIRLIAN  Integrated and Active Synthesis and Analysis
DEUTSCH  Systems Analysis Techniques
HERRERO AND WILLONER  Synthesis of Filters
HOLTZMAN  Nonlinear System Theory: A Functional Analysis Approach
HUMPHREYS  The Analysis, Design, and Synthesis of Electrical Filters
KIRK  Optimal Control Theory: An Introduction
KOHONEN  Digital Circuits and Devices
MANASSE, EKISS, AND GRAY  Modern Transistor Electronics Analysis and Design
NEWCOMB  Active Integrated Circuit Synthesis
SAGE  Optimum Systems Control
SU  Time-Domain Synthesis of Linear Networks
VAN VALKENBURG  Network Analysis, 2nd ed.
NETWORK ANALYSIS AND SYNTHESIS
A Modern Systems Theory Approach
BRIAN D. O. ANDERSON
Professor of Electrical Engineering
University of Newcastle
New South Wales, Australia
SUMETH VONGPANITLERD
Director, Multidisciplinary Laboratories
Mahidol University
Bangkok, Thailand
PRENTICE-HALL, INC.
Englewood Cliffs, New Jersey
To our respective parents
© 1973 PRENTICE-HALL, INC.
Englewood Cliffs, New Jersey
All rights reserved. No part of this book may be reproduced in any way or by any means without permission in writing from Prentice-Hall, Inc.
ISBN: 0-13-611053-3
Library of Congress Catalog Card No. 70-39738
Printed in the United States of America
PRENTICE-HALL INTERNATIONAL, INC., London
PRENTICE-HALL OF AUSTRALIA, PTY. LTD., Sydney
PRENTICE-HALL OF CANADA, LTD., Toronto
PRENTICE-HALL OF INDIA PRIVATE LIMITED, New Delhi
PRENTICE-HALL OF JAPAN, INC., Tokyo
Table of Contents
Preface, ix
Part I
INTRODUCTION
1 Introduction, 3
1.1 Analysis and Synthesis, 3
1.2 Classical and Modern Approaches, 4
1.3 Outline of the Book, 6
Part II
BACKGROUND INFORMATION--NETWORKS AND STATE-SPACE EQUATIONS
2 m-Port Networks and their Port Descriptions, 11
2.1 Introduction, 11
2.2 m-Port Networks and Circuit Elements, 12
2.3 Tellegen's Theorem; Passive and Lossless Networks, 21
2.4 Impedance, Admittance, Hybrid and Scattering Descriptions of a Network, 28
2.5 Network Interconnections, 33
2.6 The Bounded Real Property, 42
2.7 The Positive Real Property, 51
2.8 Reciprocal m-Ports, 58
3 State-Space Equations and Transfer
Function Matrices, 63
3.1 Descriptions of Lumped Systems, 63
3.2 Solving the State-Space Equations, 68
3.3 Properties of State-Space Realizations, 81
3.4 Minimal Realizations, 97
3.5 Construction of State-Space Equations from a Transfer Function Matrix, 104
3.6 Degree of a Rational Transfer Function Matrix, 116
3.7 Stability, 127
Part III NETWORK ANALYSIS
4 Network Analysis via State-Space
Equations, 143
4.1 Introduction, 143
4.2 State-Space Equations for Simple Networks, 145
4.3 State-Space Equations via Reactance Extraction--Simple Version, 156
4.4 State-Space Equations via Reactance Extraction--The General Case, 170
4.5 State-Space Equations for Active Networks, 200
Part IV THE STATE-SPACE DESCRIPTION OF PASSIVITY AND RECIPROCITY
5 Positive Real Matrices and the Positive
Real Lemma, 215
5.1 Introduction, 215
5.2 Statement and Sufficiency Proof of the Positive Real Lemma, 218
5.3 Positive Real Lemma: Necessity Proof Based on Network Theory
Results, 223
5.4 Positive Real Lemma: Necessity Proof Based on a Variational
Problem, 229
5.5 Positive Real Lemma: Necessity Proof Based on Spectral
Factorization, 243
5.6 The Positive Real Lemma: Other Proofs, and Applications, 258
6 Computation of Solutions of the Positive
Real Lemma Equations, 267
6.1 Introduction, 267
6.2 Solution Computation via Riccati Equations and
Quadratic Matrix Equations, 270
6.3 Solving the Riccati Equation, 273
6.4 Solving the Quadratic Matrix Equation, 276
6.5 Solution Computation via Spectral Factorization, 283
6.6 Finding All Solutions of the Positive Real Lemma Equations, 292
7 Bounded Real Matrices and the Bounded
Real Lemma; The State-Space
Description of Reciprocity, 307
7.1 Introduction, 307
7.2 The Bounded Real Lemma, 308
7.3 Solution of the Bounded Real Lemma Equations, 314
7.4 Reciprocity in State-Space Terms, 320
Part V
NETWORK SYNTHESIS
8 Formulation of State-Space Synthesis
Problems, 329
8.1 An Introduction to Synthesis Problems, 329
8.2 The Basic Impedance Synthesis Problem, 330
8.3 The Basic Scattering Matrix Synthesis Problem, 339
8.4 Preliminary Simplification by Lossless Extractions, 347
9 Impedance Synthesis, 369
9.1 Introduction, 369
9.2 Reactance Extraction Synthesis, 372
9.3 Resistance Extraction Synthesis, 386
9.4 A Reciprocal (Nonminimal Resistive and Reactive) Impedance Synthesis, 393
10 Reciprocal Impedance Synthesis, 405
10.1 Introduction, 405
10.2 Reciprocal Lossless Synthesis, 406
10.3 Reciprocal Synthesis via Reactance Extraction, 408
10.4 Reciprocal Synthesis via Resistance Extraction, 418
10.5 A State-Space Bayard Synthesis, 427
11 Scattering Matrix Synthesis, 442
11.1 Introduction, 442
11.2 Reactance Extraction Synthesis of Nonreciprocal Networks, 444
11.3 Reactance Extraction Synthesis of Reciprocal Networks, 454
11.4 Resistance Extraction Scattering Matrix Synthesis, 463
12 Transfer Function Synthesis, 474
12.1 Introduction, 474
12.2 Nonreciprocal Transfer Function Synthesis, 479
12.3 Reciprocal Transfer Function Synthesis, 490
12.4 Transfer Function Synthesis with Prescribed Loading, 497
Part VI
ACTIVE RC NETWORKS
13 Active State-Space Synthesis, 511
13.1 Introduction to Active RC Synthesis, 511
13.2 Operational Amplifiers as Circuit Elements, 513
13.3 Transfer Function Synthesis Using Summers and Integrators, 519
13.4 The Biquad, 526
13.5 Other Approaches to Active RC Synthesis, 530
Epilogue: What of the Future? 537
Subject Index, 539
Author Index, 546
Preface
For many years, network theory has been one of the more mathematically developed of the electrical engineering fields. In more recent times, it has shared this distinction with fields such as control theory and communication systems theory, and is now viewed, together with these fields, as a subdiscipline of modern system theory. However, one of the key concepts of modern system theory, the notion of state, has received very little attention in network synthesis, while in network analysis the emphasis has come only in recent times. The aim of this book is to counteract what is seen to be an unfortunate situation, by embedding network theory, both analysis and synthesis, squarely within the framework of modern system theory. This is done by emphasizing the state-variable approach to analysis and synthesis.

Aside from the fact that there is a gap in the literature, we see several important reasons justifying presentation of the material found in this book. First, in solving network problems with a computer, experience has shown that very frequently programs based on state-variable methods can be more easily written and used than programs based on, for example, Laplace transform methods. Second, the state concept is one that emphasizes the internal structure of a system. As such, it is the obvious tool for solving a problem such as finding all networks synthesizing a prescribed impedance; this and many other problems of internal structure are beyond the power of classical network theory. Third, the state-space description of passivity, dealt with at some length in this book, applies in such diverse areas as Popov stability, inverse optimal control problems, sensitivity reduction in control systems, and Kalman-Bucy or Wiener filtering (a topic of such rich application surely deserves treatment within a textbook framework). Fourth, the graduate major in systems science is better served by a common approach to different disciplines of systems science than by a multiplicity of different approaches with no common ground; a move injecting the state variable into network analysis and synthesis is therefore as welcome from the pedagogical viewpoint as recent moves which have injected state variables into communications systems.
The book has been written with a greater emphasis on passive than on active networks. This is in part a reflection of the authors' personal interests, and in part a reflection of their view that passive system theory is a topic about which no graduate systems major should be ignorant. Nevertheless, the inclusion of starred material which can be omitted with no loss of continuity offers the instructor a great deal of freedom in setting the emphasis in a course based on the book. A course could, if the instructor desires, de-emphasize the networks aspect to a great degree, and concentrate mostly on general systems theory, including the theory of passive systems. Again, a course could emphasize the active networks aspect by excluding much material on passive network synthesis.

The book is aimed at the first year graduate student, though it could certainly be used in a class containing advanced undergraduates, or later year graduate students. The background required is an introductory course on linear systems, the usual elementary undergraduate networks material, and ability to handle matrices. Proceeding at a fair pace, the entire book could be completed in a semester, while omission of starred material would allow a more leisurely coverage in a semester. In a course of two quarters length, the book could be comfortably completed, while in one quarter, particularly with students of strong backgrounds, a judicious selection of material could build a unified course.
ACKNOWLEDGMENTS
Much of the research reported in this book, as well as the writing of it, was supported by the Australian Research Grants Committee, the Fulbright Grant scheme, and the Colombo Plan Fellowship scheme. There are also many individuals to whom we owe acknowledgment, and we gratefully record the names of those to whom we are particularly indebted: Robert Newcomb, in his dual capacities as a Prentice-Hall editor and an academic mentor; Andrew Sage, whose hospitality helped create the right environment at Southern Methodist University for a sabbatical leave period to be profitably spent; Rudy Kalman, for an introduction to the Positive Real Lemma; Lorraine Pogonoski, who did a sterling and untiring job in typing the manuscript; Peter McLauchlan, for excellent drafting work of the figures; Peter Moylan, for extensive and helpful criticisms that have added immeasurably to the end result; Roger Brockett, Len Silverman, Tom Kailath and colleagues John Moore and Tom Fortmann, who have all, in varying degrees, contributed ideas which have found their way into the text; and Dianne, Elizabeth, and Philippa Anderson for criticisms and, more importantly, for their patience and ability to generate inspiration and enthusiasm.
Part I
INTRODUCTION
What is the modern system theory approach to network analysis
and synthesis? In this part we begin answering this question.
1 Introduction
1.1 ANALYSIS AND SYNTHESIS
Two complementary functions of the engineer are analysis and
synthesis. In analysis problems, one is usually given a description of a physical
system, i.e., a statement of what its components are, how they are assembled,
and what the laws of physics are that govern the behavior of each component.
One is generally required to perform some computations to predict the
behavior of the system, such as predicting its response to a prescribed input.
The synthesis problem is the reverse of this. One is generally told of a behavioral characteristic that a system should have, and asked to devise a system
with this prescribed behavioral characteristic.
In this book we shall be concerned with network analysis and synthesis.
More precisely, the networks will be electrical networks, which always will
be assumed to be (1) linear, (2) time invariant, and (3) comprised of a finite
number of lumped elements. Usually, the networks will also be assumed to
be passive. We presume the reader knows what it means for a network to
be linear and time invariant. In Chapter 2 we shall define more precisely
the allowed element classes and the notion of passivity.
In the context of this book the analysis problem becomes one of knowing the
set of elements comprising a network, the way they are interconnected, the set
of initial conditions associated with the energy storage elements, and the
set of excitations provided by externally connected voltage and current
generators. The problem is to find the response of the network, i.e., the
resultant set of voltages and currents associated with the elements of the
network.
In contrast, tackling the synthesis problem requires us to start with a
statement of what responses should result from what excitations, and we
are required to determine a network, or a set of network elements and a
scheme for their interconnection, that will provide the desired excitation-response characteristic. Naturally, we shall state both the synthesis problem
and the analysis problem in more explicit terms at the appropriate time;
the reader should regard the above explanations as being rough, but temporarily satisfactory.
1.2
CLASSICAL AND MODERN APPROACHES
Control and network theory have historically been closely linked,
because the methods used to study problems in both fields have often been
very similar, or identical. Recent developments in control theory have, however, tended to outpace those in network theory. The magnitude and importance of the developments in control theory have even led to the attachment
of the labels "classical" and "modern" to large bodies of knowledge within
the discipline. It is the development of network theory paralleling modern
control theory that has been lacking, and it is with such network theory that
this book is concerned.
The distinction between modern and classical control theory is at points
blurred. Yet while the dividing line cannot be accurately drawn, it would
be reasonable to say that frequency-domain, including Laplace-transform,
methods belong to classical control theory, while state-space methods belong
to modern control theory. Given this division within the field of control,
it seems reasonable to translate it to the field of network theory. Classical
network theory therefore will be deemed to include frequency-domain
methods, and modern network theory to include state-space methods.
In this book we aim to emphasize application of the notions of state, and
system description via state-variable equations, to the study of networks. In
so doing, we shall consider problems of both analysis and synthesis.
As already noted, modern theory looks at state-space descriptions while classical theory tends to look at Laplace-transform descriptions. What are the consequences of this difference? They are numerous; some are as follows.
First and foremost, the state-variable description of a network or system
emphasizes the internal structure of that system, as well as its input-output
performance. This is in contrast to the Laplace-transform description of
a network or system involving transfer functions and the like, which emphasizes the input-output performance alone. The internal structure of a network must be considered in dealing with a number of important questions. For example, minimizing the number of reactive elements in a network synthesizing a prescribed excitation-response pair is a problem involving examination of the details of the internal structure of the network. Other pertinent problems include minimizing the total resistance of a network, examining
the effect of nonzero initial conditions of energy storage or reactive elements
on the externally observable behavior of the network, and examining the
sensitivity of the externally observable behavior of the circuit to variations
in the component values. It would be quite illogical to attempt to solve all
these problems with the tools of classical network theory (though to be sure
some progress could be made on some of them). It would, on the other hand,
be natural to study these problems with the aid of modern network theory
and the state-variable approach.
Some other important differences between the classical and modern
approaches can be quickly summarized:
1. The classical approach to synthesis usually relies on the application of
ingeniously contrived algorithms to achieve syntheses, with the number
of variations on the basic synthesis structures often being severely
circumscribed. The modern approach to synthesis, on the other hand,
usually relies on solution, without the aid of too many tricks or clever
technical artifices, of a well-motivated and easily formulated problem.
At the same time, the modern approach frequently allows the straightforward generation of an infinity of solutions to the synthesis problem.
2. The modern approach to network analysis is ideally suited to implementation on a digital computer. Time-domain integration of state-space differential equations is generally more easily achieved than
operations involving computation of Laplace transforms and inverse
Laplace transforms.
3. The modern approach emphasizes the algebraic content of network
descriptions and the solution of synthesis problems by matrix algebra.
The classical approach is more concerned with using the tools of
complex variable analysis.
The modern approach is not better than the classical approach in every
way. For example, it can be argued that the intuitional pictures provided
by Bode diagrams and pole-zero diagrams in the classical approach tend to
be lost in the modern approach. Some, however, would challenge this argument on the grounds that the modern approach subsumes the classical approach. The modern approach too has yet to live up to all its promise.
Above we listed problems to which the modern approach could logically
be applied; some of these problems have yet to be solved. Accordingly, at
this stage of development of the modern system theory approach to network
analysis and synthesis, we believe the classical and modern approaches are
best seen as being complementary. The fact that this book contains so little
of the classical approach may then be queried; but the answer to this query is provided by the extensive array of books on network analysis and synthesis, e.g., [1-4], which, at least in the case of the synthesis books, are heavily committed to presenting the classical approach, sometimes to the exclusion of the modern approach.
The classical approach to network analysis and synthesis has been developed for many years. Virtually all the analysis problems have been solved,
and a falling off in research on synthesis problems suggests that the majority
of those synthesis problems that are solvable may have been solved. Much
of practical benefit has been forthcoming, but there do remain practical
problems that have yet to succumb. As we noted above, the modern approach
has not yet solved all the problems that it might be expected to solve; particularly is it the case that it has solved few practical problems, although in
isolated cases, as in the case of active synthesis discussed in Chapter 13, there
have been spectacular results. We attribute the present lack of other tangible
results to the fact that relatively little research has been devoted to the
modern approach; compared with modern control theory, modern network
theory is in its infancy or, at best, its early adolescence. We must wait to
see the payoffs that maturity will bring.
1.3
OUTLINE OF THE BOOK
Besides the introductory Part I, the book falls into five parts. Part II is aimed at providing background in two areas, the first being m-port networks and means for describing them, and the second being state-space equations and their relation with transfer-function matrices. In Chapter 2, which discusses m-port networks, we deal with classes of circuit elements, such network properties as passivity, losslessness, and reciprocity, the immittance, hybrid and scattering-matrix descriptions of a network, as well as some important network interconnections. In Chapter 3 we discuss the description of lumped systems by state-space equations, solution of state-space equations, such properties as controllability, observability, and stability, and the relation of state descriptions to transfer-function-matrix descriptions.

Part III, consisting of one long chapter, discusses network analysis via state-space procedures. We discuss three procedures for analysis of passive networks, of increasing degree of complexity and generality, as well as analysis of active networks. This material is presented without significant use of network topology.
Part IV is concerned with translating into state-space terms the notions of passivity and reciprocity. Chapter 5 discusses a basic result of modern system theory, which we term the positive real lemma. It is of fundamental importance in areas such as nonlinear system stability and optimal control, as well as in network theory. Chapter 6 is concerned with developing procedures for solving equations that appear in the positive real lemma. Chapter 7 covers two matters; one is the bounded real lemma, a first cousin to the positive real lemma, and the other is the state-space description of the reciprocity property, first introduced in Chapter 2.
Part V is concerned with passive network synthesis and relies heavily on
the positive real lemma material of Part IV. Chapter 8 introduces the general
approaches to synthesis and disposes of some essential preliminaries.
Chapters 9 and 10 cover impedance synthesis and reciprocal impedance
synthesis, respectively. Chapter 11 deals with scattering-matrix synthesis,
and Chapter 12 with transfer-function synthesis.
Part VI comprises one chapter and deals with active RC synthesis, i.e.,
synthesis using active elements, resistors, and capacitors. As with the earlier
part of the book, state-space methods alone are considered.
REFERENCES
[1] H. J. CARLIN and A. B. GIORDANO, Network Theory, Prentice-Hall, Englewood Cliffs, N.J., 1964.
[2] R. W. NEWCOMB, Linear Multiport Synthesis, McGraw-Hill, New York, 1966.
[3] L. A. WEINBERG, Network Analysis and Synthesis, McGraw-Hill, New York, 1962.
[4] M. R. WOHLERS, Lumped and Distributed Passive Networks, Academic Press, New York, 1969.
Part II
BACKGROUND INFORMATION--NETWORKS AND
STATE-SPACE EQUATIONS
In this part our main concern is to lay groundwork for the real
meat of the book, which occurs in later parts. In particular, we
introduce the notion of multiport networks, and we review the
notion of state-space equations and their connection with
transfer-function matrices. Almost certainly, the reader will have
had exposure to many of the concepts touched upon in this
part, and, accordingly, the material is presented in a reasonably
terse fashion.
2 m-Port Networks and their Port Descriptions
2.1 INTRODUCTION
The main aim of this chapter is to define what is meant by a
network and especially to define the subclass of networks that we shall be
interested in from the viewpoint of synthesis. This requires us to list the types of permitted circuit elements that can appear in the networks of
interest, and to note the existence of various network descriptions, principally
port descriptions by transfer-function matrices.
We shall define the important notions of passivity, losslessness, and reciprocity for circuit elements and for networks. At the same time, we shall
relate these notions to properties of port descriptions of networks.
Section 2.2 of the chapter is devoted to giving basic definitions of such
notions as m-port networks, circuit elements, sources, passivity, and losslessness. An axiomatic introduction to these concepts may be found in [1].
Section 2.3 states Tellegen's theorem, which proves a useful device in interpreting what effect constraints on circuit elements (such as passivity and,
later, reciprocity) have on the behavior of the network as observed at the
ports of the network. This material can be found in various texts, e.g., [2, 3].
In Section 2.4 we consider various port descriptions of networks using
transfer-function matrices, and we exhibit various interrelations among
them where applicable. Next, some commonly encountered methods of
connecting networks to form a more complex network are discussed in Section 2.5. In Sections 2.6 and 2.7 we introduce the definitions of the
bounded real and positive real properties of transfer-function matrices, and
we relate these properties to port descriptions of networks. Specializations of
the bounded real and positive real properties are discussed, viz., the lossless bounded real and lossless positive real properties, and the subclass of networks whose transfer-function matrices have these properties is stated. In Section 2.8
we define the notion of reciprocity and consider the property possessed by
port descriptions of a network when the network is composed entirely of
reciprocal circuit elements. Much of the material from Section 2.4 on will be found in one of [1-3].
We might summarize what we plan to do in this chapter in two statements:
1. We shall offer various port descriptions of a network as alternatives to
a network description consisting of a list of elements and a scheme for
interconnecting the elements.
2. We shall translate constraints on individual components of a network
into constraints on the port descriptions of the network.
2.2 m-PORT NETWORKS AND CIRCUIT ELEMENTS
Multiport Networks
An m-port network is a physical device consisting of a collection
of circuit elements or components that are connected according to some
scheme. Associated with the m-port network are access points called terminals, which are paired to form ports. At each port of the network it is
possible to connect other circuit elements, the port of another network, or
some kind of exciting device, itself possessing two terminals. In general, there will be a voltage across each terminal pair, and a current leaving one terminal of the pair making up a port must equal the current entering the other terminal of the pair. Thus if the jth port is defined by terminals T_j and T'_j, then for each j, two variables, v_j and i_j, may be assigned to represent the voltage of T_j with respect to T'_j and the current entering T_j and leaving T'_j, respectively, as illustrated in Fig. 2.2.1a. A simplified representation is shown in Fig. 2.2.1b. Thus associated with the m-port network are two vector functions of time, the (vector) port voltage v = [v_1 v_2 ... v_m]' and the (vector) port current i = [i_1 i_2 ... i_m]'. The physical structure of the m port will generally constrain, often in a very general way, the two vector variables v and i, and conversely the constraints on v and i serve to completely describe the externally observable behavior of the m port.
In following through the above remarks, the reader is asked to note two
important points:
1. Given a network with a list of terminals rather than ports, it is not permissible to combine pairs of terminals and call the pairs ports unless under all operating conditions the current entering one terminal of the pair equals that leaving the other terminal.
2. Given a network with a list of terminals rather than ports, if one properly constructs ports by selecting terminal pairs and appropriately constraining the excitations, it is quite permissible to include one terminal in two port pairs (see Fig. 2.2.1c for an example of a three-terminal network redrawn as a two-port network).

FIGURE 2.2.1. Networks with Associated Ports.
Elements of a Network
Circuit elements of interest here are listed in Fig. 2.2.2, where
their precise definitions are given. (An additional circuit element, a generalization of the two-port transformer, will be defined shortly.) Interconnections
of these elements provide the networks to which we shall devote most attention. Note that each element may also be viewed as a simple network in its
own right. For example, the simple one-port network of Fig. 2.2.2a, the resistor, is described by a voltage v, a current i, and the relation v = ri, with i or v arbitrarily chosen.
FIGURE 2.2.2. Representations and Definitions of Basic Circuit Elements.

The only possibly unfamiliar element in the list of circuit elements is the
gyrator. At audio frequencies the gyrator is very much an idealized sort of
component, since it may only be constructed from resistor, capacitor, and
transistor or other active components, in the sense that a two-port network
using these types of elements may be built having a performance very close
to that of the ideal component of Fig. 2.2.2d. In contrast, at microwave frequencies gyrators can be constructed that do not involve the use of active
elements (and thus of an external power supply) for their operation. (They do
however require a permanent magnetic field.)
Passive Circuit Elements
In the sequel we shall be concerned almost exclusively with circuit
elements with constant element values, and, in the case of resistor, inductor,
and capacitor components, with nonnegative element values. The class of all such elements (including transformer and gyrator elements) will be termed linear, lumped, time invariant, and passive. The term linear arises from the fact that the port variables are constrained by a linear relation; the word lumped arises because the port variables are constrained either via a memoryless transformation or an ordinary differential equation (as opposed to a partial differential equation or an ordinary differential equation with delay); the term time invariant arises because the element values are constant. The reason for use of the term passive is not quite so transparent. Passivity of a component is defined as follows:
Suppose that a component contains no stored energy at some arbitrary time t_0. Then the total energy delivered to the component from any generating source connected to the component, computed over any time interval [t_0, T], is always nonnegative.

Since the instantaneous power flow into a terminal pair at time t is v(t)i(t), sign conventions being as in Fig. 2.2.1, passivity requires for the one-port components shown in Fig. 2.2.2 that

    ∫_{t_0}^{T} v(t)i(t) dt ≥ 0        (2.2.1)

for all initial times t_0, all T ≥ t_0, and all possible voltage-current pairs satisfying the constraints imposed by the component. It is easy to check (and checking is requested in the problems) that with real constant r, l, and c, (2.2.1) is fulfilled if and only if r, l, and c are nonnegative.
For a multiport circuit element, i.e., the two-port transformer and gyrator already defined or the multiport transformer yet to be defined, (2.2.1) is replaced by

    ∫_{t_0}^{T} v'(t)i(t) dt ≥ 0        (2.2.2)

Notice that v'(t)i(t), being Σ_j v_j(t)i_j(t), represents the instantaneous power flow into the element, being computed by summing the power flows at each port.

For both the transformer and gyrator, the relation v'i = v_1 i_1 + v_2 i_2 = 0 holds for all t and all possible excitations. This means that it is never possible for there to be a net flow of power into a transformer and gyrator, and, therefore, that it is never possible for there to be a nonzero value of stored energy. Hence (2.2.2) is satisfied with the delivered energy E(T) identically zero.
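For the gyrator the zero-power property follows directly from the element definition. Assuming the defining equations of Fig. 2.2.2d take the usual form v_1 = -ri_2, v_2 = ri_1 (the sign convention is an assumption here, since the figure is not reproduced),

    v'i = v_1 i_1 + v_2 i_2 = (-ri_2)i_1 + (ri_1)i_2 = 0

for all t and all excitations; the two-port transformer relations yield the same conclusion.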
Lossless Circuit Elements
A concept related to that of passivity is losslessness. Roughly speaking, a circuit element is lossless if it is passive and if, when a finite amount of energy is put into the element, all the energy can be extracted again. More precisely, losslessness requires passivity and, assuming zero excitation at time t_0,

    ∫_{t_0}^{∞} v(t)i(t) dt = 0        (2.2.3)

or, for a multiport circuit element,

    ∫_{t_0}^{∞} v'(t)i(t) dt = 0        (2.2.4)

for all compatible pairs v(.) and i(.), which are also square integrable; i.e.,

    ∫_{t_0}^{∞} (v'v + i'i) dt < ∞

The gyrator and transformer are lossless for the reason that E(T) = 0 for all T, as noted above, independently of the excitation. The capacitor and inductor are also lossless--a proof is called for in the problems.
In constructing our m-port networks, we shall restrict the number of elements to being finite. An m-port network comprised of a finite number of linear, lumped, time-invariant, and passive components will be called a finite, linear, lumped, time-invariant, passive m port; unless otherwise stated, we shall simply call such a network an m port. An m port containing only lossless components will be called a lossless m port.
The Multiport Transformer
A generalization is now presented of the ideal two-port transformer, viz., the ideal multiport transformer, introduced by Belevitch [4]. It will be seen later in our discussion of synthesis procedures that the multiport transformer indeed plays a major role in almost all synthesis methods that we shall discuss.
Consider the (p + q)-port transformer N shown in Fig. 2.2.3. Figure 2.2.3a shows a convenient symbolic representation that can frequently replace the more detailed arrangement of Fig. 2.2.3b. In this figure are depicted secondary currents and voltages (i_2)_1, ..., (i_2)_q and (v_2)_1, ..., (v_2)_q at the q secondary ports, primary currents (i_1)_1, ..., (i_1)_p at the p primary ports, and one of the primary voltages (v_1)_1. (The definition of the other primary voltages is clear from the figure.)

FIGURE 2.2.3. The Multiport Ideal Transformer.

The symbols t_{ji} denote turns ratios, and the figure is meant to depict the relations

    (v_1)_i = Σ_{j=1}^{q} t_{ji}(v_2)_j        i = 1, 2, ..., p

and

    (i_2)_j = -Σ_{i=1}^{p} t_{ji}(i_1)_i        j = 1, 2, ..., q

or, in matrix notation,

    v_1 = T'v_2        i_2 = -Ti_1        (2.2.5)

where T = [t_{ji}] is the q × p turns-ratio matrix, and v_1, v_2, i_1, and i_2 are vectors representing, respectively, the primary voltage, secondary voltage, primary current, and secondary current. The ideal multiport transformer is a lossless circuit element (see Problem 2.2.1), and, as is reasonable, is the same as the two-port transformer already defined if p = q = 1.
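The losslessness claim just made (and requested in Problem 2.2.1) is easy to check numerically. The following sketch is ours, not the book's; any turns-ratio matrix T and any choice of the free port variables will do:

    import numpy as np

    rng = np.random.default_rng(0)
    p, q = 3, 2
    T = rng.standard_normal((q, p))     # q x p turns-ratio matrix

    # Free variables: secondary voltage v2 and primary current i1.
    v2 = rng.standard_normal(q)
    i1 = rng.standard_normal(p)

    # Port constraints (2.2.5): v1 = T' v2 and i2 = -T i1.
    v1 = T.T @ v2
    i2 = -T @ i1

    # Instantaneous power into all p + q ports is identically zero.
    power = v1 @ i1 + v2 @ i2
    print(abs(power) < 1e-12)           # True

Since the power flow is zero at every instant, the delivered energy is zero over every interval, which is exactly the lossless property.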
Generating Sources
Frequently, at each port of an m-port network a voltage source or a current source is connected. These are shown symbolically in Fig. 2.2.4.

FIGURE 2.2.4. Independent Voltage and Current Sources.

The voltage source has the characteristic that the voltage across its terminals takes on a specified value or is a specified function of time independent of the current flowing through the source; similarly with a current source, the current entering and leaving the source is fixed as a specified value or specified function of time for all voltages across the source. In the sense that the terminal voltage for a voltage source and the current entering and leaving the terminal pair for a current source are invariant and are independent of the rest of the network, these sources are termed independent sources.

If, for example, an independent voltage source V is connected to a terminal pair T_i and T'_i of an m port, then the voltage v_i across the ith port is constrained to be V or -V depending on the polarity of the source. The two situations are depicted in Fig. 2.2.5.
FIGURE 2.2.5. Connection of an Independent Voltage Source.
In contrast, there exists another class of sources, dependent or controlled voltage and current sources, the values of voltage or current of which depend on one or more of the variables of the network under consideration. Dependent sources are generally found in active networks; in fact, dependent sources are the basic elements of active device modeling. A commonly found example is the hybrid-pi model of a transistor in the common-emitter configuration, as shown in Fig. 2.2.6. The controlled current source is made to depend on V_{B'E}, the voltage across terminals B' and E.

FIGURE 2.2.6. The Common-Emitter Hybrid-pi Model of a Transistor.

Basically, we may consider four types of controlled sources: a voltage-controlled voltage source,
a current-controlled voltage source, a voltage-controlled current source, and
a current-controlled current source.
In the usual problems we consider, we hope that if a source is connected to a network, there will result voltages and currents in the network and at its ports that are well-defined functions of time. This may not always be possible. For example, if a one-port network consists of simply a short circuit, connection of an arbitrary voltage source at the network's port will not result in a well-defined current. Again, if a one-port network consists of simply a 1-henry (H) inductor, connection of a current source with the current a nondifferentiable function of time will not lead to a well-behaved voltage. Yet another example is provided by connecting at time t_0 = -∞ a constant current source to a 1-farad (F) capacitor; at any finite time the voltage will be infinite. The first sort of difficulty is so basic that one cannot avoid it unless application of one class of sources is simply disallowed. The second difficulty can be avoided by assuming that all excitations are suitably smooth, while the third sort of difficulty can be avoided by assuming that all excitations are first applied at some finite time t_0; i.e., prior to t_0 all excitations are zero. We shall make no further explicit mention of these last two assumptions unless required by special circumstances; they will be assumed to apply at all times unless statements to the contrary are made.
Problem 2.2.1  Show that the linear time-invariant resistor, capacitor, and inductor are passive if the element values are nonnegative, and that the capacitor, inductor, and multiport transformer are lossless.
Problem 2.2.2  A time-variable capacitor c(.) constrains its voltage and current by i(t) = (d/dt)[c(t)v(t)]. What are necessary and sufficient conditions on c(.) for the capacitor to be (1) passive, and (2) lossless?
Problem 2.2.3  Establish the equivalents given in Fig. 2.2.7.
FIGURE 2.2.7. Some Equivalent Networks.
Problem 2.2.4  Devise a simple network employing a controlled source that simulates a negative resistance (one whose element value is negative).
Problem 2.2.5  Figure 2.2.8a shows a one-port device called the nullator, while Fig. 2.2.8b depicts the norator. Show that the nullator has simultaneously zero current and zero voltage at its terminals, and that the norator can have arbitrary and independent current and voltage at its terminals.

FIGURE 2.2.8. (a) The Nullator; and (b) the Norator.

2.3 TELLEGEN'S THEOREM--PASSIVE AND LOSSLESS NETWORKS
Passivity and Losslessness of m ports
The definitions of passivity and losslessness given in the last
section were applied to circuit elements only; in this section we shall note
the application of these definitions to m-port networks. We shall also introduce an important theorem that will permit us to connect circuit element
properties, including passivity, with network properties.
The definition of passivity for an m-port network is straightforward. An m-port network, assumed to be storing no energy* at time t_0, is said to be passive if

    ∫_{t_0}^{T} v'(t)i(t) dt ≥ 0        (2.3.1)

for all t_0, T, and all port voltage vectors v(.) and current vectors i(.) satisfying constraints imposed by the network.
An m port is said to be active if it is not passive, while the losslessness
property is defined as follows.
An m-port network, assumed to be storing no energy at time t_0, is said to be lossless if it is passive and if

    ∫_{t_0}^{∞} v'(t)i(t) dt = 0        (2.3.2)

for all t_0 and all port voltage vectors v(.) and current vectors i(.) satisfying constraints imposed by the network, together with

    ∫_{t_0}^{∞} (v'v + i'i) dt < ∞

*A network is said to be storing no energy at time t_0 if none of its elements is storing energy at time t_0.
Intuitively, we would feel that a network all of whose circuit elements are
passive should itself be passive; likewise, a network all of whose circuit
elements are lossless should also be lossless. These intuitive feelings are
certainly correct, and as our tool to verify them we shall make use of Tellegen's theorem.
Tellegen's Theorem
The theorem is one of the most general results of all circuit theory, as it applies to virtually any kind of lumped network. Linearity, time invariance, and passivity are not required of the elements; there is, though, a requirement that the number of elements be finite. Roughly speaking, the reason the theorem is valid for such a large class of networks is that it depends on the validity of two laws applicable to all such networks, viz., Kirchhoff's current law and Kirchhoff's voltage law. We shall first state the theorem, then make brief comments concerning the theorem statement, and finally we shall prove the theorem. The statement of the theorem requires an understanding of the concepts of a graph of a network, and the branches, nodes, and loops of a graph. Kirchhoff's current law requires that the sum of the currents in all branches incident on any one node, with positive direction of current entering the node, be zero. Kirchhoff's voltage law requires that the sum of the voltages in all branches forming a loop, with consistent definition of signs, be zero.
Tellegen's Theorem. Suppose that N is a lumped finite network with b branches and n nodes. For the kth branch of the graph, suppose that v_k is the branch voltage under one set of operating conditions* at any one instant of time, and i_k the branch current under any other set of operating conditions at any other instant of time, with the sign convention for v_k and i_k being as shown in Fig. 2.3.1. Assuming the validity of the two Kirchhoff laws, then

    Σ_{k=1}^{b} v_k i_k = 0        (2.3.3)

FIGURE 2.3.1. Reference Directions for Tellegen's Theorem.

*By the term set of operating conditions we mean the set of voltages and currents in the various circuit elements of the network, arising from certain source voltages and currents and certain initial stored energies in the energy storage elements. Different operating conditions will result from different choices of these quantities. In the main theorem statement it is not assumed that element values can be varied, though a problem does extend the theorem to this case.
We ask the reader to note carefully the following points.
1. The theorem assumes that the network N is representable by a graph, with branches and nodes. Certainly, if N contains only inductor, resistor, and capacitor elements, this is so. But it remains so if N contains transformers or gyrators; Fig. 2.3.2 shows, for example, how a transformer can be drawn as a network containing "simple" branches.

FIGURE 2.3.2. Redrawing of a Transformer.

2. N can contain sources, controlled or independent. These are treated just like any other element.
3. Most importantly, v_k and i_k are not necessarily determined under the same set of operating conditions--though they may be.
Proof. We can suppose, without loss of generality, that between
every pair of nodes of the graph of N there is one and only one
branch. For if there is no branch, we can introduce one on the
understanding that under all operating conditions there will be
zero current through it, and if there is more than one branch,
we can replace this by a single branch with the current under
any operating condition equal to the sum of the currents through
the separate branches.
Denote the node voltage of the αth node by V_α, and the current from node α to node β by I_{αβ}. If the kth branch connects node α to node β, then

    v_k = V_α - V_β

relates the branch voltage with the node voltages under one set of operating conditions, and

    i_k = I_{αβ}

relates the branch current to the current flowing from node α to node β under some other operating conditions (note that these equations are consistent with the sign convention of Fig. 2.3.1). It follows that

    v_k i_k = (V_α - V_β)I_{αβ}

Note that for every node pair α, β there will be one and only one branch k such that the above equation holds and, conversely, for every branch k there will be one and only one node pair α, β such that the equation holds. This means that if we sum the left side over all branches, we should sum the right side over all possible node pairs; i.e.,

    Σ_{k=1}^{b} v_k i_k = Σ_{α,β} (V_α - V_β)I_{αβ}

(Why is the right-hand side not twice the quantity shown?) Now we have

    Σ_{α,β} (V_α - V_β)I_{αβ} = Σ_α V_α (Σ_β I_{αβ}) - Σ_β V_β (Σ_α I_{αβ})

For fixed α, Σ_β I_{αβ} is the sum of all currents leaving node α. It is therefore zero. Likewise, Σ_α I_{αβ} = 0. Thus

    Σ_{k=1}^{b} v_k i_k = 0        V V V
*The symbol V V V will denote the end of the proof of a theorem or lemma.
The reader may wonder where the Kirchhoff voltage law was used in the above proof; actually, there was a use, albeit a very minor one. In setting v_k = V_α - V_β, where branch k connects nodes α and β, we were making use of a very elementary form of the law.
Example 2.3.1  Just to convince the reader that the theorem really does work, we shall consider a very simple example, illustrated in Fig. 2.3.3. A circuit is shown under two conditions of excitation in Fig. 2.3.3a and b, with its branches and reference directions for current in Fig. 2.3.3c. The reference directions for voltage are automatically determined.

FIGURE 2.3.3. Example of Tellegen's Theorem.

For the arrangement of Fig. 2.3.3a and b,

    V_1 = 30        I_1 = -40        V_1 I_1 = -1200
    V_2 = 10        I_2 = 20         V_2 I_2 = 200
    V_3 = 20        I_3 = 20         V_3 I_3 = 400
    V_4 = 30        I_4 = 20         V_4 I_4 = 600

so that Σ_k V_k I_k = 0.
Passive Networks and Passive Circuit Elements

Tellegen's theorem yields a very simple proof of the fact that a finite number of passive elements when interconnected yields a passive network. (Recall that the term passive has been used hitherto in two independent contexts; we can now justify this double use.) Consider a network N_2 comprising an m port N_1 together with m sources, one at each port (see Fig. 2.3.4).
Whether the sources are current or voltage sources is irrelevant. We shall argue that if the elements of N_1 are passive, then N_1 itself is passive. Let us number the branches of N_2 from 1 to k, with the sources being numbered as branches 1 through m. Voltage and current reference directions are chosen for the sources as shown in Fig. 2.3.4, and for the remaining branch elements of N_2, i.e., for the branch elements of N_1, in accordance with the usual convention required for application of Tellegen's theorem. (Note that the sign convention for the source currents and voltages differs from that required for application of Tellegen's theorem, but agrees with that adopted for port voltages and currents.)

FIGURE 2.3.4. Passivity of N_1 is Related to Passivity of the Elements of N_1.
Tellegen's theorem now yields that
    Σ_{j=1}^{m} v_j(t)i_j(t) = Σ_{j=m+1}^{k} v_j(t)i_j(t)

Taking into account the sign conventions, we see that the left side represents the instantaneous power flow into N_1 at time t through its ports, while v_j(t)i_j(t) for each branch of N_1 represents the instantaneous power flow into that branch. Thus with v the port voltage vector and i the port current vector,

    v'i = Σ_{j=m+1}^{k} v_j(t)i_j(t)

and

    ∫_{t_0}^{T} v'i dt ≥ 0        (2.3.4)

by the passivity of the circuit elements of the m port, provided these elements are all storing no energy at time t_0. Equation (2.3.4) of course holds for all such t_0, all T, and all possible excitations of the m port; it is the equation that defines the passivity of the m port.
In a similar fashion to the above argument, one can argue that an m port consisting entirely of lossless circuit elements is lossless. (Note: The word lossless has hitherto been applied to two different entities, circuit elements and m ports.) There is a minor technical difficulty in the argument, however; this is to show that if the port voltage v and current i are square integrable, the same is true of each branch voltage and current.* We shall not pursue this matter further at this point, but shall accept that the two uses of the word lossless can be satisfactorily tied together.
Problem 2.3.1  Consider an m-port network driven by m independent sinusoidal sources. Describe each branch voltage and current by a phasor, a complex number with amplitude equal to 1/√2 times the amplitude of the sine wave variation of the quantity, and with phase equal to the phase, relative to a reference phase, of the sine wave. Then for branch k, V_k I_k^* represents the complex power flowing into the branch. Show that the sum of the complex powers delivered by the sources to the m port is equal to the sum of the complex powers received by all branches of the network.
Problem 2.3.2  Let N_1 and N_2 be two networks with the same graph. In each network choose reference directions for the voltage and current of each branch in the same way. Let v_{ki} and i_{ki} be the voltage and current in branch k of network i for i = 1, 2, and assume that the two Kirchhoff laws are valid. Show that

    Σ_k v_{k1}i_{k2} = Σ_k v_{k2}i_{k1} = 0

*Actually, the time invariance of the circuit elements is needed to prove this point, and counterexamples can be found if the circuit elements are permitted to be time varying. This means that interconnections of lossless time-varying elements need not be lossless [14].
2.4 IMPEDANCE, ADMITTANCE, HYBRID, AND SCATTERING DESCRIPTIONS OF A NETWORK
Since the m-port networks under consideration are time invariant, they may be described in the frequency domain--a standard technique used in all classical network theory. Thus instead of working in terms of the vector functions of time v(.) and i(.), the Laplace transforms of these quantities, V(s) and I(s), may be used, the elements of V(s) and I(s) being functions of the complex variable s = σ + jω.
Impedance and Admittance Descriptions
Now suppose that for a certain m port it is possible to connect arbitrary current sources at each of the ports and obtain a well-defined voltage response at each port. (This will not always be the case of course--for example, a one port consisting simply of an open circuit does not have the property that an arbitrary current source can be applied to it.) In case connection of arbitrary current sources is possible, we can conceive of the port current vector I(s) as being an independent variable, and the port voltage vector V(s) as a dependent variable. Then network analysis provides procedures for determining an m × m matrix Z(s) = [z_{ij}(s)] that maps I(s) into V(s) through

    V(s) = Z(s)I(s)        (2.4.1)

[Subsequently, we shall investigate procedures using state-space equations for the determination of Z(s).] We call Z(.) the impedance matrix of N, and say that N possesses an impedance description. Conversely, if it is possible to take V(s) as the independent variable, i.e., if there exists a well-defined set of port currents for an arbitrary selection of port voltages, N possesses an admittance description, and there exists an admittance matrix Y(s) relating I(s) and V(s) by

    I(s) = Y(s)V(s)        (2.4.2)

As elementary network analysis shows, and as we shall see subsequently in discussing the analysis of networks via state-space ideas, the impedance and admittance matrices of m ports (if they exist) are m × m matrices of real rational functions of s. If both Y(s) and Z(s) exist, then Y(s)Z(s) = I, as is clear from (2.4.1) and (2.4.2). The term immittance matrix is often used to denote either an impedance or an admittance matrix. Physically, the (i, j) element z_{ij}(s) of Z(s) represents the Laplace transform of the voltage appearing at the ith port when a unit impulse of current is applied at port j, with
all other ports open circuited to make I_k(s) = 0, k ≠ j. A dual interpretation can be made for y_{ij}(s).
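As a simple illustration (ours, not the text's): for a one port consisting of a resistor r in series with an inductor l, V(s) = (r + sl)I(s), so that Z(s) = r + sl, a real rational function of s; whenever r + sl is not identically zero, the admittance Y(s) = (r + sl)^{-1} exists as well, and Y(s)Z(s) = 1 as required.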
Hybrid Descriptions
There are situations when one or both of Z(s) and Y(s) do not exist. This means that one cannot, under these circumstances, choose the excitations to be all independent voltages or all independent currents. For example, an open circuit possesses no impedance matrix, but does possess an admittance matrix--a scalar Y(s) = 0. Conversely, the short circuit possesses no admittance matrix, but does possess an impedance matrix. As shown in Example 2.4.1, a simple two-port transformer possesses neither an impedance matrix nor an admittance matrix. But for any m port, it can be shown [5, 6] that there always exists at least one set of independent excitations, described by U(s), where u_i(s) may be the ith port voltage or the ith port current, such that the corresponding responses are well defined. Of course, it is understood that the network consists of only a finite number of passive, linear, time-invariant resistors, inductors, capacitors, transformers, and gyrators. With r_i(s) denoting the ith port current when u_i(s) denotes the ith port voltage, or conversely, there exists a matrix H(s) = [h_{ij}(s)] such that

    R(s) = H(s)U(s)        (2.4.3)

Such a matrix H(s) is called a hybrid matrix, and its elements are again rational functions in s with real coefficients. It is intuitively clear that a network in general may possess several alternative hybrid matrices; however, any one describes completely the externally observable behavior of the network.
Example 2.4.1  Consider the two-port transformer of Fig. 2.2.2e. By inspection of its defining equations

    v_1 = Tv_2        (2.4.4a)
    i_2 = -Ti_1        (2.4.4b)

we can see immediately that the port currents i_1 and i_2 cannot be chosen independently. Here, according to (2.4.4b), the admissible currents are those for which i_2 = -Ti_1. Consequently, the transformer does not possess an impedance-matrix description. Similarly, by looking at (2.4.4a), we see that it does not possess an admittance-matrix description. These conclusions may alternatively be deduced on rewriting (2.4.4) in the frequency domain as

    V_1(s) = TV_2(s)        I_2(s) = -TI_1(s)        (2.4.5)

from which it is clear that an equation of the form of (2.4.1) or (2.4.2) cannot be obtained.

However, the two-port transformer has a hybrid-matrix description, as may be seen by rearranging (2.4.5) in the form

    [V_1(s)]   [ 0    T] [I_1(s)]
    [I_2(s)] = [-T    0] [V_2(s)]        (2.4.6)
Scattering Descriptions

Another port description of networks commonly used in network theory is the scattering matrix S(s). In the field of microwave systems, scattering matrices actually provide the most natural and convenient way of describing networks, which usually consist of distributed as well as lumped elements.*

To define S(s) for an m port N, consider the augmented m-port network of Fig. 2.4.1, formed from the originally given m port N by adding unit resistors in series with each of its m ports.† We may excite the augmented network with voltage sources designated by e. It is straightforward to deduce from Fig. 2.4.1 that these sources are related to the original port voltage and current of N via

    e = v + i

FIGURE 2.4.1. The Augmented m-Port Network.

We shall suppose for the moment that e(.) can be arbitrary, and any response of interest is well defined. Now the most obvious physical response to the voltage sources [e_j] is the currents flowing through these sources, which are the entries of the port current vector i. But since

    i = ½[(v + i) - (v - i)]

from the mathematical point of view we may as well use v - i instead as the response vector. So for N we can think mathematically of v + i as the excitation vector and v - i as the response vector, with the understanding that physically this excitation may be applied by identifying e in Fig. 2.4.1 with v + i and taking e - 2i as the response. Actually, excitation and response vectors of ½(v + i) and ½(v - i) are more commonly adopted, these two quantities often being termed the incident voltage v^i and reflected voltage v^r. Then, with the following definitions in the frequency domain,

    V^i(s) = ½[V(s) + I(s)]        V^r(s) = ½[V(s) - I(s)]        (2.4.7)

we define the scattering matrix S(s) = [s_{ij}(s)], an m × m matrix of real rational functions of s, such that

    V^r(s) = S(s)V^i(s)        (2.4.8)

The matrix S(s) so defined is sometimes called the normalized scattering matrix, since the augmenting resistances are of unit value.

*A well-known example of a distributed element is a transmission line.
†In this figure the symbol I_m denotes the m × m unit matrix. Whether I denotes a current in the Laplace-transform domain or the unit matrix should be quite clear from the context.
Example 2.4.2  Consider once again the two-port transformer in Fig. 2.2.2e; it is simple to calculate its scattering matrix using (2.4.7) and (2.4.8). The end result is

    S = (1 + T²)^{-1} [T² - 1     2T   ]
                      [  2T    1 - T² ]
The above example also illustrates an important result in network theory
that although an immittance-matrix description may not necessarify exist for
an m-port network, 4 scntfering-matrix description does always exist [6].
( A sketch of the proof will he given shortly.) For this reason, scattering
matrices are often preferred in theoretical work.
Since it is sometimes necessary to convert from S to Y or Z, or vice versa,
interrelations between these matrices are of interest and are summarized in
Table 2.4.1. Of course, Y or Z may not exist, but if they do, the table gives
the correct formula. Problem 2.4.1 asks for the derivation of this table.
Table 2.4.1
SUMMARY OF MATRIX INTERRELATIONS
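As a concrete illustration of these interrelations, the following minimal Python sketch (not from the text; the impedance values and function names are invented for illustration) converts between Z and S under the unit-resistor normalization and confirms that the two formulas invert one another:

```python
import numpy as np

def z_to_s(Z):
    """S = (Z - I)(Z + I)^{-1}; valid when Z + I is invertible."""
    I = np.eye(Z.shape[0])
    return (Z - I) @ np.linalg.inv(Z + I)

def s_to_z(S):
    """Z = (I + S)(I - S)^{-1}; Z exists only when I - S is invertible."""
    I = np.eye(S.shape[0])
    return (I + S) @ np.linalg.inv(I - S)

# A hypothetical 2 x 2 impedance matrix evaluated at s = j1:
s = 1j
Z = np.array([[1 + s, 0.5],
              [0.5, 1 + 1 / s]])
S = z_to_s(Z)
assert np.allclose(s_to_z(S), Z)   # the two conversions invert one another
```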
Problem 2.4.1  Establish, in accordance with the definitions of the impedance, admittance, and scattering matrices, the formulas of Table 2.4.1.

Problem 2.4.2  In defining the normalized scattering matrix S, the augmented network of Fig. 2.4.1 is used; the reference terminations of N are 1-ohm (Ω) resistors connected in series with N. Consider the general situation of Fig. 2.4.2, in which a resistive network of symmetric impedance matrix R is connected in series with N.

FIGURE 2.4.2. An m-Port Network with Resistive Network Termination of Symmetric Impedance Matrix R.

By analogy with (2.4.7) and (2.4.8), define incident and reflected voltages with reference to R, and the scattering matrix with the reference R, S_R. Express S_R in terms of S. Hence show that if R = I, then S_R = S.

Problem 2.4.3  Find the scattering matrix (actually a scalar) of the resistor, inductor, and capacitor.

Problem 2.4.4  Evaluate the scattering matrix of the multiport transformer and check that it is symmetric. Specialize the result to the case in which the turns-ratio matrix T is square and orthogonal, defining the orthogonal transformer.

Problem 2.4.5  Find the impedance, admittance, and scattering matrices of the gyrator. Observe that they are not symmetric.

Problem 2.4.6  This problem relates the scattering matrix to the augmented admittance matrix of a network. Consider the m-port network defined by the dotted lines in Fig. 2.4.1. Show that it has an admittance matrix Y_a (the augmented admittance matrix of N) if and only if N possesses a scattering matrix S, and that Y_a = ½(I − S).
2.5 NETWORK INTERCONNECTIONS
In this section we consider some simple interconnections of
networks. The most frequently encountered forms of interconnection are the
series, parallel, and cascade-load connections.
Series and Parallel Connections
Consider the arrangement of Fig. 2.5.1, in which two m ports N1 and N2 are connected in series; i.e., one terminal of the jth port of N1 is connected to one terminal of the jth port of N2, and the remaining terminals of these two ports are paired as new port terminals.

FIGURE 2.5.1. Series Connection.

Since the port currents of N1 and N2 are constrained to be the same and the port voltages to add, the series connection has an impedance matrix

Z = Z1 + Z2        (2.5.1)

where Z1 and Z2 are the respective impedance matrices of N1 and N2.

It is important in drawing conclusions about the series connection (and other connections) of two networks that, after connection, it must remain true that the same current enters one terminal of each port of N1 or N2 as leaves the other terminal of that port. It is possible for this arrangement to be disturbed. Consider, for example, the arrangement of Fig. 2.5.2, where two two-port networks are series connected, the connections being shown with dotted lines.

FIGURE 2.5.2. Series Connections Illustrating Nonadditivity and Additivity of Impedance Matrices.

After connection a two port is obtained with terminal pairs 1-4' and 2-3'. Suppose that the second pair is left open circuit, but a voltage is applied at the pair 1-4'. It is easy to see that there will be no current entering at terminal 2, but there will be current leaving the top network at terminal 2'. Therefore, the impedance matrix of the interconnected network will not be the sum of the separate impedance matrices. However, introduction of one-to-one turns-ratio transformers, as in Fig. 2.5.2b, will always guarantee that impedance matrices can be added. In our subsequent work we shall assume, without further comment on the fact, that the impedance matrix of any series connection can be derived according to (2.5.1),
with, if necessary, the validity of the equation being guaranteed by the use
of transformers.
The parallel connection of N1 and N2, shown in Fig. 2.5.3, has the corresponding terminal pairs of N1 and N2 joined together to form a set of common terminal pairs, which constitute the ports of the resulting m port.

FIGURE 2.5.3. Parallel Connection.

Here, since the currents of N1 and N2 add and the voltages are the same, the parallel connection has an admittance matrix

Y = Y1 + Y2        (2.5.2)

where Y1 and Y2 are the respective admittance matrices of N1 and N2. Similar cautions apply in writing (2.5.2) as apply in writing (2.5.1).

Cascade-Load Connection

Now consider the arrangement of Fig. 2.5.4, where an n port N_l cascade loads an (m + n) port N_0 to produce an m-port network N.

FIGURE 2.5.4. Cascade-Load Connection.

If N_l and N_0 possess impedance matrices Z_l and Z_0, the latter being partitioned like the ports of N_0 as

Z_0 = [ Z_11   Z_12 ]
      [ Z_21   Z_22 ]

where Z_11 is m × m and Z_22 is n × n, then simple algebraic manipulation shows that N possesses an impedance matrix Z given by

Z = Z_11 − Z_12(Z_22 + Z_l)^{-1}Z_21        (2.5.3)

assuming the inverse exists. (Problem 2.5.1 requests verification of this result.) Likewise, if N_l and N_0 are described by admittance matrices Y_l and Y_0, and an analogous partition of Y_0 defines submatrices Y_ij, it can be checked that N possesses an admittance matrix

Y = Y_11 − Y_12(Y_22 + Y_l)^{-1}Y_21        (2.5.4)

Again, we must assume the inverse exists.
For scattering-matrix descriptions of N, the situation is a little more tricky. Suppose that the scattering matrices for N_l and N_0 are, respectively, S_l and S_0, with the latter partitioned as

S_0 = [ S_11   S_12 ]
      [ S_21   S_22 ]

where S_11 is m × m and S_22 is n × n. Then the scattering matrix S of the cascade-loading interconnection N is given by (see Problem 2.5.1)

S = S_11 + S_12 S_l (I − S_22 S_l)^{-1} S_21        (2.5.5)

or equivalently

S = S_11 + S_12 (I − S_l S_22)^{-1} S_l S_21

In (2.5.3), (2.5.4), and (2.5.5) it is possible for the inverses not to exist, in which case the formulas are not valid. However, it is shown in [6] that if N_l and N_0 are passive, the resultant cascade-connected network N always possesses a scattering matrix S. The scattering matrix is given by (2.5.5), with the inverse replaced by a pseudo inverse if necessary.
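The impedance and scattering routes to the cascade-load description must agree whenever both exist. The following Python sketch (not from the text; the matrices are randomly generated and the helper functions repeat the Table 2.4.1 conversions) checks (2.5.3) against (2.5.5) numerically at a single frequency:

```python
import numpy as np

def z_to_s(Z):
    I = np.eye(Z.shape[0])
    return (Z - I) @ np.linalg.inv(Z + I)   # Table 2.4.1 conversion

def s_to_z(S):
    I = np.eye(S.shape[0])
    return (I + S) @ np.linalg.inv(I - S)

rng = np.random.default_rng(1)
m, n = 2, 3
Z0 = rng.standard_normal((m + n, m + n))   # hypothetical (m+n)-port impedance
Zl = rng.standard_normal((n, n))           # hypothetical n-port load impedance

# Impedance route, Eq. (2.5.3):
Z11, Z12, Z21, Z22 = Z0[:m, :m], Z0[:m, m:], Z0[m:, :m], Z0[m:, m:]
Z = Z11 - Z12 @ np.linalg.inv(Z22 + Zl) @ Z21

# Scattering route, Eq. (2.5.5):
S0, Sl = z_to_s(Z0), z_to_s(Zl)
S11, S12, S21, S22 = S0[:m, :m], S0[:m, m:], S0[m:, :m], S0[m:, m:]
S = S11 + S12 @ Sl @ np.linalg.inv(np.eye(n) - S22 @ Sl) @ S21

assert np.allclose(s_to_z(S), Z)   # both routes describe the same m port
```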
An important situation in which (2.5.3) and (2.5.4) break down arises when N_0 consists of a multiport transformer. Here N_0 possesses neither an impedance nor an admittance matrix. Consider the arrangement of Fig. 2.5.5a, and suppose that N_l possesses an impedance matrix Z_l. Let v_l and i_l denote the port voltage and current of N_l, and v and i those of N.

FIGURE 2.5.5. Cascade-Loaded Transformer Arrangements.

Application of the multiport-transformer definition with due regard to sign conventions yields V(s) = T'V_l(s) and I_l(s) = TI(s); since V_l(s) = Z_l(s)I_l(s), it follows that V(s) = T'Z_l(s)TI(s) or

Z = T'Z_l T        (2.5.6)

The dual case for admittances has the transformer turned around (see Fig. 2.5.5b), and the equation relating Y and Y_l is

Y = T'Y_l T        (2.5.7)

We note also another important result for the arrangement of Fig. 2.5.5a, the development of which is called for in the problems. If T is orthogonal, i.e., TT' = T'T = I, and if N_l possesses a scattering matrix S_l, then N possesses a scattering matrix S given by

S = T'S_l T        (2.5.8)
Finally, we consider one important special case of (2.5.5), in which N_l consists of uncoupled 1-Ω resistors, as in Fig. 2.5.6. In this case, one computes S_l = 0, and then Eq. (2.5.5) simplifies to

S = S_11        (2.5.9)

FIGURE 2.5.6. Special Cascade-Load Connection.

Thus the termination of ports in unit resistors blocks out corresponding rows and columns of the scattering matrix.
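This collapse is easy to confirm numerically: a 1-Ω resistor satisfies v = i, so its reflected voltage ½(v − i) vanishes and S_l = 0. A small sketch (the partitioned S_0 is hypothetical, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 2
S0 = rng.standard_normal((m + n, m + n))   # hypothetical partitioned S0
S11, S12, S21, S22 = S0[:m, :m], S0[:m, m:], S0[m:, :m], S0[m:, m:]

Sl = np.zeros((n, n))                      # n uncoupled unit resistors
S = S11 + S12 @ Sl @ np.linalg.inv(np.eye(n) - S22 @ Sl) @ S21
assert np.allclose(S, S11)                 # Eq. (2.5.5) collapses to S11
```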
The three types of interconnections, namely the series, parallel, and cascade-load connections, are equally important in synthesis, since, as we shall see, one of the basic principles in synthesis is to build up a complicated structure or network from simple structures. It is perhaps interesting to note that the cascade-load connection is the most general form of all, in that the series and parallel connections may be regarded as special cases of the cascade-load connection. This result is illustrated in Fig. 2.5.7.
Scattering-Matrix Existence
More generally, it is clear that any finite linear time-invariant m-port network N composed of resistors, capacitors, inductors, transformers, and gyrators can be regarded as the cascade connection of a network N_c, composed only of connecting wires, by themselves constituting opens and shorts, terminated in a network N_e consisting of the circuit elements of N uncoupled from one another. The situation is shown in Fig. 2.5.8.

Now we have already quoted the result of [6], which guarantees that if N_c and N_e are passive and individually possess scattering matrices, then N possesses a scattering matrix, computable in accordance with (2.5.5) or a minor modification thereof. Accordingly, if we can show that N_c and N_e possess scattering matrices, it will follow, as claimed earlier, that every m port (with the usual restrictions, such as linearity) possesses a scattering matrix.

Let us therefore now note why N_c and N_e possess scattering matrices. First, it is straightforward to see that each of the network elements (the resistor, inductor, capacitor, gyrator, and two-port transformer) possesses a scattering matrix. In the case of a multiport transformer with a turns-ratio matrix T, its scattering matrix also exists and is given by

S = [ (I + T'T)^{-1}(T'T − I)      2(I + T'T)^{-1}T'       ]
    [ 2T(I + T'T)^{-1}             (I + TT')^{-1}(I − TT') ]        (2.5.10)
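As a numerical sanity check of (2.5.10) (a sketch, not from the text; the turns-ratio matrix is randomly generated), one can verify that this S is both symmetric and orthogonal, reflecting the reciprocity and losslessness of the multiport transformer:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 2, 3
T = rng.standard_normal((q, p))            # hypothetical turns-ratio matrix
Ip, Iq = np.eye(p), np.eye(q)

Minv = np.linalg.inv(Ip + T.T @ T)
S = np.block([
    [Minv @ (T.T @ T - Ip),                2 * Minv @ T.T],
    [2 * T @ Minv, np.linalg.inv(Iq + T @ T.T) @ (Iq - T @ T.T)],
])

assert np.allclose(S, S.T)                     # reciprocal: S symmetric
assert np.allclose(S.T @ S, np.eye(p + q))     # lossless: S'S = I
```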
FIGURE 2.5.7. Cascade-Load Equivalents for: (a) Parallel; (b) Series Connection.
(Note that S in (2.5.10) reduces to that of (2.4.9) for the two-port transformer when T is a 1 × 1 matrix.) The scattering matrix of the network N_e is now seen to exist. It is simply the direct sum* in the matrix sense of a number of scattering matrices for resistors, inductors, capacitors, gyrators, and transformers.

*The direct sum of A and B, denoted by [A ∔ B], is

[ A   0 ]
[ 0   B ]

FIGURE 2.5.8. Cascade-Loading Representation for N: a network of opens and shorts, cascade loaded by the uncoupled resistors, inductors, capacitors, transformers, and gyrators of N.

The network N_c, composed entirely of opens and shorts, is evidently linear, time invariant, passive, and, in fact, lossless. We now outline in rather technical terms an argument for the existence of S_c. The argument can well be omitted by the reader unfamiliar with network topology, who should merely note the main result, viz., that any m port possesses a scattering matrix.

Consider the augmented network corresponding to N_c, as shown in Fig. 2.5.9. The only network elements other than the voltage sources are the m + n unit resistors. With the aid of algorithms detailed in standard textbooks on network topology, e.g., [7], one can form a tree for this augmented
network and select as independent variables a certain set of corresponding link currents, arranged in vector form as I_l. The remaining tree-branch currents, in vector form I_t, are then related to I_l via a relation of the form I_t = BI_l. Notice that I_l and I_t between them include all the port currents, but no other currents, since there are no branches in the circuit of Fig. 2.5.9 other than those associated with the unit resistors. One is free to choose reference directions for the branch-current entries of I_l and I_t, and this is
FIGURE 2.5.9. Augmented Network for N_c of Figure 2.5.8.
done so that they coincide with reference directions for the port currents of N_c. Denoting the corresponding port voltages by V_l and V_t, the losslessness of N_c implies that

V'_t I_t + V'_l I_l = 0

or, using I_t = BI_l,

(V'_l + V'_t B)I_l = 0

Since I_l is arbitrary, it must be true therefore that

V_l = −B'V_t

Comparing the results V_l = −B'V_t and I_t = BI_l with (2.2.5), we see that the constraining relations on the port variables of N_c are identical to those for a multiport transformer with turns-ratio matrix T = −B. Therefore, N_c possesses a scattering matrix. [See Eq. (2.5.10).]
Problem 2.5.1  Consider the arrangement of Fig. 2.5.4.
(a) Let the impedance matrices for N, N_l, and N_0 be, respectively, Z, Z_l, and Z_0, with Z_0 partitioned as in the text (here Z_11 is m × m, etc.). Show that Z is given by (2.5.3).
(b) Establish the corresponding result for scattering matrices.

Problem 2.5.2  Suppose that a multiport transformer with turns-ratio matrix T is terminated at its secondary ports in a network of scattering matrix S_l. Suppose also that T is orthogonal. Show that the scattering matrix of the overall network is T'S_l T.

Problem 2.5.3  Show, by working from the definitions, that if an (m + n) port with scattering matrix

S = [ S_11   S_12 ]
    [ S_21   S_22 ]

(S_11 being m × m, etc.) is terminated at its last n ports in 1-Ω resistors, the scattering matrix of the resulting m-port network is simply S_11.
2.6 THE BOUNDED REAL PROPERTY
We recall that the m-port networks we are considering are finite,
linear, lumped, time invariant, and passive. Sometimes they are lossless also.
The sort of mathematical properties these physical properties of the network
impose on the scattering, impedance, admittance, and hybrid matrices will
now be discussed.
In this section we shall define the bounded real property; then we shall show that the scattering matrix of an m port N is necessarily bounded real.
Next we shall consider modifications of the bounded real property that are
possible when a bounded real matrix is rational. Finally, we shall study the
lossless bounded real property for rational matrices.
In the next section we shall define the positive real property, following
which we show that any immittance or hybrid matrix of an m port is necessarily positive real. Though immittance matrices will probably be more
familiar to the reader, it turns out that it is easier to prove results ab initio
for scattering matrices, and then to deduce from them the results for immittance matrices, than to go the other way round, i.e., than to prove results
first for immittance matrices, and deduce from them results for scattering
matrices.
Bounded real and positive real matrices occur in a number of areas of
system science other than passive network theory. For example, scattering
matrices arise in characterizing sensitivity reduction in control systems and
in inverse linear optimal control problems [8]. Positive real matrices arise in
problems of control-system stability, particularly those tackled via the circle or Popov criteria [9], and they arise also in analyzing covariance properties of stationary random processes [10]. A discussion now of applications would
take us too far afield, but the existence of these applications does provide an
extra motivation for the study of bounded real and positive real matrices.
The Bounded Real Property
We assume that there is given an m × m matrix A(.) of functions of a complex variable s. With no assumption at this stage on the rationality or otherwise of the entries of A(.), the matrix A(.) is termed bounded real if the following conditions are satisfied [1]:

1. All elements of A(.) are analytic in Re [s] > 0.
2. A(s) is real for real positive s.
3. For Re [s] > 0, I − A'*(s)A(s) is nonnegative definite Hermitian, that is, x'*[I − A'*(s)A(s)]x ≥ 0 for all complex m vectors x. We shall write this simply as I − A'*(s)A(s) ≥ 0.
Example 2.6.1  Suppose that A1(s) and A2(s) are two m × m bounded real matrices. We show that A1(s)A2(s) is bounded real. Properties 1 and 2 are clearly true for the product if they are true for each A_i(s). Property 3 is a little more tricky. Since for Re [s] > 0

I − A1'*(s)A1(s) ≥ 0

we have

A2'*(s)[I − A1'*(s)A1(s)]A2(s) ≥ 0

Also, for Re [s] > 0,

I − A2'*(s)A2(s) ≥ 0

Adding the second and third inequalities,

I − A2'*(s)A1'*(s)A1(s)A2(s) ≥ 0

for Re [s] > 0, which establishes property 3 for A1(s)A2(s).
We now wish to show the connection between the bounded real property
and the scattering matrix of an m port. Before doing so, we note the following
easy lemma.
Lemma 2.6.1. Let v and i be the port voltage and current vectors of an m port, and v^i and v^r the incident and reflected voltage vectors. Then

1. v'i = v^i'v^i − v^r'v^r.        (2.6.1)

2. The passivity of the m port is equivalent to

∫_{t0}^{T} (v^i'v^i − v^r'v^r) dt ≥ 0        (2.6.2)

for all T ≥ t0, for all times t0 at which the network is unexcited, and all incident voltages v^i, with v^i, v^r fulfilling the constraints imposed by the network.

3. The losslessness of the m port is equivalent to passivity in the sense just noted, and

∫_{t0}^{∞} (v^i'v^i − v^r'v^r) dt = 0        (2.6.3)

for all square integrable v^i; i.e.,

∫_{t0}^{∞} v^i'v^i dt < ∞

The formal proof will be omitted. Part 1 is a trivial consequence of the definitions of v^i and v^r, part 2 a consequence of the earlier passivity definition in terms of v and i, and part 3 a consequence of the earlier losslessness definition.
Now we can state one of the major results of network theory. The reader may well omit the proof if he desires; the result itself is one of network analysis rather than network synthesis, and its statement is far more important than the details of its proof, which are unpleasantly technical. The reader should also note that the result is a statement for a broader class of m ports than considered under our usual convention; very shortly, we shall revert to our usual convention and indicate the appropriate modifications to the result.

Theorem 2.6.1. Let N be an m port that is linear, time invariant, and passive, but not necessarily lumped or finite. Suppose that N possesses a scattering matrix S(s). Then S(.) is bounded real.
Before presenting a proof of the theorem (which, we warn the reader,
will draw on some technical results of Laplace-transform theory), we make
several remarks.
1. The theorem statement assumes the existence of S(s); it does not give
conditions for existence. As we know though, if N is finite and lumped,
as well as satisfying the conditions of the theorem, S(s) exists.
2. When we say "S(s) exists," we should really qualify this statement by saying for what values of s it exists. The bounded real conditions only require certain properties to be true in Re [s] > 0, and so, actually, we could just require S(s) to exist in Re [s] > 0 rather than for all s.
3. It is the linearity of N that more or less causes there to be a linear operator mapping v^i into v^r, while it is the time invariance of N that permits this operator to be described in the s domain. It is essentially the passivity of N that causes properties 1 and 3 of the bounded real definition to hold; property 2 is a consequence of the fact that real v^i lead to real v^r.
4. In a moment we shall indicate a modification of the theorem applying when N is also lumped and finite. This will give necessary conditions on a matrix function for it to be the scattering matrix of an m port with the conventional properties. One major synthesis task will be to show that these conditions are also sufficient, by starting with a matrix S(s) satisfying the conditions and exhibiting a network with S(s) as its scattering matrix.
Proof. Consider the passivity property (2.6.2), and let T approach infinity. Then if v^i is square integrable, we have

∫_{t0}^{∞} v^r'v^r dt ≤ ∫_{t0}^{∞} v^i'v^i dt < ∞

so that v^r is square integrable. Therefore, S(s) is the Laplace transform of a convolution operator that maps square integrable functions into square integrable functions. This means (see [1, 11]) that S(s) must have every element analytic in Re [s] > 0. This establishes property 1.

Let σ0 be an arbitrary real positive constant, and let v^i be xe^{σ0 t}1(t − t0), where x is a real constant m vector and 1(t) is the unit step function, which is zero for negative t, one for positive t. Notice that e^{σ0 t} will be smaller the more negative is t. As t0 → −∞, v^r(t) will approach S(σ0)xe^{σ0 t}. Because x is arbitrary and v^r(t) is real, S(σ0) must be real. This proves property 2.

Now let s0 = σ0 + jω0 be an arbitrary point in Re [s] > 0. Let x be an arbitrary constant complex m vector, and let v^i be Re [xe^{s0 t}1(t − t0)]. As t0 → −∞, v^r approaches

Re [S(s0)xe^{s0 t}]

and the instantaneous power flow v^i'v^i − v^r'v^r at time t becomes

p(t) = Σ_k |x_k|² e^{2σ0 t} cos²(ω0 t + θ_k) − Σ_k |(S(s0)x)_k|² e^{2σ0 t} cos²(ω0 t + φ_k)

where θ_k = arg x_k and φ_k = arg (S(s0)x)_k. Using the relation cos²α = ½(1 + cos 2α), we have

p(t) = ½x'*[I − S'*(s0)S(s0)]xe^{2σ0 t} + ½ Σ_k |x_k|² e^{2σ0 t} cos (2ω0 t + 2θ_k) − ½ Σ_k |(S(s0)x)_k|² e^{2σ0 t} cos (2ω0 t + 2φ_k)

Integration leads to E(T):

E(T) = ∫_{−∞}^{T} p(t) dt = (e^{2σ0 T}/4σ0) x'*[I − S'*(s0)S(s0)]x + ¼ Re {(1/s0)x'[I − S'(s0)S(s0)]xe^{2s0 T}}

Now if ω0 is nonzero, the angle of (1/s0)x'[I − S'(s0)S(s0)]xe^{2s0 T} takes on all values between 0 and 2π as T varies, since e^{2s0 T} = e^{2σ0 T}e^{2jω0 T}, and the angle of e^{2jω0 T} takes on all values between 0 and 2π as T varies. Therefore, for certain values of T the second term of E(T) is negative. The nonnegativity of E(T) then forces

x'*[I − S'*(s0)S(s0)]x ≥ 0

If ω0 is zero, let x be an arbitrary real constant vector. Then, recalling that S(s0) will be real, we have

E(T) = (e^{2σ0 T}/2σ0) x'[I − S'(s0)S(s0)]x ≥ 0

Therefore, for s0 real or complex in Re [s] > 0, we have

I − S'*(s0)S(s0) ≥ 0

as required. This establishes property 3.        VVV
Once again, we stress that the m ports under consideration in the theorem need not be lumped or finite; equivalently, there is no requirement that S(s) be rational. However, by adding in a constraint that S(s) is rational, we can do somewhat better.
The Rational Bounded Real Property
We define an m × m matrix A(.) of functions of a complex variable s to be rational bounded real, abbreviated simply BR, if

1. It is a matrix of rational functions of s.
2. It is bounded real.
Let us now observe the variations on the bounded real conditions that become possible. First, instead of saying simply that every element of A(.) is analytic in Re [s] > 0, we can clearly say that no element of A(.) possesses a pole in Re [s] > 0. Second, if A(s) is real for real positive s, this means that each entry of A(.) must be a real rational function, i.e., a ratio of two polynomials with real coefficients.

The implications of the third property are the most interesting. Any function in the vicinity of a pole takes unbounded values. Now the inequality I − A'*(s)A(s) ≥ 0 implies that the (i, i) term of I − A'*(s)A(s) is nonnegative, i.e.,

1 − Σ_k |a_ki(s)|² ≥ 0

Therefore, |a_ki(s)| must be bounded by 1 at any point in Re [s] > 0. Consequently, it is impossible for a_ij(s) to have a pole on the imaginary axis Re [s] = 0, for if it did have a pole there, |a_ij(s)| would take on arbitrarily large values in the half-plane Re [s] > 0 in the vicinity of the pole. It follows that

A(jω) = lim_{σ→0, σ>0} A(σ + jω)

exists for all real ω and that

I − A'*(jω)A(jω) ≥ 0        (2.6.5)

for all real ω. What is perhaps most interesting is that, knowing only that (1) every element of A(.) is analytic in Re [s] ≥ 0, and (2) Eq. (2.6.5) holds for all real ω, we can deduce, by an extension of the maximum modulus theorem of complex variable theory, that I − A'*(s)A(s) ≥ 0 for all s in Re [s] > 0. (See [1] and the problems at the end of this section.) In other words, if A(.) is rational and known to have analytic elements in Re [s] ≥ 0, Eq. (2.6.5) carries as much information as

I − A'*(s)A(s) ≥ 0        (2.6.6)

for all s in Re [s] > 0.

We can sum up these facts by saying that an m × m matrix A(.) of real rational functions of a complex variable s is bounded real if and only if (1) no element of A(.) possesses a pole in Re [s] ≥ 0, and (2) I − A'*(jω)A(jω) ≥ 0 for all real ω. If we accept the fact that the scattering matrix of an m port that is finite and lumped must be rational, then Theorem 2.6.1 and the above statement yield the following:
Theorem 2.6.2. Let N be an m port that is linear, time invariant, lumped, finite, and passive. Let S(s) be the scattering matrix of N. Then no element of S(s) possesses a pole in Re [s] ≥ 0, and I − S'*(jω)S(jω) ≥ 0 for all real ω.

In our later synthesis work we shall start with a real rational S(s) satisfying the conditions of the theorem, and provide procedures for generating a network with scattering matrix S(s).
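The two conditions of Theorem 2.6.2 lend themselves to a rough numerical screen, sampling I − S'*(jω)S(jω) on the jω axis. The following Python sketch is illustrative only (the function and its parameters are invented) and is no substitute for the finite Hurwitz and Sturm tests discussed later in this section:

```python
import numpy as np

def br_screen(S_of_s, m, n_samples=400, w_max=1e3, tol=1e-9):
    """Sample I - S'*(jw)S(jw) >= 0 on the jw axis.  Analyticity in
    Re[s] >= 0 must be checked separately (e.g., by a Hurwitz test)."""
    for w in np.concatenate(([0.0], np.logspace(-3, np.log10(w_max), n_samples))):
        S = S_of_s(1j * w)
        eigs = np.linalg.eigvalsh(np.eye(m) - S.conj().T @ S)
        if eigs.min() < -tol:
            return False
    return True

# The scattering matrix of a 1-H inductor, S(s) = (s - 1)/(s + 1), is BR:
assert br_screen(lambda s: np.array([[(s - 1) / (s + 1)]]), m=1)
```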
Example 2.6.2  We shall establish the bounded real nature of a 2 × 2 real rational scattering matrix S(s). Obviously, S(s) is real rational, and no element possesses a pole in Re [s] ≥ 0. It remains to examine I − S'*(jω)S(jω); direct calculation shows that I − S'*(jω)S(jω) = 0 for all real ω.
The Lossless Bounded Real Property
We now want to look at the constraints imposed on the scattering
matrix of a network if that network is known to be lossless. Because it is far
more convenient to do so, we shall confine discussions to real rational
scattering matrices.
In anticipation of the main result, we shall define an m × m real rational matrix A(.) to be lossless bounded real (abbreviated LBR) if

1. A(.) is bounded real.
2. I − A'*(jω)A(jω) = 0 for all real ω.        (2.6.7)

Comparison with the BR conditions studied earlier yields the equivalent conditions that (1) every element of A(.) is analytic in Re [s] ≥ 0, and (2) Eq. (2.6.7) holds.

The main result, hardly surprising in view of the definition, is as follows:

Theorem 2.6.3. Let N be an m port (usual conventions apply) with S(s) the scattering matrix. Then N is lossless if and only if

I − S'*(jω)S(jω) = 0        (2.6.8)

for all real ω.

Proof. Suppose that the network is unexcited at time t0. Then losslessness implies that

∫_{t0}^{∞} (v^i'v^i − v^r'v^r) dt = 0

for all square integrable v^i. Since v^r is then also square integrable, the Fourier transforms V^i(jω) and V^r(jω) exist and the equation becomes, by Parseval's theorem,

(1/2π) ∫_{−∞}^{∞} V^i'*(jω)[I − S'*(jω)S(jω)]V^i(jω) dω = 0

Since v^i is an arbitrary square integrable function, V^i(jω) is sufficiently arbitrary to conclude that

I − S'*(jω)S(jω) = 0

for all real ω. The converse follows by retracing the above steps.        VVV

Example 2.6.3  If S1 and S2 are LBR, then S1S2 is LBR. Clearly the only nontrivial matter to verify is the equality (2.6.8) with S replaced by S1S2:

I − S2'*(jω)S1'*(jω)S1(jω)S2(jω) = I − S2'*(jω)S2(jω)        since S1 is LBR
= 0        since S2 is LBR

Example 2.6.4  The scattering matrix of Example 2.6.2 is LBR, since, as verified in Example 2.6.2, Eq. (2.6.8) holds.

In view of the fact that S(.) is real rational, it follows that S'*(jω) = S'(−jω), and so the condition (2.6.8) becomes

I − S'(−s)S(s) = 0        (2.6.9)

for s = jω, ω real. But any analytic function that is zero on a line must be zero everywhere, and so (2.6.9) holds for all s. We can call this the extended LBR property. It is easy to verify, for example, that the scattering matrix of Example 2.6.4 satisfies (2.6.9).
Testing for the BR and LBR Properties
Examination of the BR conditions shows that testing for the BR property requires two distinct efforts. First, the poles of elements of the candidate matrix A(s) must be examined. To check that they are all in the left half-plane, a Hurwitz test may be used (see, e.g., [12, 13]). Second, the nonnegativity of the matrix I − A'*(jω)A(jω) = I − A'(−jω)A(jω) must be checked. As it turns out, this is equivalent to checking whether certain polynomials have any real zeros, and various tests, e.g., the Sturm test, are available for this [12, 13]. To detail these tests would take us too far afield; suffice it to say that both tests require only a finite number of steps, and do not demand that any polynomials be factored.
Testing for the LBR property is somewhat simpler, in that the zero nature,
rather than the nonnegativity, of a matrix must be established.
Summary
For future reference, we summarize the following sets of conditions.

Bounded Real Property
1. All elements of A(.) are analytic in Re [s] > 0.
2. A(s) is real for real positive s.
3. I − A'*(s)A(s) ≥ 0 for Re [s] > 0.

BR Property
1. A(.) is real rational.
2. All elements of A(.) are analytic in Re [s] ≥ 0.
3. I − A'*(jω)A(jω) ≥ 0 for all real ω.

LBR Property
1. A(.) is real rational.
2. All elements of A(.) are analytic in Re [s] ≥ 0.
3. I − A'*(jω)A(jω) = 0 for all real ω.
4. I − A'(−s)A(s) = 0 for all s (extended LBR property).

We also note the following result, the proof of which is called for in Problem 2.6.3. Suppose A(s) is BR, and that [I ± A(s)]x = 0 for some s in Re [s] > 0 and some constant x. Then [I ± A(s)]x = 0 for all s in Re [s] ≥ 0.

Problem 2.6.1  Prove statement 3 of Lemma 2.6.1, viz., losslessness of an m port is equivalent to passivity together with the condition

∫_{t0}^{∞} (v^i'v^i − v^r'v^r) dt = 0

for all compatible incident and reflected voltage vectors, with the network assumed to be storing no energy at time t0, and with

∫_{t0}^{∞} v^i'v^i dt < ∞

You may need to use the fact that the sum and difference of square integrable quantities are square integrable.

Problem 2.6.2  The maximum modulus theorem states the following. Let f(s) be an analytic function within and on a closed contour C. Let M be the upper bound of |f(s)| on C. Then |f(s)| ≤ M within C, and equality holds at one point if and only if f(s) is a constant. Use this theorem to prove that if all elements of A(s) are analytic in Re [s] ≥ 0, then I − A'*(jω)A(jω) ≥ 0 for all real ω implies that I − A'*(s)A(s) ≥ 0 for all s in Re [s] ≥ 0.

Problem 2.6.3  Suppose that A(s) is rational bounded real. Using the ideas contained in Problem 2.6.2, show that if x'*[I − A'*(s)A(s)]x = 0 for some s in Re [s] > 0, then equality holds for all s in Re [s] ≥ 0.

Problem 2.6.4  (Alternative derivation of Theorems 2.6.2 and 2.6.3.) Verify by direct calculation that the scattering matrices of the passive resistor, inductor, capacitor, transformer, and gyrator are BR, with the last four being LBR. Represent an arbitrary m port as a cascade load, with the loaded network comprising merely opens and shorts, and, as shown in the last section, possessing a scattering matrix like that of an ideal transformer. Assume the validity of Eq. (2.5.5) to prove Theorems 2.6.2 and 2.6.3.

Problem 2.6.5  Establish whether the following function is BR:
2.7 THE POSITIVE REAL PROPERTY
In this section our task is to define three properties: the positive real property, the rational positive real property, and the lossless positive real property. We shall relate these properties to properties of immittance or hybrid matrices of various classes of networks, much as we related the bounded real properties to properties of scattering matrices.

The Positive Real Property

We assume that there is given an m × m matrix B(.) of functions of a complex variable s. With no assumption at this stage on the rationality or otherwise of the entries of B(.), the matrix B(.) is termed positive real if the following conditions are satisfied [1]:

1. All elements of B(.) are analytic in Re [s] > 0.
2. B(s) is real for real positive s.
3. B'*(s) + B(s) ≥ 0 for Re [s] > 0.

Example 2.7.1  Suppose that A(s) is bounded real and that [I − A(s)]^{-1} exists for all s in Re [s] > 0. Define

B(s) = [I + A(s)][I − A(s)]^{-1}

Then B(s) is positive real. To verify condition 1, observe that an element of B(s) will have a right-half-plane singularity only if A(s) has a right-half-plane singularity, or [I − A(s)] is singular in Re [s] > 0. The first possibility is ruled out by the bounded real property, the second by assumption. Verification of condition 2 is trivial, while to verify condition 3, observe that

B'*(s) + B(s) = 2[I − A'*(s)]^{-1}[I − A'*(s)A(s)][I − A(s)]^{-1}

and the nonnegativity follows from the bounded real property. The converse result holds too; i.e., if B(s) is positive real, then A(s) is bounded real.

In passing we note the following theorem of [1, 8]. Of itself, it will be of no use to us, although a modified version (Theorem 2.7.3) will prove important. The theorem here parallels Theorem 2.6.1, and a partial proof is requested in the problems.

Theorem 2.7.1. Let N be an m port that is linear, time invariant, and passive, but not necessarily lumped or finite. Suppose that N possesses an immittance or hybrid matrix B(s). Then B(.) is positive real.
The Rational Positive Real Property

We define an m × m matrix B(.) of functions of a complex variable s to be rational positive real, abbreviated simply PR, if

1. B(.) is a matrix of rational functions of s.
2. B(.) is positive real.

Given that B(.) is a rational matrix, conditions 1 and 2 of the positive real definition are equivalent to requirements that no element of B(.) has a pole in Re [s] > 0 and that each element of B(.) be a real rational function. It is clear that condition 3 implies, by a limiting operation, that

B'*(jω) + B(jω) ≥ 0        (2.7.1)

for all real ω such that no element of B(s) has a pole at s = jω. If it were true that no element of B(.) could have a pole for s = jω, then (2.7.1) would be true for all real ω and, we might imagine, would imply that

B'*(s) + B(s) ≥ 0 for Re [s] > 0        (2.7.2)
However, there do exist positive real matrices for which an element possesses
a pure imaginary pole, as the following example shows.
Example 2.7.2  Consider the function B(s) = 1/s, which is the impedance of a 1-F capacitor. Then B(s) is analytic in Re [s] > 0, is real for real positive s, and

B'*(s) + B(s) = 1/s* + 1/s = 2 Re [s]/|s|² ≥ 0

in Re [s] > 0. Thus B(s) is positive real, but possesses a pure imaginary pole.
One might then ask the following question. Suppose that B(s) is real
rational, that no element of B(s) has a pole in Re [s] > 0, and that (2.7.1)
holds for all w for which jw is not a pole of any element of B(s); is it then
true that B(s) must be PR; i.e., does (2.7.2) always follow? The answer to
this question is no, as shown in the following example.
Example 2.7.3  Consider B(s) = −1/s. This is real rational, analytic for Re [s] > 0, with B'*(jω) + B(jω) = 0 for all ω ≠ 0. But it is not true that B'*(s) + B(s) ≥ 0 for all Re [s] > 0.
At this stage, the reader might feel like giving up a search for any jω-axis conditions equivalent to (2.7.2). However, there are some conditions, and they are useful. A statement is contained in the following theorem.

Theorem 2.7.2. Let B(s) be a real rational matrix of functions of s. Then B(.) is PR if and only if

1. No element of B(.) has a pole in Re [s] > 0.
2. B'*(jω) + B(jω) ≥ 0 for all real ω, with jω not a pole of any element of B(.).
3. If jω0 is a pole of any element of B(.), it is at most a simple pole, and the residue matrix, K0 = lim_{s→jω0} (s − jω0)B(s) in case ω0 is finite, and K∞ = lim_{ω→∞} B(jω)/jω in case ω0 is infinite, is nonnegative definite Hermitian.

Proof. In the light of earlier remarks it is clear that what we have to do is establish the equivalence of statements 2 and 3 of the theorem with (2.7.2), under the assumption that all entries of B(.) are analytic in Re [s] > 0. We first show that conditions 2 and 3 of the theorem imply (2.7.2); the argument is based on application of the maximum modulus theorem, in a slightly more sophisticated form than in our discussion of BR matrices. Set f(s) = x'*B(s)x for an arbitrary m vector x, and consider a contour C extending from −jΩ to jΩ along the jω axis, with small semicircular indentations into Re [s] > 0 around those points jω0 which are poles of elements of B(s); the contour C is closed with a semicircular arc in Re [s] > 0. On the portion of C coincident with the jω axis, Re f ≥ 0 by condition 2. On the semicircular indentations,

f ≈ x'*K0 x / (s − jω0)

and Re f ≥ 0 by condition 3 and the fact that Re [s] > 0. If B(.) has no element with a pole at infinity, Re f ≥ 0 on the right-half-plane closing arc, taking there the constant value ½ lim_{ω→∞} x'*[B'*(jω) + B(jω)]x. If B(.) has an element with a pole at infinity, then

f ≈ sx'*K∞ x

with Re f ≥ 0 on the large semicircular arc.

We conclude that Re f ≥ 0 on C, and by the maximum modulus theorem applied to exp [−f(s)], together with the fact that |exp [−f(s)]| = exp [−Re f(s)], we see that Re f ≥ 0 in Re [s] > 0. This proves (2.7.2).

We now prove that Eq. (2.7.2) implies condition 3 of the theorem. (Condition 2 is trivial.) Suppose that jω0 is a pole of order m of an element of B(s). Then the values taken by x'*B(s)x on a semicircular arc of radius ρ, center jω0, are, for small ρ, approximately x'*K0 x ρ^{−m} e^{−jmθ}, where s = jω0 + ρe^{jθ}. Therefore

ρ^m Re [x'*B(s)x] ≈ Re [x'*K0 x] cos mθ + Im [x'*K0 x] sin mθ

which must be nonnegative by (2.7.2). Clearly m = 1; i.e., only simple poles are possible, else the expression could have either sign. Choosing θ near −π/2, 0, and π/2 shows that Im [x'*K0 x] = 0 and Re [x'*K0 x] ≥ 0. In other words, K0 is nonnegative definite Hermitian.        VVV
Example 2.7.4  B(s) = −1/s fulfills conditions 1 and 2 of Theorem 2.7.2, but not condition 3, and is not positive real. On the other hand, B(s) = 1/s is positive real.

Example 2.7.5  Let B(s) = sL. Then B(s) is positive real if and only if L is nonnegative definite symmetric. We show this as follows. By condition 3 of Theorem 2.7.2, we know that L is nonnegative definite Hermitian. The Hermitian nature of L means that L = L1 + jL2, where L1 is symmetric, L2 skew symmetric, and L1 and L2 are real. However, L = lim_{s→∞} B(s)/s, and if s → ∞ along the positive real axis, B(s)/s is always real, so that L is real.
Example 2.7.6  We shall verify that

z(s) = (s³ + 3s² + 5s + 1)/(s³ + 2s² + s + 2)

is PR. Obviously, z(s) is real rational. Its poles are the zeros of s³ + 2s² + s + 2 = (s² + 1)(s + 2), so that z(s) is analytic in Re [s] > 0. Next, the residue at each of the purely imaginary poles s = ±j is

lim_{s→±j} (s ∓ j)z(s) = 1

so the residues at the purely imaginary poles are both positive. Finally,

Re [z(jω)] = (ω² + 2)/(ω² + 4) ≥ 0

for all real ω ≠ ±1.
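The three conditions of Theorem 2.7.2 can also be screened numerically for this z(s). The following sketch (not from the text) checks the pole locations, the residues at s = ±j, and Re z(jω) on a grid:

```python
import numpy as np

num = np.poly1d([1, 3, 5, 1])        # s^3 + 3s^2 + 5s + 1
den = np.poly1d([1, 2, 1, 2])        # (s^2 + 1)(s + 2)
z = lambda s: num(s) / den(s)

# Condition 1: poles at -2 and +/-j, none in Re[s] > 0.
assert all(p.real <= 1e-9 for p in den.roots)

# Condition 3: simple imaginary poles at +/-j; residue num(p)/den'(p) = 1 > 0.
for p in (1j, -1j):
    k = num(p) / den.deriv()(p)
    assert abs(k.imag) < 1e-9 and k.real > 0

# Condition 2: Re z(jw) >= 0 away from the poles at w = +/-1.
w = np.linspace(-50, 50, 100001)
w = w[np.abs(np.abs(w) - 1.0) > 1e-3]
assert (z(1j * w).real > -1e-9).all()
```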
In Theorem 2.7.3 we relate the PR property to a property of port descriptions of passive networks. If we accept the fact that any immittance or hybrid matrix associated with an m port that is finite and lumped must be rational, then we have the following important result.

Theorem 2.7.3. Let N be an m port that is linear, time invariant, lumped, finite, and passive. Let B(s) be an immittance or hybrid matrix for N. Then B(s) is PR.
Proof. We shall prove the result for the case when B(s) is an impedance. The other cases follow by minor variation. The steps of the proof are, first, to note that N has a scattering matrix S(s) related in a certain way to B(s), and, second, to use the BR constraint on S(s) to deduce the PR constraint on B(s).

The scattering matrix S(s) exists because of the properties assumed for N. The impedance matrix B(s) is related to S(s) via a formula we noted earlier:

B(s) = [I + S(s)][I − S(s)]^{-1}
As we know, S(s) is bounded real, and so, by the result in
Example 2.7.1, B(s) is positive real. Since B(s) is rational, B(s) is
then PR.
VVV
The Lossless Positive Real Property
Our task now is to isolate what extra constraints are imposed on
the immittance or hybrid matrix B(s) of an m port N when the m port is
lossless. The result is simple to obtain and is as follows:
Theorem 2.7.4. Let N be an m port (usual conventions apply) with B(s) an immittance or hybrid matrix of N. Then N is lossless if and only if

B'*(jω) + B(jω) = 0        (2.7.3)

for all real ω such that jω is not a pole of any element of B(.).

Proof. We shall only prove the result for the case when B(s) is an impedance. Then there exists a scattering matrix S(s) such that B(s) = [I + S(s)][I − S(s)]^{-1}. It follows that

B'*(jω) + B(jω) = 2[I − S'*(jω)]^{-1}[I − S'*(jω)S(jω)][I − S(jω)]^{-1} = 0

if and only if S(.) is LBR, i.e., if and only if N is lossless. Note that if jω is a pole of an element of B(s), then [I − S(jω)] will be singular; so the above calculation is only valid when jω is not a pole of any element of B(.).        VVV

Having in mind the result of the above theorem, we shall say that a matrix B(s) is lossless positive real (LPR) if (1) B(s) is positive real, and (2) B'*(jω) + B(jω) = 0 for all real ω, with jω not a pole of any element of B(s).

In view of the fact that B(s) is rational, we have that B'*(jω) = B'(−jω), and so (2.7.3) implies that

B'(−s) + B(s) = 0        (2.7.4)

for all s = jω, where jω is not a pole of any element of B(.). This means, since B'(−s) + B(s) is an analytic function of s, that (2.7.4) holds for all s that are not poles of any element of B(.).

To this point, the arguments have paralleled those given for scattering matrices. But for impedance matrices, we can go one step farther by isolating the pole positions of LPR matrices. Suppose that s0 is a pole of B(s). Then as s → s0 but s ≠ s0, (2.7.4) remains valid. It follows that some element of B'(−s) becomes infinite as s → s0, so that −s0 is a pole of an element of B(.).

Now if whenever s0 is a pole, −s0 is too, and if there can be no poles in Re [s] > 0, all poles of elements of B(s) must be pure imaginary. It follows that necessary and sufficient conditions for a real rational matrix B(s) to be LPR are

1. All poles of elements of B(s) have zero real part,* and the residue matrix at any pole is nonnegative definite Hermitian.
2. B'(−s) + B(s) = 0 for all s such that s is not a pole of any element of B(s).

*For this purpose, a pole at infinity is deemed as having zero real part.
Example 2.7.7  We shall verify that

z(s) = s(s² + 2)/(s² + 1)

is lossless positive real. By inspection, z(−s) + z(s) = 0, and the poles of z(s), viz., s = ∞ and s = ±j1, have zero real part. The residue at s = ∞ is 1, and at s = j1 it is lim_{s→j1} s(s² + 2)/(s + j1) = ½. Therefore, z(s) is lossless positive real.
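Both LPR conditions for this z(s) are easy to confirm numerically, as in the following sketch (not from the text):

```python
import numpy as np

num = np.poly1d([1, 0, 2, 0])        # s^3 + 2s = s(s^2 + 2)
den = np.poly1d([1, 0, 1])           # s^2 + 1
z = lambda s: num(s) / den(s)

# Condition 2: z(-s) + z(s) = 0, checked at a few sample points.
pts = np.array([0.3 + 0.7j, 2.0 - 1.0j, -0.5 + 3.0j])
assert np.allclose(z(-pts) + z(pts), 0)

# Condition 1: the finite poles +/-j1 are simple with residue 1/2 > 0
# (the pole at infinity has residue lim z(s)/s = 1 > 0).
for p in (1j, -1j):
    assert abs(num(p) / den.deriv()(p) - 0.5) < 1e-12
```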
Testing for the PR and LPR Properties
The situation as far as testing whether the PR and LPR conditions are satisfied is much as for testing whether the BR and LBR conditions are satisfied, with one complicating feature. It is possible for elements of the matrices being tested to have jω-axis poles, which requires some minor adjustments of the procedures (see [12]). We shall not outline any of the tests here, but the reader should be aware of the existence of such tests.

Problem 2.7.1  Suppose that Z(s) is positive real and that Z^{-1}(s) exists. Suppose also that [I + Z(s)]^{-1} is known to exist. Show that Z^{-1}(s) is positive real. [Hint: Show that A(s) = [Z(s) + I]^{-1}[Z(s) − I] is bounded real, that −A(s) is bounded real, and thence that Z^{-1}(s) is positive real.]

Problem 2.7.2  Suppose that an m port N, assumed to be linear and time invariant, is known to have an impedance matrix Z(s) that is positive real. Show that [I + Z(s)]^{-1} exists for all s in Re [s] > 0, and deduce that N has a bounded real scattering matrix.

Problem 2.7.3  Find whether the following matrices are positive real:

Problem 2.7.4  Suppose that Z1(s) and Z2(s) are m × m PR matrices. Show that Z1(s) + Z2(s) is PR. Establish an analogous result for LPR matrices.
2.8 RECIPROCAL m PORTS

An important class of networks are those m ports that contain resistor, inductor, capacitor, and transformer elements, but no gyrators. As already noted, gyrators may need to be constructed from active elements, so that their use usually demands power supplies. This may not always be acceptable.

The resistor, inductor, capacitor, and multiport transformer are all said to be reciprocal circuit elements, while the gyrator is nonreciprocal. An m port consisting entirely of reciprocal elements will be termed a reciprocal m-port network. We shall not explore the origin of this terminology here, which lies in thermodynamics. Rather, we shall indicate what constraints are imposed on the port descriptions of a network if the network is reciprocal.
Our basic tool will be Tellegen's Theorem, which we shall use to prove
Lemma 2.8.1. From this lemma will follow the constraints imposed on
immittance, hybrid, and scattering matrices of reciprocal networks.
Lemma 2.8.1. Let N be a reciprocal m-port network with all but two of its ports arbitrarily terminated in open circuits, short circuits, or two-terminal circuit elements. Without loss of generality, take these terminated ports as the last m − 2 ports. Suppose also that sources are connected at the first two ports so that with one choice of excitations, the port voltages and currents have Laplace transforms V1^(1)(s), V2^(1)(s), I1^(1)(s), and I2^(1)(s), while with another choice of excitations, the port voltages and currents have Laplace transforms V1^(2)(s), V2^(2)(s), I1^(2)(s), and I2^(2)(s). As depicted in Fig. 2.8.1, the subscript denotes the port number and the superscript indexes the choice of excitations. Then

V1^(1)(s)I1^(2)(s) + V2^(1)(s)I2^(2)(s) = V1^(2)(s)I1^(1)(s) + V2^(2)(s)I2^(1)(s)        (2.8.1)

FIGURE 2.8.1. Arrangement to Illustrate Reciprocity; j = 1, 2 Gives Two Operating Conditions for the Network.

Before presenting the proof of this lemma, we ask the reader to note carefully the following points:

1. The terminations on ports 3 through m of the network N are unaltered when different choices of excitations are made. The network N is also unaltered. So one could regard N with the terminations as a two-port network N' with various excitations at its ports.
2. It is implicitly assumed that the excitations are Laplace transformable; it turns out that there is no point in considering the situation when they are not Laplace transformable.

Proof. Number the branches of the network N' (not N) from 3 to b, and consider a network N'' comprising N' together with the specified sources connected to its two ports. Assign voltage and current reference directions to the branches of N' in the usual way compatible with the application of Tellegen's theorem, with the voltage and current for the kth branch being denoted by Vk^(j)(s) and Ik^(j)(s), respectively. Here j = 1 or 2, depending on the choice of excitations. Notice that the pairs V1^(j)(s), I1^(j)(s) and V2^(j)(s), I2^(j)(s) have reference directions opposite to those required by Tellegen's theorem.

Now V1^(1)(s), . . . , Vb^(1)(s), being Laplace transforms of branch voltages, satisfy Kirchhoff's voltage law, and I1^(2)(s), . . . , Ib^(2)(s), being Laplace transforms of branch currents, satisfy Kirchhoff's current law. Therefore, Tellegen's theorem implies that

V1^(1)(s)I1^(2)(s) + V2^(1)(s)I2^(2)(s) = Σ_{k=3}^{b} Vk^(1)(s)Ik^(2)(s)

Similarly,

V1^(2)(s)I1^(1)(s) + V2^(2)(s)I2^(1)(s) = Σ_{k=3}^{b} Vk^(2)(s)Ik^(1)(s)

Comparison with (2.8.1) shows that the lemma will be true if

Σ_{k=3}^{b} Vk^(1)(s)Ik^(2)(s) = Σ_{k=3}^{b} Vk^(2)(s)Ik^(1)(s)        (2.8.2)

Now if branch k comprises a two-terminal circuit element, i.e., a resistor, inductor, or capacitor, with impedance Zk(s), we have

Vk^(1)(s)Ik^(2)(s) = Ik^(1)(s)Zk(s)Ik^(2)(s) = Ik^(1)(s)Vk^(2)(s)        (2.8.3)

If branches k1, . . . , k_{p+q} are associated with a (p + q)-port transformer, it is easy to check that

Vk1^(1)(s)Ik1^(2)(s) + . . . + Vk_{p+q}^(1)(s)Ik_{p+q}^(2)(s) = Vk1^(2)(s)Ik1^(1)(s) + . . . + Vk_{p+q}^(2)(s)Ik_{p+q}^(1)(s)        (2.8.4)

Summing (2.8.3) and (2.8.4) over all branches, we obtain (2.8.2), as required.        VVV
The lemma just proved can be used to prove the main results describing
the constraints on the port descriptions of m ports. We only prove one of
these results in the text, leaving to the problem of this section a request to
the reader to prove the remaining results. All proofs are much the same,
demanding application of the lemma with a cleverly selected set of exciting
variables.
Theorem 2.8.1. Let N be a reciprocal m port. Then
1. If N possesses an impedance matrix Z(s), it is symmetric.
2. If N possesses an admittance matrix Y(s), it is symmetric.
3. If H(s) is a hybrid matrix of N, then for i ≠ j one has h_ij(s) = ±h_ji(s), with the + sign holding if the exciting variables at ports i and j are both currents or both voltages, and the − sign holding if one is a current and the other a voltage.
4. The scattering matrix S(s) of N is symmetric.

Proof. We shall prove part 1 only. Let i, j be arbitrary different integers with 1 ≤ i, j ≤ m. We shall show that z_ij(s) = z_ji(s), where Z(s) = [z_ij(s)] is the impedance matrix of N. Let us renumber temporarily the ports of N so that the new port 1 is the old port i and the new port 2 is the old port j. We shall show that, with the new numbering, z_12(s) = z_21(s).

First, suppose that all ports other than the first one are terminated in open circuits, and a current source with Laplace transform of the current equal to I1^(1)(s) is connected to port 1. Then the voltage present at port 2, viz., V2^(1)(s), is z_21(s)I1^(1)(s), while I2^(1)(s) = 0, by the open-circuit constraint.

Second, suppose that all ports other than the second are terminated in open circuits, and a current source with Laplace transform of current equal to I2^(2)(s) is connected to port 2. Then the voltage present at port 1, viz., V1^(2)(s), is z_12(s)I2^(2)(s), while I1^(2)(s) = 0.

Now apply Lemma 2.8.1. There results the equation

z_21(s)I1^(1)(s)I2^(2)(s) = z_12(s)I1^(1)(s)I2^(2)(s)

Since I1^(1)(s) and I2^(2)(s) can be arbitrarily chosen,

z_21(s) = z_12(s)        VVV

Complete proofs of the remaining points of the theorem are requested in the problem. Note that if Z(s) and Y(s) are both known to exist, then Z(s) = Z'(s) implies that Y(s) = Y'(s), since Z(.) and Y(.) are inverses. Also, if Z(s) is known to exist, then symmetry of S(s) follows from S(s) = [Z(s) − I][Z(s) + I]^{-1} and the symmetry of Z(s).
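The last remark is easy to confirm numerically; a minimal sketch (not from the text; the symmetric Z is randomly generated) relies on the fact that (Z − I) and (Z + I)^{-1} commute:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
Z = A + A.T                          # hypothetical symmetric impedance matrix
I = np.eye(3)

S = (Z - I) @ np.linalg.inv(Z + I)   # S = [Z - I][Z + I]^{-1}
assert np.allclose(S, S.T)           # symmetry of Z forces symmetry of S
```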
In the later work on synthesis we shall pay some attention to the reciprocal
synthesis problem, which, roughly, is the problem of passing from a port
description of a network fulfilling the conditions of Theorem 2.8.1 to a
reciprocal network with port description coinciding with the prescribed
description.
The reader should note exactly how the presence of a gyrator destroys reciprocity. First, the gyrator has a skew-symmetric impedance matrix, so that by Theorem 2.8.1 it cannot be a reciprocal network. Second, the arguments ending in Eqs. (2.8.3) and (2.8.4) simply cannot be repeated for the gyrator, so the proof of Lemma 2.8.1 breaks down if gyrators are brought into the picture.
Problem 2.8.1  Complete the proof of Theorem 2.8.1. To prove part 4, it may be helpful to augment N by connecting series resistors at its ports, or to use the result of Problem 2.4.6.
REFERENCES

[1] R. W. NEWCOMB, Linear Multiport Synthesis, McGraw-Hill, New York, 1966.
[2] C. A. DESOER and E. S. KUH, Basic Circuit Theory, McGraw-Hill, New York, 1969.
[3] R. A. ROHRER, Circuit Theory: An Introduction to the State Variable Approach, McGraw-Hill, New York, 1970.
[4] V. BELEVITCH, "Theory of 2n-Terminal Networks with Applications to Conference Telephony," Electrical Communication, Vol. 27, No. 3, Sept. 1950, pp. 231-244.
[5] B. D. O. ANDERSON, R. W. NEWCOMB, and J. K. ZUIDWEG, "On the Existence of H Matrices," IEEE Transactions on Circuit Theory, Vol. CT-13, No. 1, March 1966, pp. 109-110.
[6] B. D. O. ANDERSON and R. W. NEWCOMB, "Cascade Connection for Time-Invariant n-Port Networks," Proceedings of the IEE, Vol. 113, No. 6, June 1966, pp. 970-974.
[7] W. H. KIM and R. T. W. CHIEN, Topological Analysis and Synthesis of Communication Networks, Columbia University Press, New York, 1962.
[8] B. D. O. ANDERSON and J. B. MOORE, Linear Optimal Control, Prentice-Hall, Englewood Cliffs, N.J., 1971.
[9] R. W. BROCKETT, Finite Dimensional Linear Systems, Wiley, New York, 1970.
[10] J. L. DOOB, Stochastic Processes, Wiley, New York, 1953.
[11] M. R. WOHLERS, Lumped and Distributed Passive Networks, Academic Press, New York, 1969.
[12] E. A. GUILLEMIN, Synthesis of Passive Networks, Wiley, New York, 1957.
[13] F. R. GANTMACHER, The Theory of Matrices, Chelsea, New York, 1959.
[14] B. D. O. ANDERSON and R. W. NEWCOMB, Apparently Lossless Time-Varying Networks, Rept. No. SEL-65-094 (TR No. 6559-1), Stanford Electronics Laboratories, Stanford, Calif., Sept. 1965.
3  State-Space Equations and Transfer-Function Matrices

3.1 DESCRIPTIONS OF LUMPED SYSTEMS
In this chapter our aim is to explore (independently of any network associations) connections between the description of linear, lumped,
time-invariant systems by transfer-function matrices and by state-space equations. We assume that the reader has been exposed to these concepts before,
though perhaps not in sufficient depth to understand all the various connections between the two system descriptions. While this chapter does not purport to be a definitive treatise on linear systems, it does attempt to survey
those aspects of linear-system theory, particularly the various connections
between transfer-function matrices and state-space equations, that will be
needed in the sequel. To do this in an appropriate length requires a terse
treatment; we partially excuse this on the grounds of the reader having had
prior exposure to most of the ideas.
In this section we do little more than summarize what is to come, and
indicate how a transfer-function matrix may be computed from a set of statespace equations. In Section 2 we outline techniques for solving state-space
equations, and in Sections 3, 4, and 5 we cover the topics of complete controlIability and observability, minimal realizations, and the generation of
state-space equations from a prescribed transfer-function matrix. In Sections
6 and 7 we examine two specific connections between transfer-function
matrices and state-space equations-the relation between the degree of a
transfer-function matrix and the dimension of the associated minimal realizations, and the stability properties of a system that can be inferred from transfer-function matrix and state-space descriptions. For more complete discussions of linear system theory, see, e.g., [1-3].
Two Linear System Descriptions
The most obvious way to describe a lumped physical system is
via differential equations (or, sometimes, integro-differential equations).
Though the most obvious procedure, this is not necessarily the most helpful.
If there are well-identified input, or excitation or control variables, and wellidentified output or response variables, and if there is little or no interest in
the behavior of other quantities, a convenient description is provided by
the system impulse response or its Laplace transform, the system transferfunction matrix.
Suppose that there are p scalar inputs and m scalar outputs. We arrange the inputs in vector form, to yield a p-vector function of time, u(.). Likewise the outputs may be arranged as an m-vector function of time, y(.). The impulse response is an m × p matrix function* of time, w(.) say, which is zero for negative values of its argument and which relates u(.) to y(.) according to

y(t) = ∫_{0}^{t} w(t − τ)u(τ) dτ        (3.1.1)

where it is assumed that (1) u(t) is zero for t < 0, and (2) there are zero initial conditions on the physical system at t = 0.

Of course, the time invariance of the system allows shifting of the time origin, so we can deal with an initial time t0 ≠ 0 if desired.

Introduce the notation

F(s) = ℒ[f(.)] = ∫_{0}^{∞} f(t)e^{−st} dt        (3.1.2)

[Here, F(.) is the Laplace transform of f(.).] Then, with mild restrictions on u(.) and y(.), (3.1.1) may be written as

Y(s) = W(s)U(s)        (3.1.3)

The m × p matrix W(.), which is the Laplace transform of the impulse response w(.), is a matrix of real rational functions of its argument, provided the system with which it is associated is a lumped system.† Henceforth in this chapter we shall almost always omit the use of the word "real" in describing W(s).

*More properly, a generalized function, since w(.) may contain delta functions and even their derivatives.
†Existence of W(s) for all but a finite set of points in the complex plane is guaranteed when W(s) is the transfer-function matrix of a lumped system.
A second technique for describing lumped systems is by means of state-variable equations. These are differential equations of the form

ẋ = Fx + Gu
y = H'x + Ju        (3.1.4)

where x is a vector, say of dimension n, and F, G, H, and J are constant matrices of appropriate dimension. The vector x is termed the state vector, and (3.1.4) are known as state-space equations. Of course, u(.) and y(.) are as before.
As is immediately evident from (3.1.4), the state vector serves as an intermediate variable linking u(.) and y(.). Thus to compute y(.) from u(-), one
would first compute 4.)-by
solving the first equation in (3.1.4)-and then
compute y(.) from x(.) and u(.). But a state vector is indeed much more
than this; we shall develop briefly here one of its most important interpretations, which is as follows. Suppose that an input u,(.) is applied over the interval [ t ~ ,z,] and a second input u , ( . ) is applied over It,, t,]. To compute the
resulting output over [to, t,], one could use
1. the value of x(t_,) as aninitial wndition for the first equation in (3.1.41,
2. the control u,(.) over It-,, I,], and
3. the control u,(.) over fro,t,].
Alternatively, if we knew x(t,), it is evident from (3.1.4) that we could
compute the output over [ t o , r,] using
1. the value of x(t,) as an initial wndition for the first equation in (3.1.4),
and
2. the control u , ( . ) over [to, t,].
Let us think of to as a present time, t-, as a past time, and t , as a future
time. What we have just said is that in computing rhe future behavior of the
system, the present state carries as much information as a pasl state and past
control, or we might say that the present state sums up all that we need to
know of the past in or&r to computefuture behavior.
State-space equations may be derived from the differential or integrodifferential equations describing the lumped system; or they may be derived from knowledge of a transfer-function matrix W(s) or impulse response w(t); or, again, they may be derived from other state-space equations by mechanisms such as change of coordinate basis. The derivation of state-space equations directly from other differential equations describing the system cannot generally proceed by any systematic technique; we shall devote one chapter to such a derivation when the lumped system in question is a network.
Derivation from a transfer-function matrix or from other state-space equations will be discussed subsequently in this chapter.
The advantages and disadvantages of a state-space-equation description as
opposed to a transfer-function-matrix description are many and often subtle.
One obvious advantage of state-space equations is that the incorporation of
nonzero initial conditions is possible.
In the remainder of this section we shall indicate how to compute a transfer-function matrix from state-space equations.
From (3.1.4) it follows, by taking Laplace transforms, that
sX(s) = FX(s) + GU(s)
Y(s) = H'X(s) + JU(s)
[Zero initial conditions have been assumed, since only then will the transfer-function matrix relate U(s) and Y(s).] Straightforward manipulation yields
X(s) = (sI − F)^{-1}GU(s)
and
Y(s) = [J + H'(sI − F)^{-1}G]U(s)    (3.1.5)
Now compare (3.1.3) and (3.1.5). Assuming that (3.1.3) and (3.1.5) describe the same physical system, it is immediate that
W(s) = J + H'(sI − F)^{-1}G    (3.1.6)
Equation (3.1.6) shows how to pass from a set of state-space equations to the associated transfer-function matrix. Several points should be noted. First, (3.1.6) shows that W(∞) = J is finite. It follows that if W(s) is prescribed, and if W(∞) is not finite, there cannot be a set of state-space equations of the form (3.1.4) describing the same physical system that W(s) describes.* Second, we remark that although the computation of (sI − F)^{-1} might seem a difficult task for large dimension F, an efficient algorithm is available (see, e.g., [4]). Third, it should be noted that the impulse response w(t) can also be computed from the state-space equations, though this requires the ability to solve (3.1.4) when u is a delta function.
*Note that a rational W(s) will become infinite for s = ∞ if and only if there is differentiation of the input to the system with transfer-function matrix W(s) in producing the system output. This is an uncommon occurrence, but will arise in some of our subsequent work.
The end result is
w(t) = Jδ(t) + H'e^{Ft}G·1(t)    (3.1.8)
where δ(·) is the unit delta function, 1(t) the unit step function, and e^{Ft} is defined as the sum of the series
e^{Ft} = I + Ft + F²t²/2! + F³t³/3! + ⋯
The matrix e^{Ft} is well defined for all F and all t, in the sense that the partial sums Sₙ of n terms of the above series are such that each entry of Sₙ approaches a unique finite limit as n → ∞ [4]. We shall explore additional properties of the matrix e^{Ft} in the next section.
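The passage (3.1.6) from state-space equations to the transfer-function matrix is easy to mechanize. The following sketch, in present-day Python with the numpy library assumed available, evaluates W(s) at a given complex frequency; the function name and the particular F, G, H, J are illustrative assumptions, not taken from the text.

    import numpy as np

    def transfer_function(F, G, H, J, s):
        # W(s) = J + H'(sI - F)^{-1}G; solve a linear system rather
        # than forming the matrix inverse explicitly.
        n = F.shape[0]
        X = np.linalg.solve(s * np.eye(n) - F, G)
        return J + H.T @ X

    # Illustrative data, for which W(s) = 1/((s + 1)(s + 2)):
    F = np.array([[0.0, 1.0], [-2.0, -3.0]])
    G = np.array([[0.0], [1.0]])
    H = np.array([[1.0], [0.0]])
    J = np.zeros((1, 1))
    print(transfer_function(F, G, H, J, 1j))   # W(j1)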
Problem 3.1.1. Compute the transfer function associated with the state-space equations
ẋ = [−1 ⋯ ; 0 ⋯]x + [⋯]u,  y = ⋯

Problem 3.1.2. Show that if F is in block upper triangular form, i.e.,
F = [F₁ F₃; 0 F₂]
then (sI − F)^{-1} can be expressed in terms of (sI − F₁)^{-1}, (sI − F₂)^{-1}, and F₃.

Problem 3.1.3. Find state-space equations for the transfer functions
⋯
Deduce a procedure for computing state-space equations from a scalar transfer function W(s), assuming that W(s) is expressible as a constant plus a sum of partial-fraction terms with linear and quadratic denominators.

Problem 3.1.4. Suppose that W(s) = J + H'(sI − F)^{-1}G and that W(∞) = J is nonsingular. Show that
W^{-1}(s) = J^{-1} − J^{-1}H'(sI − F + GJ^{-1}H')^{-1}GJ^{-1}
This result shows how to write down a set of state-space equations of a system that is the inverse of a prescribed system whose state-space equations are known.
3.2 SOLVING THE STATE-SPACE EQUATIONS
In this section we outline briefly procedures for solving
ẋ = Fx + Gu
y = H'x + Ju    (3.2.1)
We shall suppose that u(t) is prescribed for t ≥ t₀ and that x(t₀) is known. We shall show how to compute y(t) for t ≥ t₀.
In Section 3.1 we defined the matrix e^{Ft} as the sum of an infinite series of matrices. This matrix can be shown to satisfy the following properties:
1. e^{Ft}e^{Fs} = e^{F(t+s)}
2. e^{Ft}e^{−Ft} = I
3. p(F)e^{Ft} = e^{Ft}p(F) for any polynomial p(·) in the matrix F
4. d/dt (e^{Ft}) = Fe^{Ft} = e^{Ft}F
One way to prove these properties is by manipulations involving the infinite-sum definition of e^{Ft}.
Now we turn to the solution of the first equation of (3.2.1).
Theorem 3.2.1. The solution of
ẋ = Fx + Gu    (3.2.2)
for prescribed x(t₀) = x₀ and u(τ), τ ≥ t₀, is unique and is given by
x(t) = e^{F(t−t₀)}x₀ + ∫_{t₀}^{t} e^{F(t−τ)}Gu(τ) dτ    (3.2.3)
In particular, the solution of the homogeneous equation
ẋ = Fx    (3.2.4)
is
x(t) = e^{F(t−t₀)}x₀    (3.2.5)
Proof. We shall omit a proof of uniqueness, which is somewhat technical and not of great interest. Also, we shall simply verify that (3.2.3) is a solution of (3.2.2), although it is possible to deduce (3.2.3) knowing the solution (3.2.5) of the homogeneous equation (3.2.4) and using the classical device of variation of parameters.
To obtain ẋ using (3.2.3), we need to differentiate e^{F(t−t₀)} and e^{F(t−τ)}. The derivative of e^{F(t−t₀)} follows easily using properties 1 and 4:
d/dt e^{F(t−t₀)} = Fe^{F(t−t₀)}
Similarly, of course,
d/dt e^{F(t−τ)} = Fe^{F(t−τ)}
Differentiation of (3.2.3) therefore yields
ẋ = Fe^{F(t−t₀)}x₀ + Gu(t) + ∫_{t₀}^{t} Fe^{F(t−τ)}Gu(τ) dτ = Fx + Gu
on using (3.2.3). Thus (3.2.2) is recovered. Setting t = t₀ in (3.2.3) also yields x(t₀) = x₀, as required. This suffices to prove the theorem. VVV
Notice that x(t) in (3.2.3) is the sum of two parts. The first, exp [F(t − t₀)]x₀, is the zero-input response of (3.2.2), or the response that would be observed if u(·) were identically zero. The remaining part on the right side of (3.2.3) is the zero-initial-state response, or the response that would be observed if the initial state x(t₀) were zero.
Example 3.2.1. Consider
ẋ = ⋯
Suppose that x(0) = [2 0]' and u(t) is a unit step function, zero until t = 0. The series formula for e^{Ft} yields simply that
e^{Ft} = ⋯
Therefore
x(t) = ⋯
The complete solution of (3.2.1), rather than just the solution of the differential-equation part of (3.2.1), is immediately obtainable from (3.2.3). The result is as follows.
Theorem 3.2.2. The solution of (3.2.1) for prescribed x(t₀) = x₀ and u(τ), τ ≥ t₀, is
y(t) = H'e^{F(t−t₀)}x₀ + ∫_{t₀}^{t} H'e^{F(t−τ)}Gu(τ) dτ + Ju(t)    (3.2.6)
Notice that by setting x₀ = 0, t₀ = 0, and
w(t) = Jδ(t) + H'e^{Ft}G·1(t)    (3.2.7)
in the formula for y(t), we obtain
y(t) = ∫₀ᵗ w(t − τ)u(τ) dτ
In other words, w(t) as defined by (3.2.7) is the impulse response associated with (3.2.1), as we claimed in the last section.
To apply (3.2.6) in practice, it is clearly necessary to be able to compute e^{Ft}. We shall now examine a procedure for calculating this matrix.
Calculation of e^{Ft}
We assume that F is known and that we need to find e^{Ft}. An exact technique is based on reduction of F to diagonal or, more exactly, Jordan form.
If F is diagonalizable, we can find a matrix T (whose columns are actually eigenvectors of F) and a diagonal matrix Λ (whose diagonal entries are actually eigenvalues of F) such that
T^{-1}FT = Λ    (3.2.9)
(Procedures for finding such a T are outlined in, for example, [4] and [5]. The details of these procedures are inessential for the comprehension of this book.) Observe that
Λ² = T^{-1}FTT^{-1}FT = T^{-1}F²T
More generally,
Λⁿ = T^{-1}FⁿT
Therefore
e^{Ft} = Te^{Λt}T^{-1}    (3.2.10)
It remains to compute e^{Λt}. This is easy, however, in view of the diagonal nature of Λ. Direct application of the infinite-sum definition shows that if
Λ = diag (λ₁, λ₂, ..., λₙ)    (3.2.11)
then
e^{Λt} = diag (e^{λ₁t}, e^{λ₂t}, ..., e^{λₙt})    (3.2.12)
Thus the problem of computing e^{Ft} is essentially a problem of computing eigenvalues and eigenvectors of F.
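The diagonalization route just described is directly programmable. The following sketch (Python with numpy assumed; valid only for diagonalizable F, and the function name is an assumption) computes e^{Ft} as Te^{Λt}T^{-1}:

    import numpy as np

    def expFt(F, t):
        lam, T = np.linalg.eig(F)        # columns of T are eigenvectors of F
        eLt = np.diag(np.exp(lam * t))   # e^{Lambda t} for diagonal Lambda
        M = T @ eLt @ np.linalg.inv(T)
        return M.real                    # imaginary parts cancel for real F

For F with complex eigenvalues the intermediate quantities are complex, in line with the remarks that follow, but the final product is real.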
If F has complex eigenvalues, the matrix T is complex. If it is desired to work with real T, a T is determined such that
T^{-1}FT = λ₁ ∔ λ₂ ∔ ⋯ ∔ λᵣ ∔ [α_{r+1} −β_{r+1}; β_{r+1} α_{r+1}] ∔ [α_{r+2} −β_{r+2}; β_{r+2} α_{r+2}] ∔ ⋯    (3.2.13)
where ∔ denotes the direct sum operation; λ₁, λ₂, ..., λᵣ are real eigenvalues of F; and (α_{r+1} ± jβ_{r+1}), (α_{r+2} ± jβ_{r+2}), ... are complex eigenvalues of F.
It may easily be verified that
exp {[α −β; β α]t} = [e^{αt} cos βt  −e^{αt} sin βt; e^{αt} sin βt  e^{αt} cos βt]    (3.2.14)
The simplest way to see this is to observe that for arbitrary x₀,
(d/dt){[e^{αt} cos βt  −e^{αt} sin βt; e^{αt} sin βt  e^{αt} cos βt]x₀}
= [αe^{αt} cos βt − βe^{αt} sin βt  −αe^{αt} sin βt − βe^{αt} cos βt; αe^{αt} sin βt + βe^{αt} cos βt  αe^{αt} cos βt − βe^{αt} sin βt]x₀
= [α −β; β α][e^{αt} cos βt  −e^{αt} sin βt; e^{αt} sin βt  e^{αt} cos βt]x₀
Observe also that with t = 0,
[e^{αt} cos βt  −e^{αt} sin βt; e^{αt} sin βt  e^{αt} cos βt]x₀ = x₀
Therefore
x(t) = [e^{αt} cos βt  −e^{αt} sin βt; e^{αt} sin βt  e^{αt} cos βt]x₀
is the solution of
ẋ = [α −β; β α]x
which is equal to x₀ at t = 0. Equation (3.2.14) is now an immediate consequence of Theorem 3.2.1.
From (3.2.13) and (3.2.14), we have
exp {[T^{-1}FT]t} = [e^{λ₁t}] ∔ [e^{λ₂t}] ∔ ⋯ ∔ [e^{λᵣt}]
∔ [e^{α_{r+1}t} cos β_{r+1}t  −e^{α_{r+1}t} sin β_{r+1}t; e^{α_{r+1}t} sin β_{r+1}t  e^{α_{r+1}t} cos β_{r+1}t] ∔ ⋯
The left side is T^{-1}e^{Ft}T, and so, again, evaluation of e^{Ft} is immediate.
Example 3.2.2. Suppose that
F = [−1 −1 0; 0 1 −10; 0 1 −5]
With
T = [1 2 1; 0 1 3; 0 0 1]
it follows that
T^{-1}FT = (−1) ∔ [−2 −1; 1 −2]
Hence
e^{Ft} = [e^{−t}  −2e^{−t} + 2e^{−2t} cos t + e^{−2t} sin t  5e^{−t} − 5e^{−2t} cos t − 5e^{−2t} sin t;
0  e^{−2t} cos t + 3e^{−2t} sin t  −10e^{−2t} sin t;
0  e^{−2t} sin t  e^{−2t} cos t − 3e^{−2t} sin t]
If F is not diagonalizable, a similar approach can still be used, based on evaluating e^{Jt}, where J is the Jordan form associated with F. Suppose that we find a T such that
T^{-1}FT = J    (3.2.15)
with J a direct sum J₁ ∔ J₂ ∔ ⋯ of blocks of the form
Jᵢ = [λᵢ 1 0 ⋯ 0; 0 λᵢ 1 ⋯ 0; ⋮ ; 0 0 0 ⋯ λᵢ]    (3.2.16)
(Procedures for finding such a T are outlined in [4] and [5].) Clearly
e^{Jt} = e^{J₁t} ∔ e^{J₂t} ∔ ⋯    (3.2.17)
and
e^{Ft} = Te^{Jt}T^{-1}    (3.2.18)
The main problem is therefore to evaluate e^{Jᵢt}. If Jᵢ is an r × r matrix, one can show (and a derivation is called for in the problems) that
e^{Jᵢt} = e^{λᵢt}[1 t t²/2! ⋯ t^{r−1}/(r−1)!; 0 1 t ⋯ t^{r−2}/(r−2)!; ⋮ ; 0 0 0 ⋯ 1]    (3.2.19)
From (3.2.10) through (3.2.12), notice that for a diagonalizable F, each entry of e^{Ft} will consist of sums of terms aᵢe^{λᵢt}, where λᵢ is an eigenvalue of F. Each eigenvalue λᵢ of F shows up in the form aᵢe^{λᵢt} in at least one entry of e^{Ft}, though not necessarily in all. As a consequence, if all eigenvalues of F have negative real parts, then all entries of e^{Ft} decay with time, while if one or more eigenvalues of F have positive real part, some entries of e^{Ft} increase exponentially with time, and so on. If F has a nondiagonal Jordan form, we see from (3.2.15) through (3.2.19) that some entries of e^{Ft} contain terms of the form aᵢtˢe^{λᵢt} for positive integer s. If the λᵢ have negative real parts, such terms also decay for large t.
Approximate Solutions of the State-Space Equations
The procedures hitherto outlined for solution of the state-space equations are applicable provided the matrix e^{Ft} is computable. Here we wish to discuss procedures that are applicable when e^{Ft} is not known. Also, we wish to note procedures for evaluating
∫_{t₀}^{t} e^{F(t−τ)}Gu(τ) dτ
when either e^{Ft} is not known, or e^{Ft} is known but the integral is not analytically computable because of the form of u(·).
The problem of solving
ẋ = Fx + Gu    (3.2.2)
is a special case of the problem of solving the general nonlinear equation
ẋ = f(x, u, t)    (3.2.20)
Accordingly, techniques for the numerical solution of (3.2.20) (and there are many) may be applied to any specialized version of (3.2.20), including (3.2.2). We can reasonably expect that the special form of (3.2.2), as compared with (3.2.20), may result in those techniques applicable to (3.2.20) possessing extra properties or even a simpler form when applied to (3.2.2). Also, again because of the special form of (3.2.2), we can expect that there may be numerical procedures for solving (3.2.2) that are not derived from procedures for solving (3.2.20), but stand independently of these procedures. We shall first discuss procedures for solving (3.2.20), indicating for some of these procedures how they specialize when applied to (3.2.2). Then we shall discuss special procedures for solving (3.2.2).
Our discussion will be brief. Material on the solution of (3.2.2) and (3.2.20) with particular reference to network analysis may be found in [6-8]. A basic reference dealing with the numerical solution of differential equations is [9].
The solution of (3.2.20) commences in the selection of a step size h. This is the interval separating successive instants of time at which x(t) is computed; i.e., we compute x(h), x(2h), x(3h), ... given x(0). We shall have more to say about the selection of h subsequently. Let us denote x(nh) by xₙ, u(nh) by uₙ, and f(x(nh), u(nh), nh) by fₙ. Numerical solution of (3.2.20) proceeds by specifying an algorithm for obtaining xₙ in terms of xₙ₋₁, ..., xₙ₋ₖ, fₙ, ..., fₙ₋ₖ. The general form of the algorithm will normally be a recursive relation of the type
xₙ = a₁xₙ₋₁ + a₂xₙ₋₂ + ⋯ + aₖxₙ₋ₖ + h(b₀fₙ + b₁fₙ₋₁ + ⋯ + bₖfₙ₋ₖ)    (3.2.21)
where the aᵢ and bᵢ are certain numerical constants. We shall discuss some procedures for the selection of the aᵢ and bᵢ in the course of making several comments concerning (3.2.21).
1. Predictor and predictor-corrector formulas. If b₀ = 0, then fₙ is not required in (3.2.21), and xₙ depends only on xₙ₋₁, ..., xₙ₋ₖ and uₙ₋₁, ..., uₙ₋ₖ, both linearly and through f(·, ·, ·). The formula (3.2.21) is then called a predictor formula. But if b₀ ≠ 0, then the right-hand side of (3.2.21) depends on xₙ; the equation cannot then necessarily be explicitly solved for xₙ. But one may proceed as follows. First, replace fₙ = f(xₙ, uₙ, nh) by f(xₙ₋₁, uₙ, nh) and with this replacement evaluate the right-hand side of (3.2.21). Call the result x̂ₙ, since, in general, xₙ will not result. Second, replace fₙ = f(xₙ, uₙ, nh) by f(x̂ₙ, uₙ, nh) and with this replacement evaluate the right side of (3.2.21). One may now take this quantity as xₙ and think of it as a corrected predicted value of xₙ. [Alternatively, one may call the new right side x̃ₙ, replace fₙ by f(x̃ₙ, uₙ, nh), and reevaluate, etc.] The formula (3.2.21) with b₀ ≠ 0 is termed a predictor-corrector formula.
Example 3.2.3. A Taylor series expansion of (3.2.20) yields
xₙ = xₙ₋₁ + hfₙ₋₁    (3.2.22)
which is known as the Euler formula. It is a predictor formula. An approximation of
xₙ = xₙ₋₁ + ∫_{(n−1)h}^{nh} f(x, u, t) dt
by a trapezoidal integration formula yields
xₙ = xₙ₋₁ + (h/2)[f(xₙ₋₁, uₙ₋₁, (n − 1)h) + f(xₙ, uₙ, nh)]    (3.2.23)
which is a predictor-corrector formula. In practice, one could implement the slightly simpler formula
xₙ = xₙ₋₁ + (h/2)[f(xₙ₋₁, uₙ₋₁, (n − 1)h) + f(xₙ₋₁ + hfₙ₋₁, uₙ, nh)]    (3.2.24)
which is sometimes known as the Heun algorithm.
When specialized to the linear equation (3.2.2), Eq. (3.2.22) becomes
xₙ = xₙ₋₁ + hFxₙ₋₁ + hGuₙ₋₁ = (I + hF)xₙ₋₁ + hGuₙ₋₁    (3.2.25)
Equation (3.2.24) becomes
xₙ = (I + hF + ½h²F²)xₙ₋₁ + (h/2)G(uₙ₋₁ + uₙ) + (h²/2)FGuₙ₋₁    (3.2.26)
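In programming terms, the recursions (3.2.25) and (3.2.24) are one-line updates. A sketch in Python (numpy assumed; u is supplied as a function of time, and the function names are illustrative):

    import numpy as np

    def euler_step(F, G, u, x, n, h):
        # (3.2.25): x_n = (I + hF)x_{n-1} + hGu_{n-1}
        return x + h * (F @ x + G @ u((n - 1) * h))

    def heun_step(F, G, u, x, n, h):
        # (3.2.24) specialized to x' = Fx + Gu
        f1 = F @ x + G @ u((n - 1) * h)
        xp = x + h * f1                    # Euler predictor
        f2 = F @ xp + G @ u(n * h)         # corrected slope
        return x + 0.5 * h * (f1 + f2)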
2. Single-step and multistep formulas. If in the basic algorithm equation (3.2.21) we have k = 1, then xₙ depends on the values of xₙ₋₁, uₙ₋₁, and uₙ, but not, at least directly, on the earlier values xₙ₋₂, uₙ₋₂, xₙ₋₃, .... In this instance the algorithm is termed a single-step algorithm. If xₙ depends on values of xᵢ and uᵢ for i < n − 1, the algorithm is called a multistep algorithm. Single-step algorithms are self-starting, in the sense that if x(0) = x₀ is known, x₁, x₂, ... can be computed straightforwardly; in contrast, multistep algorithms are not self-starting. Knowing only x₀, there is no way of obtaining x₁ from the algorithm. Use of a multistep algorithm has generally to be preceded by use of a single-step algorithm to generate sufficient initial data for the multistep algorithm. Against this disadvantage, it should be clear that multistep algorithms are inherently more accurate (because at each step they use more data) than their single-step counterparts.
Example 3.2.4. (Adams predictor-corrector algorithm) Note: This algorithm uses a different equation for obtaining the corrected value of xₙ than for obtaining the predicted value; it is therefore a generalization of the predictor-corrector algorithms described. The algorithm is also a multistep algorithm. The predicted x̂ₙ is given by
x̂ₙ = xₙ₋₁ + (h/24)(55fₙ₋₁ − 59fₙ₋₂ + 37fₙ₋₃ − 9fₙ₋₄)
The corrected xₙ is given by
xₙ = xₙ₋₁ + (h/24)[9f(x̂ₙ, uₙ, nh) + 19fₙ₋₁ − 5fₙ₋₂ + fₙ₋₃]    (3.2.27)
Additional correction by repeated use of (3.2.27) may be employed. Of course, the algorithm must be started by a single-step algorithm. Derivation of the appropriate formulas for the linear equation (3.2.2) is requested in the problems.
3. Order of an algorithm. Suppose that in a single-step algorithm xₙ₋₁ agrees exactly with the solution of the differential equation (3.2.20) for t = (n − 1)h. In general, xₙ will not agree with the solution at time nh, but the difference, call it εₙ, will depend on the algorithm and on the step size h. If ‖εₙ‖ = O(h^{p+1}), the algorithm is said to be of order* p. The definition extends to multistep algorithms. High-order algorithms are therefore more accurate in going from step to step than are low-order ones.
Example 3.2.5. The Euler algorithm is of order 1, as is immediately checked. The Heun algorithm is of order 2; this is not difficult to check. An example of an algorithm of order p is obtained from a Taylor series expansion in (3.2.20):
xₙ = xₙ₋₁ + hẋₙ₋₁ + (h²/2!)ẍₙ₋₁ + ⋯ + (h^p/p!)xₙ₋₁^{(p)}    (3.2.28)
assuming that the derivatives exist. A popular fourth-order algorithm is the Runge-Kutta algorithm:
xₙ = xₙ₋₁ + hk̄ₙ₋₁    (3.2.29)
with
k̄ₙ₋₁ = (1/6)(k₁ + 2k₂ + 2k₃ + k₄)    (3.2.30)
and
k₁ = fₙ₋₁
k₂ = f(xₙ₋₁ + (h/2)k₁, u((n − 1)h + h/2), (n − 1)h + h/2)
k₃ = f(xₙ₋₁ + (h/2)k₂, u((n − 1)h + h/2), (n − 1)h + h/2)
k₄ = f(xₙ₋₁ + hk₃, u(nh), nh)
This algorithm may be interpreted in geometric terms. First, the value k₁ of f(x, u, t) at t = (n − 1)h is computed. Using this to estimate x at a time h/2 later, we compute k₂ = f(x, u, t) at t = (n − 1)h + h/2. Using this new value as an estimate of ẋ, we reestimate x at time (n − 1)h + (h/2) and compute k₃ = f(x, u, t) at this time. Finally, using this value as an estimate of ẋ, we compute an estimate of x(nh) and thus of k₄ = f(x, u, t) at t = nh. The various values of f(x, u, t) are then averaged, and this averaged value is used in (3.2.29). It is interesting to note that this algorithm reduces to Simpson's rule if f(x, u, t) is independent of x.
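The Runge-Kutta recursion is equally brief to program. A sketch in Python (f(x, u, t) and u(t) supplied by the user; the function name is illustrative):

    def rk4_step(f, u, x, n, h):
        # (3.2.29)-(3.2.30): one fourth-order Runge-Kutta step
        t0 = (n - 1) * h
        k1 = f(x, u(t0), t0)
        k2 = f(x + 0.5 * h * k1, u(t0 + 0.5 * h), t0 + 0.5 * h)
        k3 = f(x + 0.5 * h * k2, u(t0 + 0.5 * h), t0 + 0.5 * h)
        k4 = f(x + h * k3, u(t0 + h), t0 + h)
        return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)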
4. Stability of an algorithm. There are two kinds of errors that arise in applying the numerical algorithms: truncation error, which is due to the fact that any algorithm is an approximation, and roundoff error, due to rounding off of numbers arising in the calculations. An algorithm is said to be numerically stable if the errors do not build up as iterations proceed. Stability depends on the form of f(x, u, t) and the step size; many algorithms are stable only for small h.
*We write f(h) = O(h^p) when, as h → 0, f(h) → 0 at least as fast as h^p.
Example 3.2.6. In [8] the stability of the Euler algorithm applied to the linear equation (3.2.2) is discussed. The algorithm, as noted earlier, is
xₙ = (I + hF)xₙ₋₁ + hGuₙ₋₁    (3.2.31)
It is stable if the eigenvalues of F all possess negative real parts and if
h < minᵢ [−2 Re λᵢ / |λᵢ|²]
where λᵢ is an eigenvalue of F.
It would seem that, of those methods for solving (3.2.2) which are specializations of methods for solving (3.2.20), the Runge-Kutta algorithm has proved to be one of the most attractive. It is not clear from the existing literature how well the techniques above compare with special techniques for (3.2.2), two of which we now discuss.
The simple approximation of e^{Ft} by a finite number of terms in its series expansion has been suggested as one approach for avoiding the difficulties in computing e^{Ft}. Since the approximation by a fixed number of terms will be better the smaller the interval [0, T] over which it is used, one possibility is to approximate e^{Ft} over [0, h] and use an integration step size of h. Thus, following [8],
x((n + 1)h) = e^{Fh}x(nh) + ∫_{nh}^{(n+1)h} e^{F[(n+1)h−τ]}Gu(τ) dτ    (3.2.32)
If Simpson's rule is applied to the integral, one obtains
x((n + 1)h) ≈ e^{Fh}x(nh) + (h/6)[e^{Fh}Gu(nh) + 4e^{Fh/2}Gu(nh + h/2) + Gu((n + 1)h)]    (3.2.33)
The matrices e^{Fh} and e^{Fh/2} are approximated by polynomials in F obtained by power series truncation, and the standard form of algorithm is obtained.
A second suggestion, discussed in [6], makes use of the fact that the exponential of a diagonal matrix is easy to compute. Suppose that F = D + A, where D is diagonal. We may rewrite (3.2.2) as
ẋ = Dx + [Ax + Gu]    (3.2.34)
from which it follows that
x(t) = e^{Dt}x(0) + ∫₀ᵗ e^{D(t−τ)}[Ax(τ) + Gu(τ)] dτ
and
x((n + 1)h) = e^{Dh}x(nh) + ∫_{nh}^{(n+1)h} e^{D[(n+1)h−τ]}[Ax(τ) + Gu(τ)] dτ
Again, Simpson's rule (or some other such rule) may be applied to evaluate the integral. Of course, there is now no need to approximate e^{Dh}. A condition for stability is reported in [6] as
h maxᵢ |λᵢ(A)| < c
where c is a small constant and λᵢ(A) is an eigenvalue of A.
The homogeneous equation (3.2.4) has, as we know, the solution x(t) = e^{Ft}x₀ when x(0) = x₀. Suggestions for approximate evaluation of e^{Ft} have been many (see, e.g., [10-17]). The three most important ideas used in these various techniques are as follows (note that any one technique may use only one idea):
1. Approximation of e^{Ft} by a truncated power series.
2. Evaluation of e^{2^kFΔ} by evaluating e^{FΔ} for small Δ (with consequent early truncation of the power series), followed by formation of
e^{2^kFΔ} = (e^{FΔ})^{2^k}
3. Making use of the fact that, by the Cayley-Hamilton theorem, e^{Ft} may be written as
e^{Ft} = Σ_{i=0}^{n−1} αᵢ(t)Fⁱ
where F is n × n, and the αᵢ(·) are analytic functions for which power series are available (see Problem 3.2.7). In actual computation, each αᵢ(·) is approximated by truncating its power series.
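Ideas 1 and 2 combine naturally: approximate e^{FΔ} for small Δ by a truncated series, then square repeatedly. A sketch in Python (numpy assumed; the number of series terms and squarings are illustrative tuning parameters, not prescriptions from the text):

    import numpy as np

    def expm_approx(F, t, terms=8, k=10):
        A = F * (t / 2 ** k)               # small-argument matrix F*Delta
        n = F.shape[0]
        S = np.eye(n)
        term = np.eye(n)
        for j in range(1, terms + 1):      # truncated power series for e^{A}
            term = term @ A / j
            S = S + term
        for _ in range(k):                 # (e^{F Delta})^{2^k} by squaring
            S = S @ S
        return S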
Problem 3.2.1. Let F be a prescribed n × n matrix and X an unknown n × n matrix. Show that a solution (actually the unique solution) of
Ẋ = FX
with X(t₀) = X₀ is
X(t) = e^{F(t−t₀)}X₀
If matrices A and B, respectively n × n and m × m, are prescribed, and X is an unknown n × m matrix, show that a solution of
Ẋ = AX + XB
when X(t₀) = X₀ is the prescribed boundary condition is
X(t) = e^{A(t−t₀)}X₀e^{B(t−t₀)}
What is the solution of
Ẋ = AX + XB + C(t)
where C(t) is a prescribed n × m matrix?

Problem 3.2.2. Suppose that Jᵢ, as in (3.2.16), is an r × r matrix. Show that (3.2.19) holds.
Problem 3.2.3. (Sinusoidal response) Consider the state-space equations
ẋ = Fx + Gu
y = H'x + Ju
and suppose that for t ≥ 0, u(t) = u₀ cos (ωt + φ) for some constant u₀, φ, and ω, with jω not an eigenvalue of F. Show from the differential equation that
x(t) = x_c cos ωt + x_s sin ωt
is a solution if x(0) is appropriately chosen. Evaluate x_c and x_s and conclude that
y(t) = Re {[J + H'(jωI − F)^{-1}G]e^{jφ}}u₀ cos ωt − Im {[J + H'(jωI − F)^{-1}G]e^{jφ}}u₀ sin ωt
Show also that if all eigenvalues of F have negative real parts, so that the zero-input component of y(t) approaches zero as t → ∞, then the above formula for y(t) is asymptotically correct as t → ∞, irrespective of x(0).
Problem 3.2.4. Derive an Adams predictor-corrector formula for the integration of ẋ = Fx + Gu.

Problem 3.2.5. Derive Runge-Kutta formulas for the integration of ẋ = Fx + Gu.
Problem 3.2.6. Suppose that e^{Ft} is approximated by the first K + 1 terms of its Taylor series. Let R denote the remainder matrix; i.e.,
R = e^{Ft} − Σ_{k=0}^{K} (Ft)^k/k!
Let us introduce the constant ε defined by ⋯. Show that [10] ⋯. This error formula can be used in determining the point at which a power series should be truncated to achieve satisfactory accuracy.
Problem 3.2.7. The Cayley-Hamilton theorem says that if an n × n matrix A has characteristic polynomial sⁿ + aₙs^{n−1} + ⋯ + a₁, then Aⁿ + aₙA^{n−1} + ⋯ + a₁I = 0. Use this to show that there exist functions αᵢ(t), i = 0, 1, ..., n − 1, such that exp [At] = Σ_{i=0}^{n−1} αᵢ(t)Aⁱ.

3.3 PROPERTIES OF STATE-SPACE REALIZATIONS
In this section our aim is to describe the properties of complete
controllability and complete observability. This will enable statement of one
of the fundamental results of linear system theory concerning minimal realizations of a transfer-function matrix, and enable solution of the problem of
passing from a prescribed transfer-function matrix to a set of associated
state-space equations.
Complete Controllability
The term complete controllability refers to a condition on the matrices F and G of a state-space equation
ẋ = Fx + Gu    (3.3.1)
that guarantees the ability to get from one state to another with an appropriate control.
Complete Controllability Definition. The pair [F, G] is completely controllable if, given any x(t₀) and t₀, there exists t₁ > t₀ and a control u(·) defined over [t₀, t₁] such that if this control is applied to (3.3.1), assumed in state x(t₀) at t₀, then at time t₁ there results x(t₁) = 0.
We shall now give a number of equivalent formulations of the complete
controllability statement.
Theorem 3.3.1. The pair [F, G] is completely controllable if and only if any of the following conditions holds:
1. There does not exist a nonzero constant vector w such that w'e^{Ft}G = 0 for all t.
2. If F is n × n, rank [G FG F²G ⋯ F^{n−1}G] = n.
3. ∫_{t₀}^{t₁} e^{−Ft}GG'e^{−F't} dt is nonsingular for all t₀ and t₁, with t₁ > t₀.
4. Given t₀ and t₁, t₀ < t₁, and two states x(t₀) and x(t₁), there exists a control u(·) defined over [t₀, t₁] taking x(t₀) at time t₀ to x(t₁) at time t₁.
Proof. We shall prove that the definition implies 1, that 1 and 2 imply each other, that 1 implies 3, that 3 implies 4, and that 4 implies the definition.
Definition implies 1. Suppose that there exists a nonzero constant vector w such that w'e^{Ft}G = 0 for all t. We shall show that this contradicts the definition. Take x(t₀) = e^{−F(t₁−t₀)}w. By the definition, there exists u(·) such that
0 = e^{F(t₁−t₀)}x(t₀) + ∫_{t₀}^{t₁} e^{F(t₁−τ)}Gu(τ) dτ
Multiply on the left by w'. Then, since w'e^{Ft}G = 0 for all t, we obtain
0 = w'w
This is a contradiction.
1 and 2 imply each other. Suppose that w'e^{Ft}G = 0 for all t and some nonzero constant vector w. Then
(dⁱ/dtⁱ)(w'e^{Ft}G) = w'Fⁱe^{Ft}G = 0
for all t. Set t = 0 in these relations for i = 1, 2, ..., n − 1 to conclude that
w'[G FG ⋯ F^{n−1}G] = 0    (3.3.2)
Noting that the matrix [G FG ⋯ F^{n−1}G] has n rows, it follows that if this matrix has rank n, there cannot exist a nonzero w satisfying (3.3.2), and therefore there cannot exist a nonzero w such that w'e^{Ft}G = 0 for all t; i.e., 2 implies 1.
To prove the converse, suppose that 2 fails. We shall show that 1 fails, from which it follows that 1 implies 2. Accordingly, suppose that there exists a nonzero w such that (3.3.2) holds. By the Cayley-Hamilton theorem, Fⁿ, F^{n+1}, and higher powers of F are linear combinations of I, F, ..., F^{n−1}, and so
w'FⁱG = 0    for all i
Therefore, using the series-expansion definition of e^{Ft},
w'e^{Ft}G = 0    for all t
This says that 1 fails; so we have proved that failure of 2 implies failure of 1, or that 1 implies 2.
1 implies 3. We shall show that if 3 fails, then 1 fails. This is equivalent to showing that 1 implies 3. Accordingly, suppose that
[∫_{t₀}^{t₁} e^{−Ft}GG'e^{−F't} dt] w = 0
for some t₀, t₁, and nonzero constant w, with t₁ > t₀. Then
∫_{t₀}^{t₁} (w'e^{−Ft}G)(G'e^{−F't}w) dt = 0
The integrand is nonnegative; so
(w'e^{−Ft}G)(G'e^{−F't}w) = 0
for all t in [t₀, t₁], whence
w'e^{−Ft}G = 0
for all t in [t₀, t₁]. By the analyticity of w'e^{Ft}G as a function of t, it follows that w'e^{Ft}G = 0 for all t; i.e., 1 fails.
3 implies 4. What we have to show is that for given t₀, t₁, x(t₀), and x(t₁), with t₁ > t₀, there exists a control u(·) such that
x(t₁) = e^{F(t₁−t₀)}x(t₀) + ∫_{t₀}^{t₁} e^{F(t₁−τ)}Gu(τ) dτ
Observe that if we take
u(t) = G'e^{−F't}[∫_{t₀}^{t₁} e^{−Fσ}GG'e^{−F'σ} dσ]^{-1}[e^{−Ft₁}x(t₁) − e^{−Ft₀}x(t₀)]
then substitution in the right-hand side recovers x(t₁), as required.
4 implies the definition. This is trivial, since the definition is a special case of 4, corresponding to x(t₁) = 0. VVV
Property 2 of Theorem 3.3.1 offers what is probably the most efficient procedure for checking complete controllability.
Example 3.3.1. Suppose that
F = [0 1 0; 0 0 1; −1 −3 −3]  g = [0; 0; 1]
Then
[g Fg F²g] = [0 0 1; 0 1 −3; 1 −3 6]
This matrix has rank 3, and so [F, g] is completely controllable.
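The rank test is immediately programmable. A sketch in Python (numpy assumed; the function name is illustrative):

    import numpy as np

    def completely_controllable(F, G):
        # property 2 of Theorem 3.3.1: rank [G FG ... F^{n-1}G] = n
        n = F.shape[0]
        blocks, B = [G], G
        for _ in range(n - 1):
            B = F @ B
            blocks.append(B)
        return np.linalg.matrix_rank(np.hstack(blocks)) == n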
Two other important properties of complete controllability will now be presented. Proofs are requested in the problems.
Theorem 3.3.2. Suppose that [F, G] is completely controllable, with F n × n and G n × p. Then for any n × p matrix K, the pair [F + GK', G] is completely controllable.
In control-systems terminology, this says that complete controllability is invariant under state-variable feedback. The system
ẋ = (F + GK')x + Gu
may be derived from (3.3.1) by replacing u by u + K'x, i.e., by allowing the input to be composed of an external input and a quantity derived from the state vector.
Theorem 3.3.3. Suppose that [F, G] is completely controllable, and T is an n × n nonsingular matrix. Then [TFT^{-1}, TG] is completely controllable. Conversely, if [F, G] is not completely controllable, [TFT^{-1}, TG] is not completely controllable.
To place this theorem in perspective, consider the effect of the transformation
x̄ = Tx    (3.3.3)
on (3.3.1). Note that the transformation is invertible if T is nonsingular. It follows that
x̄̇ = Tẋ = TFx + TGu = TFT^{-1}x̄ + TGu    (3.3.4)
Because the transformation (3.3.3) is invertible, it follows that x and x̄ abstractly represent the same quantity, and that (3.3.1) and (3.3.4) are abstractly the same equation. The theorem guarantees that two versions of the same abstract equation inherit the same controllability properties.
The definition and property 4 of Theorem 3.3.1 suggest that if [F, G] is not completely controllable, some part of the state vector, or, more accurately, one or more linear functionals of it, is unaffected by the input. The next theorem pins this property down precisely. We state the theorem, then offer several remarks, and then we prove it.
Theorem 3.3.4. Suppose that in Eq. (3.3.1) the pair [F, G] is not completely controllable. Then there exists a nonsingular constant matrix T such that
TFT^{-1} = [F̂₁₁ F̂₁₂; 0 F̂₂₂]  TG = [Ĝ₁; 0]    (3.3.5)
with [F̂₁₁, Ĝ₁] completely controllable.
As we have remarked, under the transformation (3.3.3), Eq. (3.3.1) passes to
x̄̇ = TFT^{-1}x̄ + TGu    (3.3.6)
Let us partition x̄ conformably with the partitioning of TFT^{-1}. We derive, in obvious notation,
x̂̇₁ = F̂₁₁x̂₁ + F̂₁₂x̂₂ + Ĝ₁u
x̂̇₂ = F̂₂₂x̂₂    (3.3.7)
It is immediately clear from this equation that x̂₂ will be totally unaffected by the input or x̂₁(t₀); i.e., the x̂₂ part of x̄ is uncontrollable, in terms of both the definition and common sense. Aside from the term F̂₁₂x̂₂ in the differential equation for x̂₁, we see that x̂₁ satisfies a differential equation with the complete controllability property; i.e., the x̂₁ part of x̄ is completely controllable. The purpose of the theorem is thus to explicitly show what is controllable and what is not.
Proof. Suppose that F is n × n, and that [F, G] is not completely controllable. Let
rank [G FG ⋯ F^{n−1}G] = r < n    (3.3.8)
Notice that the space spanned by the columns of {G, FG, ..., F^{n−1}G} is an invariant F space; i.e., if any vector in the space is multiplied by F, then the resulting vector is in the space. This follows by noting that
FⁿG = Σ_{i=0}^{n−1} aᵢFⁱG
for some set of constants aᵢ, the existence of which is guaranteed by the Cayley-Hamilton theorem.
Define the matrix T as follows. The first r columns of T^{-1} are a basis for the space spanned by {G, FG, ..., F^{n−1}G}, while the remaining n − r columns are chosen so as to guarantee that T^{-1} is nonsingular. The existence of these column vectors is guaranteed by (3.3.8). Since G is in the space spanned by {G, FG, ..., F^{n−1}G}, and since this space is spanned by the first r columns of T^{-1}, it follows that
TG = [Ĝ₁; 0]    (3.3.9)
for some Ĝ₁ with r rows. The same argument shows that
T[G FG ⋯ F^{n−1}G] = [Ŝ₁; 0]
where Ŝ₁ has r rows. By (3.3.8), it follows that
Ŝ₁ has rank r.    (3.3.10)
Also, since the space is invariant under F, we must have
TF[G FG ⋯ F^{n−1}G] = [Ŝ₂; 0]
for some Ŝ₂ with r rows. This implies
TFT^{-1}[Ŝ₁; 0] = [Ŝ₂; 0]
Since Ŝ₁ has rank r, this implies that
TFT^{-1} = [F̂₁₁ F̂₁₂; 0 F̂₂₂]    (3.3.11)
where F̂₁₁ is r × r. Equations (3.3.9) and (3.3.11) jointly yield (3.3.5). It remains to be shown that [F̂₁₁, Ĝ₁] is completely controllable. To see this, notice that (3.3.9) and (3.3.11) imply that
T[G FG ⋯ F^{n−1}G] = [Ĝ₁ F̂₁₁Ĝ₁ ⋯ F̂₁₁^{n−1}Ĝ₁; 0 0 ⋯ 0]
From (3.3.10) and the fact that this matrix has rank r, it follows that
rank [Ĝ₁ F̂₁₁Ĝ₁ ⋯ F̂₁₁^{n−1}Ĝ₁] = rank Ŝ₁ = r
Now F̂₁₁ is r × r, and therefore F̂₁₁^r, F̂₁₁^{r+1}, ..., F̂₁₁^{n−1} depend linearly on I, F̂₁₁, ..., F̂₁₁^{r−1}, by the Cayley-Hamilton theorem. It follows that
rank [Ĝ₁ F̂₁₁Ĝ₁ ⋯ F̂₁₁^{r−1}Ĝ₁] = r
This establishes the complete controllability result. VVV
Example 3.3.2. Suppose that
F = ⋯  g = ⋯
Observe that
[g Fg F²g] = ⋯
which has rank 2, so that [F, g] is not completely controllable. We now seek an invertible transformation of the state vector that will highlight the uncontrollability in the sense of the preceding theorem.
We take for T^{-1} the matrix whose first two columns are a basis for the subspace spanned by g, Fg, and F²g, and whose third column is linearly independent of the first two. It follows then that
TFT^{-1} = [F̂₁₁ F̂₁₂; 0 F̂₂₂]  Tg = [ĝ₁; 0]
Observe that
F̂₁₁ = [0 −1; 1 −2]  ĝ₁ = [1; 0]
so that
[ĝ₁ F̂₁₁ĝ₁] = [1 0; 0 1]
which exhibits the complete controllability of [F̂₁₁, ĝ₁]. We can write new state-space equations as
x̂̇₁ = F̂₁₁x̂₁ + ĝ₁u  x̂̇₂ = F̂₂₂x̂₂
If, for the above example, we compute the transfer function from the original F, g, and h, we obtain 1/(s + 1)², which is also the transfer function associated with state equations from which the uncontrollable part has been removed:
ẋ = F̂₁₁x + ĝ₁u  y = ĥ₁'x
This is no accident, but a direct consequence of Theorem 3.3.4. The following theorem provides the formal result.
Theorem 3.3.5. Consider the state-space equations
ẋ = [F₁₁ F₁₂; 0 F₂₂]x + [G₁; 0]u
y = [H₁' H₂']x + Ju    (3.3.12)
Then the transfer-function matrix relating U(s) to Y(s) is J + H₁'(sI − F₁₁)^{-1}G₁.
The proof of this theorem is requested in the problems. In essence, the theorem says that the "uncontrollable part" of the state-space equations can be thrown away in determining the transfer-function matrix. In view of the latter being a description of input-output properties, this is hardly surprising, since the uncontrollable part of x is that part not connected to the input u.
Complete Observability
The term complete observability refers to a relationship existing between the state and the output of a set of state-space equations:
ẋ = Fx + Gu
y = H'x    (3.3.13)
Under (3.3.3), we obtain
x̄̇ = TFT^{-1}x̄ + TGu
y = [(T^{-1})'H]'x̄ = H'T^{-1}x̄    (3.3.14)
Thus transformations of the form (3.3.3) transform both state-space equations, according to the arrangement
F → TFT^{-1}  G → TG  H → (T^{-1})'H
In other words, if these replacements are made in a set of state-space equations, and if the initial condition vector is transformed according to (3.3.3), the input u(·) and output y(·) will be related in the same way. The only difference is that the variable x, which can be thought of as an intermediate variable arising in the course of computing y(·) from u(·), is changed.
This being the case, it should follow that Eq. (3.3.14) should define the same transfer-function matrix as (3.3.13). Indeed, this is so, since
H'T^{-1}(sI − TFT^{-1})^{-1}TG = H'T^{-1}[T(sI − F)T^{-1}]^{-1}TG = H'(sI − F)^{-1}G    (3.3.15)
Let us sum up these important facts in a theorem.
Theorem 3.3.9. Suppose that the triple {F, G, H} defines state-space equations in accord with (3.3.13). Then if T is any nonsingular matrix, {TFT^{-1}, TG, (T^{-1})'H} defines state-space equations in accord with (3.3.14), where the relation between x̄ and x is x̄ = Tx. The pairs [F, G] and [TFT^{-1}, TG] are simultaneously either completely controllable or they are not, and [F, H] and [TFT^{-1}, (T^{-1})'H] are simultaneously either completely observable or they are not. The variables u(·) and y(·) are related in the same way by both equation sets; in particular, the transfer-function matrices associated with both sets of equations are the same [see (3.3.15)].
Notice that by Theorems 3.3.3 and 3.3.8, the controllability and observability properties of (3.3.13) and (3.3.14) are the same.
Notice also that if the second equation of (3.3.13) were
y = H'x + Ju
then the above arguments would still carry through, the sole change being that the second equation of (3.3.14) would be
y = H'T^{-1}x̄ + Ju
The dual theorem corresponding to Theorem 3.3.4 is as follows:
Theorem 3.3.10. Suppose that in Eq. (3.3.13) the pair [F, H] is not completely observable. Then there exists a nonsingular matrix T such that
TFT^{-1} = [F̂₁₁ 0; F̂₂₁ F̂₂₂]  (T^{-1})'H = [Ĥ₁; 0]
with [F̂₁₁, Ĥ₁] completely observable. The matrix F̂₁₁ is r × r, where r = rank [H F'H ⋯ (F')^{n−1}H].
It is not hard to see the physical significance of this structure of TFT^{-1} and (T^{-1})'H. With x̄ as state vector, with obvious partitioning of x̄, and with
TG = [Ĝ₁; Ĝ₂]
we have
x̂̇₁ = F̂₁₁x̂₁ + Ĝ₁u
y = Ĥ₁'x̂₁
x̂̇₂ = F̂₂₁x̂₁ + F̂₂₂x̂₂ + Ĝ₂u
Observe that y depends directly on x̂₁ and not x̂₂, while the differential equation for x̂₁ is also independent of x̂₂. Thus y is not even indirectly dependent on x̂₂. In essence, the x̂₁ part of x̄ is observable and the x̂₂ part unobservable.
The dual of Theorem 3.3.5 is to the effect that the state-space equations
ẋ = [F₁₁ 0; F₂₁ F₂₂]x + [G₁; G₂]u
y = [H₁' 0]x
define the same transfer-function matrix as the equations
ẋ₁ = F₁₁x₁ + G₁u
y = H₁'x₁
In other words, we can eliminate the unobservable part of the state-space equations in computing a transfer-function matrix. Theorem 3.3.5 of course said that we could scrap the uncontrollable part. Now this elimination of the unobservable part and uncontrollable part requires the F, G, and H matrices to have a special structure; so we might ask whether any form of elimination is possible when F, G, and H do not have the special structure. Theorem 3.3.9 may be used to obtain a result for all triples {F, G, H}, rather than specially structured ones, the result being as follows:
Theorem 3.3.11. Consider the state-space equations (3.3.13), and suppose that complete controllability of [F, G] and complete observability of [F, H] do not simultaneously hold. Then there exists a set of state-space equations with a state vector of lower dimension than the state vector in (3.3.13), together with a constructive procedure for obtaining them, such that if these equations are
ẋ_M = F_Mx_M + G_Mu
y = H_M'x_M    (3.3.16)
then H'(sI − F)^{-1}G = H_M'(sI − F_M)^{-1}G_M, with [F_M, G_M] completely controllable and [F_M, H_M] completely observable.
Proof. Suppose that [F, G] is not completely controllable. Use Theorem 3.3.4 to find a coordinate-basis transformation yielding state-space equations of the form of (3.3.12). By Theorem 3.3.9, the transfer-function matrix of the second set of equations is still H'(sI − F)^{-1}G. By Theorem 3.3.5, we may drop the uncontrollable states without affecting the transfer-function matrix. A set of state-space equations results that now has the complete controllability property, the transfer-function matrix of which is known to be H'(sI − F)^{-1}G.
Theorem 3.3.10 and further application of Theorem 3.3.9 allow us to drop the unobservable states and derive observable state-space equations. Controllability is retained (a proof is requested in the problems) and the transfer-function matrix of each set of state-space equations arising is still H'(sI − F)^{-1}G.
If [F, G] is initially completely controllable, then we simply drop the unobservable states via Theorem 3.3.10. In any case, we obtain a set of state-space equations (3.3.16) that possesses simultaneously the complete controllability and observability properties, and with transfer-function matrix H'(sI − F)^{-1}G. VVV
Example 3.3.3. Consider the triple {F, g, h} described by the F and g of Example 3.3.2, together with
h = ⋯
As noted in Example 3.3.2, the pair [F, g] is not completely controllable. Moreover, the pair [F, h] is not completely observable, because
[h F'h (F')²h] = ⋯
which has rank 2. We shall derive a set of state-space equations that are completely controllable and observable and have as their transfer function h'(sI − F)^{-1}g, which may be computed to be −1/(s + 1).
First, we aim to get the state equation in a form that allows us to drop the uncontrollable part. This we did in Example 3.3.2, where we showed that with the T^{-1} used there we had
TFT^{-1} = [F̂₁₁ F̂₁₂; 0 F̂₂₂]  Tg = [ĝ₁; 0]
Also, it is easily found that
(T^{-1})'h = ⋯
Completely controllable state equations with transfer function h'(sI − F)^{-1}g are therefore provided by
ẋ = F̂₁₁x + ĝ₁u  y = ĥ₁'x
These equations are not completely observable because the "observability matrix"
[ĥ₁ F̂₁₁'ĥ₁]
does not have rank 2. Accordingly, we now seek to eliminate the unobservable part. The procedure now is to find a matrix T such that the columns of T' consist of a basis for the subspace spanned by the columns of the observability matrix, together with linearly independent columns yielding a nonsingular T'. Here, therefore,
T' = ⋯
will suffice. It follows that the new state-space equations are
x̄̇ = ⋯  y = [1 0]x̄
Eliminating the unobservable part, we obtain
ẋ = [−1]x + [−1]u
y = x
which has transfer function −1/(s + 1).
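The two reductions used in this example can be mechanized. The sketch below (Python with numpy assumed) restricts first to the controllable subspace and then to the observable one, each step preserving the transfer function; it uses orthonormal subspace bases from the singular value decomposition rather than the particular T^{-1} of the examples, and tol is an assumed rank tolerance.

    import numpy as np

    def basis(M, tol=1e-9):
        # orthonormal basis for the column space of M
        U, s, _ = np.linalg.svd(M)
        return U[:, :int(np.sum(s > tol))]

    def minimal_realization(F, G, H, tol=1e-9):
        n = F.shape[0]
        C = np.hstack([np.linalg.matrix_power(F, i) @ G for i in range(n)])
        T = basis(C, tol)                      # controllable subspace
        F1, G1, H1 = T.T @ F @ T, T.T @ G, T.T @ H
        m = F1.shape[0]
        O = np.hstack([np.linalg.matrix_power(F1.T, i) @ H1 for i in range(m)])
        S = basis(O, tol)                      # observable subspace
        return S.T @ F1 @ S, S.T @ G1, S.T @ H1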
Any triple {F, G, H} such that H'(sI − F)^{-1}G = W(s) for a prescribed transfer-function matrix W(s) is called a realization or state-space realization of W(s). If [F, G] and [F, H] are, respectively, completely controllable and completely observable, the realization is termed minimal, for reasons that will shortly become clear. The dimension of a realization is the dimension of the associated state vector.
Does an arbitrary real rational W(s) always possess a realization? If lim_{s→∞} W(s) = W(∞) is nonzero, then it is impossible for W(s) to equal H'(sI − F)^{-1}G for any triple {F, G, H}. However, if W(∞) is finite but W(s) is otherwise arbitrary, there are always quadruples {F, G, H, J}, as we shall show in Section 3.5, such that W(s) is precisely J + H'(sI − F)^{-1}G. We call such quadruples realizations and extend the use of the word minimal to such realizations. As we have noted earlier, the matrix J is uniquely determined, in contrast to F, G, and H, as W(∞).
Theorem 3.3.11 applies to realizations {F, G, H, J} or state-space equations
ẋ = Fx + Gu
y = H'x + Ju
whether or not J = 0. In other words, Theorem 3.3.11 says that any nonminimal realization, i.e., any realization where [F, G] is not completely controllable or [F, H] is not completely observable, can be replaced by a realization {F_M, G_M, H_M, J} that is minimal, the replacement preserving the associated transfer-function matrix; i.e.,
J + H'(sI − F)^{-1}G = J + H_M'(sI − F_M)^{-1}G_M
The appearance of J ≠ 0 is irrelevant in all the operations that generate F_M, G_M, and H_M from F, G, and H.
Problem 3.3.1. Prove Theorem 3.3.2. (Use the rank condition or the original definition.)

Problem 3.3.2. Prove Theorem 3.3.3. (Use the rank condition.)

Problem 3.3.3. Prove Theorem 3.3.5.

Problem 3.3.4. Consider the state-space equations (3.3.13), and suppose that
F = [F₁₁ 0; F₂₁ F₂₂]  G = [G₁; G₂]  H = [H₁; 0]
Suppose that F and G form a completely controllable pair, and that [F₁₁, H₁] is completely observable. Show that [F₁₁, G₁] is completely controllable. This result shows that elimination of the unobservable part of state-space equations preserves complete controllability.

Problem 3.3.5. Find a set of state-space equations with a one-dimensional state vector and with the same transfer function relating input and output as
⋯
3.4 MINIMAL REALIZATIONS
We recall that, given a rational transfer-function matrix W(s) with W(∞) finite, a realization of W(s) is any quadruple {F, G, H, J} such that
W(s) = J + H'(sI − F)^{-1}G    (3.4.1)
and a minimal realization is one with [F, G] completely controllable and [F, H] completely observable. With W(∞) = 0, we can think of triples {F, G, H} rather than quadruples "realizing" W(s).
We postpone until Section 3.5 a proof of the existence of realizations. Meanwhile, we wish to do two things. First, we shall establish additional meaning for the use of the word minimal by proving that there exist no realizations of a prescribed transfer function W(s) with state-space dimension less than that of any minimal realization, while all minimal realizations have the same dimension. Second, we shall explain how different minimal realizations are related.
To set the stage, we introduce the Markov matrices Aᵢ associated with a prescribed W(s), assumed to be such that W(∞) is finite. These matrices are defined as the coefficients in a matrix power series expansion of W(s) in powers of s^{-1}; thus
W(s) = A₋₁ + A₀s^{-1} + A₁s^{-2} + A₂s^{-3} + ⋯    (3.4.2)
Computation of the Aᵢ from W(s) is straightforward; one has A₋₁ = lim_{s→∞} W(s), A₀ = lim_{s→∞} s[W(s) − A₋₁], etc. Notice that the Aᵢ are defined independently of any realization of W(s). Nonetheless, they may be related to the matrices of a realization, as the following lemma shows.
Lemma 3.4.1. Let W(s) be a rational transfer-function matrix with W(∞) < ∞. Let {Aᵢ} be the Markov matrices and let {F, G, H, J} be a realization of W(s). Then A₋₁ = J and
Aᵢ = H'FⁱG    i = 0, 1, 2, ⋯    (3.4.3)
Proof. Let w(t) be the impulse response matrix associated with W(s). From (3.4.2) it follows that for t ≥ 0
w(t) = A₋₁δ(t) + A₀ + A₁t + A₂t²/2! + ⋯
Also, as we know,
w(t) = Jδ(t) + H'e^{Ft}G = Jδ(t) + H'G + H'FGt + H'F²Gt²/2! + ⋯
The result follows by equating the coefficients of the two power series. VVV
We can now establish the first main result of this section. Let us define the Hankel matrix of order r by
X_r = [A₀ A₁ ⋯ A_{r−1}; A₁ A₂ ⋯ A_r; ⋮ ; A_{r−1} A_r ⋯ A_{2r−2}]
Theorem 3.4.1. All minimal realizations of W(s) have the same dimension n_M, where n_M is rank X_n for any n ≥ n_M or, to avoid the implicit nature of the definition, for suitably large n. Moreover, no realization of W(s) has dimension less than n_M; i.e., minimal realizations are minimal dimension realizations.
Proof. For a prescribed realization {F, G, H, J}, define the matrices Wᵢ and Vᵢ for positive integer i by
Wᵢ = [G FG ⋯ F^{i−1}G]
Vᵢ = [H F'H ⋯ (F')^{i−1}H]
Let n be the dimension of the realization {F, G, H}; then, on using (3.4.3),
Vₙ'Wₙ = [H'G H'FG ⋯ ; H'FG H'F²G ⋯ ; ⋮] = Xₙ    (3.4.4)
If {F, G, H} is minimal and of dimension n_M, then, by definition, [F, G] is completely controllable. It therefore follows from an earlier theorem that W_{n_M} has rank n_M. A similar argument shows that V_{n_M} has rank n_M. But the dimensions of W_{n_M} and V_{n_M} are such that the product V_{n_M}'W_{n_M} has the same rank; that is,
n_M = rank X_{n_M}    (3.4.5)
But also, Vₙ and Wₙ for any n > n_M will have rank n_M; to see this, consider for example W_{n_M+1} = [W_{n_M} F^{n_M}G]. By the Cayley-Hamilton theorem, F^{n_M} is a linear combination of I, F, F², ..., F^{n_M−1}, so F^{n_M}G is a linear combination of G, FG, ..., F^{n_M−1}G. Hence rank W_{n_M+1} = rank W_{n_M}. The argument is obviously extendable to n = n_M + 2, n_M + 3, etc. Thus for all n > n_M, n_M ≥ rank (Vₙ'Wₙ) = rank Xₙ ≥ rank X_{n_M} = n_M, with the second inequality following because X_{n_M} is a submatrix of Xₙ. Evidently, then,
n_M = rank Xₙ = rank X_{n_M}    for all n ≥ n_M    (3.4.6)
Now suppose that n_M is not unique; i.e., there exist minimal realizations of dimensions n_{M1} and n_{M2}, with n_{M1} ≠ n_{M2}. We easily find a contradiction. Suppose without loss of generality that n_{M1} > n_{M2}. By the minimality of the realization of dimension n_{M1}, we have by (3.4.5)
rank X_{n_{M1}} = n_{M1}
Likewise, applying (3.4.6) to the minimal realization of dimension n_{M2}, and because n_{M1} > n_{M2},
rank X_{n_{M1}} = n_{M2}
The two expressions for rank X_{n_{M1}} are contradictory; hence minimal realizations have the same dimension.
To see that there do not exist realizations of lower dimension, suppose that {F, G, H} is a realization of dimension n and is not minimal, i.e., is either not completely controllable or not completely observable. Then, as we saw in the last section, there exists a realization of smaller dimension that is completely controllable and observable; in fact, we have a constructive procedure for obtaining such a realization. Such a realization, being minimal, has dimension n_M. Hence n_M < n, proving the theorem. VVV
Example 3.4.1. Consider the scalar transfer function
W(s) = 1/((s + 1)(s + 2))
Then
A₋₁ = lim_{s→∞} W(s) = 0
A₀ = lim_{s→∞} s[W(s) − A₋₁] = 0
A₁ = lim_{s→∞} s²[W(s) − A₋₁ − s^{-1}A₀] = 1
and, continuing in the same way, A₂ = −3, A₃ = 7, etc. It follows that
X₂ = [0 1; 1 −3]  X₃ = [0 1 −3; 1 −3 7; −3 7 −15]
and one can verify that
rank X₂ = rank X₃ = rank X₄ = ⋯ = 2
Therefore, all minimal realizations of W(s) have dimension 2.
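For a known realization, the Markov matrices, and hence the Hankel matrices, follow from (3.4.3), and the rank computation above can be checked numerically. A sketch in Python (numpy assumed), using the realization of this same transfer function given later in Example 3.5.1:

    import numpy as np

    F = np.array([[0.0, 1.0], [-2.0, -3.0]])
    G = np.array([[0.0], [1.0]])
    H = np.array([[1.0], [0.0]])

    def hankel(r):
        # X_r assembled from A_i = H'F^iG, i = 0, 1, ..., 2r - 2
        A = [H.T @ np.linalg.matrix_power(F, i) @ G for i in range(2 * r - 1)]
        return np.block([[A[i + j] for j in range(r)] for i in range(r)])

    print(np.linalg.matrix_rank(hankel(2)))   # 2
    print(np.linalg.matrix_rank(hankel(4)))   # still 2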
As the above example illustrates, the computation of the dimension of minimal realizations using Hankel matrices is not particularly efficient and suffers from the drastic disadvantage that, apparently, the ranks of an infinite number of matrices have to be examined. If W(s) is rational, this is not actually the case, as we shall see in later sections. We shall also note later some other procedures for identifying the dimension of a minimal realization of a prescribed W(s).
As background for the second main result of this section, we recall the following two facts, established in Theorem 3.3.9:
1. If {F, G, H, J} is a realization for a prescribed W(s), so is {TFT^{-1}, TG, (T^{-1})'H, J} for any nonsingular T.
2. If {F, G, H, J} is minimal, so is {TFT^{-1}, TG, (T^{-1})'H, J}, and conversely.
Evidently, we have a recipe for constructing an infinity of minimal realizations given one minimal realization. The natural question arises as to whether we can construct all minimal realizations by this technique. The answer is yes, but the result itself should not be thought of as trivial.
Theorem 3.4.2. Let {F₁, G₁, H₁, J₁} and {F₂, G₂, H₂, J₂} be two minimal realizations of a prescribed W(s). Then there exists a nonsingular T such that
F₂ = TF₁T^{-1}  G₂ = TG₁  H₂ = (T^{-1})'H₁    (3.4.7)
Proof. Suppose that F₁ and F₂ are n × n matrices. Define W_j and V_j for j = 1, 2 by
W_j = [G_j F_jG_j ⋯ F_j^{n−1}G_j]
V_j = [H_j F_j'H_j ⋯ (F_j')^{n−1}H_j]    (3.4.8)
We established in the course of proving Theorem 3.4.1 that
V₁'W₁ = V₂'W₂    (3.4.9)
where each side equals Xₙ, the appropriately defined Hankel matrix. Now V₁, W₁, V₂, and W₂ all possess rank n and have n rows. This means that we can write
(V₂V₂')^{-1}(V₂V₁')W₁ = W₂    (3.4.10)
and the inverse is guaranteed to exist. We claim that
T = (V₂V₂')^{-1}(V₂V₁')    (3.4.11)
carries one realization into the other. Note that T is invertible; otherwise W₂ and W₁ in (3.4.10) could not simultaneously possess rank n. Also, (3.4.10) and (3.4.11) imply that
TW₁ = W₂
That G₂ = TG₁ is immediate. We shall now show that F₂ = TF₁T^{-1}. As well as (3.4.9), it is easy to show that
V₁'F₁W₁ = V₂'F₂W₂
whence
TF₁W₁ = F₂W₂
Because TW₁ = W₂, we have
TF₁T^{-1}W₂ = F₂W₂
and the fact that W₂ has rank n yields the desired result. To show that H₂ = (T^{-1})'H₁, we have from (3.4.9) and the equation TW₁ = W₂ that
V₁'T^{-1}W₂ = V₂'W₂
and thus, using the fact that W₂ has rank n,
V₁'T^{-1} = V₂'
The result follows by using the formula (3.4.8). VVV
Notice that the proof of the above theorem contains a constructive procedure for obtaining T; Eqs. (3.4.8) and (3.4.11) define T in terms of F_j, G_j, and H_j for j = 1, 2.
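As a computational sketch of this procedure (Python with numpy assumed; both realizations are assumed minimal and of the same dimension n, and the function names are illustrative):

    import numpy as np

    def V(F, H):
        n = F.shape[0]
        return np.hstack([np.linalg.matrix_power(F.T, i) @ H for i in range(n)])

    def relating_T(F1, H1, F2, H2):
        # (3.4.11): T = (V2 V2')^{-1}(V2 V1')
        V1, V2 = V(F1, H1), V(F2, H2)
        return np.linalg.solve(V2 @ V2.T, V2 @ V1.T)

One may then confirm numerically that TF₁T^{-1} = F₂, TG₁ = G₂, and (T^{-1})'H₁ = H₂.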
Example 3.4.2. To illustrate the procedure for computing T, consider two minimal realizations {F₁, G₁, H₁} and {F₂, G₂, H₂} of the scalar transfer function 1/((s + 1)(s + 2)). Forming V₁ and V₂ as in (3.4.8) and T as in (3.4.11), one may verify that
TF₁T^{-1} = F₂  TG₁ = G₂  (T^{-1})'H₁ = H₂

Problem 3.4.1. Suppose that W(s) is a scalar transfer function with minimal realization {F, G, H, J}. Because W(s) is scalar, W(s) = W'(s). Show that {F', H, G, J} is a minimal realization of W(s), and show that the matrix T such that
F' = TFT^{-1}  H = TG  G = (T^{-1})'H
is symmetric.
Problem 3.4.2. Suppose that W(s) is a scalar transfer function with minimal realization {F, G, H, J}, and suppose that F is diagonal. The matrices G and H are vectors. Show that no entry of G or H is zero.
Problem 3.4.3. Suppose that W(s) is an m × p transfer-function matrix with minimal realization {F, G, H, J}. Show that if F is n × n, G has rank p, and H has rank m, then
rank [G FG ⋯ F^{n−p}G] = n
rank [H F'H ⋯ (F')^{n−m}H] = n
and that rank X_q = rank Xₙ, where q = ⋯
Problem 3.4.4. In the text a formula for the matrix T relating two minimal realizations {F_j, G_j, H_j}, j = 1, 2, of the same transfer-function matrix was given in terms of the matrices [H_j F_j'H_j ⋯ (F_j')^{n−1}H_j], where F_j is n × n. Obtain a formula in terms of the matrices [G_j F_jG_j ⋯ F_j^{n−1}G_j].
3.5 CONSTRUCTION OF STATE-SPACE EQUATIONS FROM A TRANSFER-FUNCTION MATRIX
In this section we examine the problem of passing from a prescribed rational transfer-function matrix W(s), with W(∞) < ∞, to a quadruple of constant matrices {F, G, H, J} such that
W(s) = J + H'(sI − F)^{-1}G    (3.5.1)
Of course, this is equivalent to the problem of finding state-space equations corresponding to a prescribed W(s).
In general, we shall only be interested in minimal realizations of a prescribed W(s), although the first results have to do with realizations that are completely controllable, but not necessarily completely observable, and vice versa. We follow these results by describing the Silverman-Ho algorithm; this is a technique for directly finding a minimal, or completely controllable and completely observable, realization of a transfer-function matrix W(s).
We note first an almost obvious point: in restricting ourselves to W(s) with W(∞) < ∞, we may as well restrict ourselves to the case when W(∞) = 0. For suppose that W(∞) is nonzero; as we know, the matrix J of any realization of W(s), minimal or otherwise, is precisely W(∞). Also,
V(s) = W(s) − W(∞)
has the property that V(∞) = 0, with V(s) rational. Evidently, if we can find a minimal realization {F, G, H} for V(s), it is immediate that {F, G, H, J} is a minimal realization of W(s).
Scalar W(s)
It turns out that there are rapid ways of computing realizations for a scalar W(s) that are always either completely controllable or completely observable. With a simple precaution, the realizations can be assured to be minimal also.
Suppose that W(s) is given as
W(s) = (bₙs^{n−1} + bₙ₋₁s^{n−2} + ⋯ + b₂s + b₁)/(sⁿ + aₙs^{n−1} + ⋯ + a₂s + a₁)    (3.5.2)
Notice that we are restricting attention to the case W(∞) = 0 without any real loss of generality, as we have noted. A completely controllable realization, often called the phase-variable realization, is described in the following theorem. The matrix F appearing in the theorem is termed in linear algebra a companion matrix, and the realization has sometimes been termed a companion-matrix realization.
Theorem 3.5.1. Let W(s) be given as in Eq. (3.5.2). Then a completely controllable realization {F, G, H} for W(s) is as follows:
F = [0 1 0 ⋯ 0; 0 0 1 ⋯ 0; ⋮ ; 0 0 0 ⋯ 1; −a₁ −a₂ −a₃ ⋯ −aₙ]
G = [0; 0; ⋮ ; 0; 1]  H = [b₁; b₂; ⋮ ; bₙ]    (3.5.3)
Moreover, this realization is also completely observable if and only if the polynomials sⁿ + aₙs^{n−1} + ⋯ + a₁ and bₙs^{n−1} + ⋯ + b₁ have no common factors.
Proof. The proof that {F, G, H} is a realization of W(s) divides into two main steps. First, we show that
|sI − F| = sⁿ + aₙs^{n−1} + ⋯ + a₂s + a₁    (3.5.4)
Then we show that
(sI − F)^{-1}G = (1/|sI − F|)[1; s; s²; ⋮ ; s^{n−1}]    (3.5.5)
That {F, G, H} is a realization is then immediate from (3.5.5) and the definition of H.
Let us prove (3.5.4). Assume that the result holds for F of dimension n = 1, 2, 3, ..., m − 1. We shall prove it must hold for n = m. Direct calculation establishes that it holds for n = 2, so the inductive proof will be complete. We have, on expanding by the first row,
|sI − F| = |s −1 0 ⋯ 0; 0 s −1 ⋯ 0; ⋮ ; a₁ a₂ a₃ ⋯ s + aₘ|
= s(s^{m−1} + aₘs^{m−2} + ⋯ + a₃s + a₂) + a₁
where we have used the inductive hypothesis. This equality is precisely (3.5.4).
Let us now prove (3.5.5). In view of the form of G, it follows that (sI − F)^{-1}G is a vector that is the last column of (sI − F)^{-1}. The Jacobi formula for computing the inverse of a matrix in terms of the minors of its cofactors then establishes that, for example, the first entry of (sI − F)^{-1}G is 1/|sI − F|, with a similar calculation holding for the other entries of (sI − F)^{-1}G. Equation (3.5.5) follows.
To complete the theorem proof, we must argue that [F, G] is completely controllable, and we must establish the complete observability condition. That [F, G] is completely controllable follows from the fact that [G FG ⋯ F^{n−1}G] is a matrix of the form
[0 0 ⋯ 0 1; 0 0 ⋯ 1 ×; ⋮ ; 1 × ⋯ × ×]
where × denotes an arbitrary element. (This is easy to check.) Clearly, rank [G FG ⋯ F^{n−1}G] = n.
Finally, we examine the observability condition. If (3.5.2) is such that the numerator and denominator polynomials have common factors, then these factors may be canceled. The resulting denominator polynomial will have degree n' < n, and, by what we have already proved, there will exist a realization of dimension n'. Hence n is not the minimal dimension of a realization, and thus the pair [F, H] in (3.5.3) cannot be completely observable. Thus, if [F, H] is completely observable, the numerator and denominator of W(s) as given in (3.5.2) have no common factors.
Now suppose that [F, H] is not completely observable. Then, as we have seen, there exists a minimal realization {F_M, G_M, H_M} constructible from {F, G, H} that will, of course, have lower dimension, and for which
W(s) = H_M'(sI − F_M)^{-1}G_M
The matrix (sI − F_M)^{-1} is expressible as a matrix of polynomials divided by |sI − F_M|, so H_M'(sI − F_M)^{-1}G_M will consist of a polynomial divided by the polynomial |sI − F_M|. This means that W(s) is a ratio of two polynomials, with the denominator polynomial of degree equal to the size of F_M, which is less than n. Consequently, the numerator and denominator of (3.5.2) must have common factors.
Since we have proved that lack of complete observability of [F, H] implies existence of common factors in (3.5.2), it follows that if the numerator and denominator polynomials in (3.5.2) have no common factors, then [F, H] is completely observable. The proof of the theorem is now complete. VVV
An important corollary follows immediately from the above theorem; this corollary relates the dimension of a minimal realization of a scalar W(s) to properties of W(s).
Corollary. Let W(s) be given as in (3.5.2), with no common factors between the numerator and denominator. Then the dimension of a minimal realization of W(s) is n.
Example 3.5.1. In an example in the last section we studied the transfer function
W(s) = 1/((s + 1)(s + 2))
and went to some pains to show that the dimension of minimal realizations is 2. This is immediate from the corollary. By Theorem 3.5.1, also, a minimal realization is provided by
F = [0 1; −2 −3]  g = [0; 1]  h = [1; 0]
It is easy to check that [F, h] is completely observable. On the other hand, if
W(s) = (s + 1)/((s + 1)(s + 2))
we could have the same F and g but would have h = [1; 1]. Then
[h F'h] = [1 −2; 1 −2]
and lack of complete observability is evident.
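Constructing the phase-variable realization from the coefficients of (3.5.2) is purely mechanical. A sketch in Python (numpy assumed; a = [a_1, ..., a_n] and b = [b_1, ..., b_n], and the function name is illustrative):

    import numpy as np

    def phase_variable(a, b):
        # the companion-matrix realization (3.5.3)
        n = len(a)
        F = np.zeros((n, n))
        F[:-1, 1:] = np.eye(n - 1)       # ones on the superdiagonal
        F[-1, :] = -np.asarray(a)        # last row: -a_1, ..., -a_n
        G = np.zeros((n, 1))
        G[-1, 0] = 1.0
        H = np.asarray(b, dtype=float).reshape(n, 1)
        return F, G, H

    # Example 3.5.1: W(s) = 1/((s + 1)(s + 2)), a = [2, 3], b = [1, 0]
    F, G, H = phase_variable([2.0, 3.0], [1.0, 0.0])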
In the problems of this section, proof of the following theorem is requested; the theorem is closely akin to, and can be proved rapidly by using, Theorem 3.5.1.
Theorem 3.5.2. Let W(s) be given as in Eq. (3.5.2). Then a completely observable realization {F, G, H} for W(s) is as follows:
F = [0 0 ⋯ 0 −a₁; 1 0 ⋯ 0 −a₂; 0 1 ⋯ 0 −a₃; ⋮ ; 0 0 ⋯ 1 −aₙ]
G = [b₁; b₂; ⋮ ; bₙ]  H = [0; 0; ⋮ ; 0; 1]    (3.5.6)
Moreover, this realization is completely controllable if and only if the polynomials sⁿ + aₙs^{n−1} + ⋯ + a₁ and bₙs^{n−1} + ⋯ + b₁ have no common factors.
Matrix W(s)
Now we turn to transfer-function matrices. First, we shall give matrix generalizations of Theorems 3.5.1 and 3.5.2, following the treatment of [18]. Then we shall indicate without proof one of a number of other procedures, the Silverman-Ho algorithm, for direct computation of a minimal realization. (The matrix generalizations of Theorems 3.5.1 and 3.5.2 lead, in the first instance, to completely controllable or completely observable realizations, but not necessarily minimal realizations.) For a treatment of the Silverman-Ho algorithm, including the proof, see [19]. For an example of another procedure for generating a minimal realization, see [20].
To derive a completely controllable realization of a rational m × p transfer-function matrix W(s) with W(∞) = 0, we proceed in the following way. Let p(s) be the least common denominator of the mp entries of W(s), with
p(s) = sⁿ + aₙs^{n−1} + ⋯ + a₂s + a₁
Because of the definition of p(s), the matrix p(s)W(s) will be a matrix of polynomials in s, and because W(∞) = 0, the highest degree polynomial occurring in p(s)W(s) has degree less than n. Therefore, there exist constant m × p matrices B₁, B₂, ..., Bₙ with
p(s)W(s) = Bₙs^{n−1} + Bₙ₋₁s^{n−2} + ⋯ + B₂s + B₁
Now we can state the main result.
Theorem 3.5.3. Let W(s) be a rational m x p transfer-function matrix with W(∞) = 0, and let p(s) and B_1, B_2, ..., B_n be constructed in the manner just described. Then a completely controllable realization for W(s) is provided by

F = [ 0_p        I_p          ...  0_p
      ...                          ...
      0_p        0_p          ...  I_p
     -a_n I_p   -a_{n-1} I_p  ... -a_1 I_p ]

G = [ 0_p  ...  0_p  I_p ]'        H' = [ B_n  B_{n-1}  ...  B_1 ]

and a completely observable realization is provided by the dual construction (F the transposed block-companion matrix built from I_m blocks, G the stack of B_n, B_{n-1}, ..., B_1, and H' = [0_m ... 0_m I_m]), where 0_q and I_q denote q x q zero and unit matrices, respectively.
The proof of this theorem is closely allied to the proof of the corresponding theorems for scalar W(s) and, accordingly, will be omitted. Proof is requested in the problems. Notice that, in contrast to the theorems for scalar transfer functions, there is no statement in Theorem 3.5.3 concerning conditions for complete observability of the completely controllable realization,
and complete controllability of the completely observable realization. Such a statement seems very difficult to give.
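As a concrete rendering of Theorem 3.5.3, the following sketch (ours; it assumes the block-companion layout written out above) assembles the completely controllable realization from a = [a_1, ..., a_n] and B = [B_1, ..., B_n]:

import numpy as np

def block_controllable_realization(a, B):
    # a: scalars a1, ..., an of p(s); B: list of m x p arrays B1, ..., Bn
    # with p(s)W(s) = B1 s^{n-1} + ... + Bn.
    n = len(a)
    m, p = B[0].shape
    F = np.zeros((n * p, n * p))
    for i in range(n - 1):                 # I_p blocks on the superdiagonal
        F[i * p:(i + 1) * p, (i + 1) * p:(i + 2) * p] = np.eye(p)
    for i in range(n):                     # last block row: -a_{n-i} I_p
        F[(n - 1) * p:, i * p:(i + 1) * p] = -a[n - 1 - i] * np.eye(p)
    G = np.zeros((n * p, p)); G[(n - 1) * p:, :] = np.eye(p)
    Hprime = np.hstack([B[n - 1 - i] for i in range(n)])   # [Bn, ..., B1]
    return F, G, Hprime.T

The dual construction yields the completely observable realization in the same way.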
We now ask the reader to note very carefully the following points:
1. Theorem 3.5.3 establishes the result that any rational W(s) with W(∞) < ∞ possesses a realization, in fact a completely controllable or completely observable one. This in itself is a nontrivial result and, of course, justifies much of the discussion hitherto, in the sense that were there no such result, much of our discussion would have been valueless.
2. The characteristic polynomial of the F matrix in the completely controllable realization is

det(sI - F) = [p(s)]^p

Therefore, any eigenvalue of F must also be a pole of some element of W(s), since p(s) is the least common denominator of the entries of W(s). It follows that any eigenvalue of the F matrix in any minimal realization of W(s) must be a pole of some element of W(s). (A proof is requested in the problems.) The converse is also true; i.e., any pole of any element of W(s) must be an eigenvalue of the F matrix in any minimal realization, in fact, in any realization at all. This is immediate from the formula W(s) = J + H'(sI - F)^{-1}G, which shows that any singularity of any element of W(s) must be a zero of det(sI - F).
3. Theorem 3.5.3 can be used to generate minimal realizations of a prescribed W(s) by employing it in the first step of a two-step procedure. From W(s) one first constructs a completely controllable or completely observable realization. Then, using the technique exhibited in an earlier section, we construct from either of these realizations a minimal realization. The existence of these techniques, when noted in conjunction with remark 1, also shows that any rational W(s) with W(∞) < ∞ possesses a minimal realization. (A computational sketch follows.)
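A sketch of remark 3's two-step procedure, assuming the third-party python-control package is available: W(s) = (s + 1)/(s^2 + 3s + 2) is realized in controllable companion form (nonminimal, because of the common factor s + 1), and the unobservable part is then stripped.

import numpy as np
import control

F = np.array([[0.0, 1.0], [-2.0, -3.0]])   # companion form for s^2 + 3s + 2
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 1.0]])                 # numerator s + 1: H' = [b2, b1]
sys = control.ss(F, G, H, 0.0)
sys_min = control.minreal(sys)             # removes the unobservable state
print(sys_min.A)                           # 1 x 1: W(s) = 1/(s + 2)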
Example 3.5.2. We provide a simple illustration of the preceding. Suppose that W(s) is a matrix whose entries include 1/(s + 1) and 2/(s + 1). Evidently, p(s) = s(s + 1), and there are two matrices B_1 and B_2. Following Theorem 3.5.3, we see that a completely controllable realization is provided by the block-companion construction.
Next, we check for complete observability by computing

[H  F'H  (F')^2 H  (F')^3 H]

Evidently, this matrix has rank 3; therefore, the realization just computed is not minimal, though a realization of dimension 3 will be minimal.
Accordingly, we seek a matrix T such that

TFT^{-1} = [ F̂_11  0          (T')^{-1}H = [ Ĥ_1
             F̂_21  F̂_22 ]                    0  ]

where F̂_11 is 3 x 3 and [F̂_11, Ĥ_1] is completely observable. If Ĝ_1 and Ĝ_2 are defined by

TG = [ Ĝ_1
       Ĝ_2 ]

it will then follow that [F̂_11, Ĝ_1] is completely controllable, and that {F̂_11, Ĝ_1, Ĥ_1} is a minimal realization of W(s).
We have discussed the computation of T for the dual problem of eliminating the uncontrollable part of a realization. It is easy to deduce from this procedure one applying here. We shall take for the first three columns of T' a basis for the space spanned by the columns of [H, F'H, (F')^2 H, (F')^3 H], and for the remaining column any independent vector; T then follows.
It follows that TFT^{-1}, TG, and (T')^{-1}H have the required partitioned form, and thus a minimal realization of W(s) is provided by the blocks F̂_11, Ĝ_1, and Ĥ_1. Of course, we could equally well have formed a completely observable realization of W(s) using Theorem 3.5.3 and then eliminated the uncontrollable part.
We shall now examine a procedure for obtaining a minimal realization
directly, i.e., without having to cast out the uncontrollable or unobservable
part of a realization obtained in an intermediate step.
Silverman-Ho Algorithm
Starting with an m x p rational matrix W(s) with W(∞) = 0, the algorithm proceeds by generating the Markov matrices {A_i} via

W(s) = A_0/s + A_1/s^2 + A_2/s^3 + ...

and the Hankel matrices X_n by

X_n = [ A_0      A_1      ...  A_{n-1}
        A_1      A_2      ...  A_n
        ...                    ...
        A_{n-1}  A_n      ...  A_{2n-2} ]
The rationality of W(s) is sufficient to guarantee existence of a realization, though, as we have noted, this is a nontrivial fact. Since realizations exist, so do minimal realizations. We recall that if n_0 is the dimension of a minimal realization, then rank X_n = rank X_{n_0} for all n ≥ n_0. Thus, knowing the X_i but not knowing n_0, we can examine rank X_1, rank X_2, rank X_3, and so on, and be assured that at some point the rank will cease to increase. In fact, one can show that if n_1 is the degree of the least common denominator of the entries of W(s), then n_0 ≤ n_1 min(m, p). Thus one can stop testing the ranks of the X_i at, say, i = n_1 min(m, p) in the knowledge that no further increase in the rank is possible.
Let r be the first integer for which rank X_r = rank X_n for all n ≥ r. Let rank X_r = n_0, which turns out to be the dimension of a minimal realization. The remaining steps in the algorithm are as follows (for a proof, see [19]):
1. Find nonsingular matrices P and Q such that

P X_r Q = [ I_{n_0}  0
            0        0 ]

(Standard procedures exist for this task. See, e.g., [4].)
2. Take

G  = n_0 x p top left corner of P X_r
H' = m x n_0 top left corner of X_r Q
F  = n_0 x n_0 top left corner of P(σX_r)Q

where

σX_r = [ A_1  A_2      ...  A_r
         A_2  A_3      ...  A_{r+1}
         ...                ...
         A_r  A_{r+1}  ...  A_{2r-1} ]
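The algorithm is easily mechanized. In the sketch below (ours), a singular value decomposition supplies the P and Q of step 1; only the first n_0 rows of P and columns of Q are kept, which is all that step 2 requires:

import numpy as np

def silverman_ho(A, r, tol=1e-10):
    # A: list of Markov matrices A0, A1, ..., A_{2r-1} (each m x p).
    X = np.block([[A[i + j] for j in range(r)] for i in range(r)])       # X_r
    sX = np.block([[A[i + j + 1] for j in range(r)] for i in range(r)])  # sigma X_r
    U, sv, Vt = np.linalg.svd(X)
    n0 = int(np.sum(sv > tol))                   # rank X_r = minimal dimension
    P = np.diag(sv[:n0] ** -0.5) @ U[:, :n0].T   # so that P X_r Q = I_{n0}
    Q = Vt[:n0, :].T @ np.diag(sv[:n0] ** -0.5)
    m, p = A[0].shape
    F = P @ sX @ Q                               # n0 x n0 corner of P(sigma X_r)Q
    G = (P @ X)[:, :p]                           # n0 x p corner of P X_r
    H = ((X @ Q)[:m, :]).T                       # (m x n0 corner of X_r Q)'
    return F, G, H

With A_i = H'F^i G taken from any realization, the routine returns a minimal triple reproducing the same Markov sequence.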
Example 3.5.3. Suppose that a particular W(s) with W(∞) = 0 is prescribed. It is easy to calculate the Markov matrices; for instance, A_0 = lim_{s→∞} sW(s) and

A_2 = lim_{s→∞} s^2 [sW(s) - A_0 - A_1/s]
Furthermore, the remaining Markov matrices recur in a simple pattern, so the Hankel matrices are easily written down, and

rank X_i = 2

for all i. In terms of our earlier description of the algorithm, r = 1, and rank X_1 = n_0 = 2. [Note that the bound n_0 ≤ n_1 min(m, p) yields here that n_0 ≤ 2, so that the only matrices that need be formed are X_1 and X_2.] The next task is to find matrices P and Q so that P X_1 Q has the required form. Such are easy to find. In accordance with the algorithm, F, G, and H then follow, and it is quickly verified that H'(sI - F)^{-1}G is the prescribed W(s), that [F, G] is completely controllable, and [F, H] completely observable.
Problem 3.5.1. Prove Theorem 3.5.2.

Problem 3.5.2. Suppose that G is an n vector and F an n x n matrix with [F, G] completely controllable. Define T by
where |sI - F| = s^n + a_1 s^{n-1} + ... + a_n. Show that TFT^{-1} and TG have the form of the phase-variable realization of Theorem 3.5.1.
Problem 3.5.3. Construct two minimal realizations of a prescribed W(s), first by constructing a completely controllable realization, and then by constructing first a completely observable realization.

Problem 3.5.4. Prove Theorem 3.5.3.
Problem 3.5.5. Using the fact that the eigenvalues of F are the same as those of TFT^{-1}, and using the theorem dealing with the elimination of uncontrollable states, show that any eigenvalue of the F matrix in any minimal realization of a transfer-function matrix is also a pole of some element of that transfer-function matrix. [Hint: See remarks following Theorem 3.5.3.]

Problem 3.5.6. Using the Silverman-Ho algorithm, find a minimal realization of a prescribed W(s).
3.6 DEGREE OF A RATIONAL TRANSFER FUNCTION MATRIX
In this section we shall discuss the concept of the degree of a rational transfer-function matrix W(s). If W(s) has the property W(∞) < ∞, then the degree of W(s), written δ[W(s)] or δ[W] for short, is defined simply as the dimension of a minimal realization of W(s); we shall, however, extend the definition of degree to consider those W(s) for which W(∞) < ∞ fails. We have already explained how the degree of a W(s) with W(∞) < ∞ may be determined by examining the rank of Hankel matrices derived from the
Markov coefficients of W(s). We shall, however, state a number of other properties of the degree concept, some of which offer alternative procedures for the computation of degree. A number of these will not be proved here, though they will be used in the sequel. We shall note the appropriate references (where proofs may be found) as we state each result.
The history of the degree concept is interesting; the concept originated in a paper by McMillan [21] dealing with a network-theory problem. McMillan posed the following question: given an impedance matrix Z(s) of a network for which it is known that a passive synthesis exists using resistors, inductors, capacitors, transformers, and gyrators, what is the minimum number of energy storage elements, i.e., inductors and capacitors, required in a synthesis of Z(s)? McMillan termed this number the degree of Z(s) and gave rules for its computation as well as various properties. In [20] this network-theory-based definition was shown to be identical with a definition of degree as the dimension of a minimal realization of Z(s), in case Z(∞) < ∞. This was an intuitively appealing result: in almost all cases, as we shall see, state-space equations of a network can be computed with the state vector entries being either a capacitor voltage or inductor current. Thus the result of [20] showed that, despite the evident constraints placed by passivity of a network on state-space equations derived from that network, minimal-dimension state-space equations could still be obtained.
Other network-theoretic formulations of the degree concept have been given by Tellegen [22] and Duffin and Hazony [23]. Connection between these formulations and the definition of degree in terms of the dimension of a state-space realization appears also in [20].
We proceed now with a definition of the degree concept and a statement of
some of its properties.
Definition of Degree. Let W(s) be a matrix of real rational functions of s. If W(∞) < ∞, the degree of W(s), δ[W(s)] or δ[W], is defined as the dimension of a minimal realization of W(s). If one or more elements of W(s) has a pole at s = ∞, write

W(s) = W_-(s) + W_1 s + W_2 s^2 + ... + W_r s^r    (3.6.1)

where W_-(s) has W_-(∞) < ∞ and W_1, W_2, ..., W_r are constant matrices. Set

V(s) = W_1 s + W_2 s^2 + ... + W_r s^r    (3.6.2)

Then δ[W] is defined by

δ[W] = δ[W_-] + δ[V]    (3.6.3)
Though transfer-function matrices of the form of (3.6.1) with W_1, W_2, ... nonzero occur very infrequently, it is worth noting that the impedance Z(s) of a one-port passive network may well be of the form

Z(s) = sL + Z_-(s)    (3.6.4)

if the network comprises an inductance L in series with another one-port network. In this case, the degree definition yields

δ[Z] = 1 + δ[Z_-]

More generally, the impedance matrix of an m-port may be of the form of (3.6.4), where L is a matrix. Then we obtain δ[Z] = rank L + δ[Z_-]. This depends on the result that δ[sL] = rank L, a proof of which is requested in the problems.

Note also from the above definition that if p(s) is a scalar polynomial in s, δ[p(s)] coincides with the usual polynomial degree.

We shall now prove a number of properties.
Property 1. If M and N are constant matrices,

δ[MW(s)N] ≤ δ[W]

Proof. For simplicity, assume that W(∞) < ∞. Property 1 follows by observing that if {F, G, H, J} is a minimal realization of W(s), then {F, GN, HM', MJN} is a realization of MW(s)N of dimension δ[W], though not necessarily a minimal one.
VVV
As a special case of property 1, we have

Property 2. Let W_1(s) be a submatrix of W(s). Then δ[W_1] ≤ δ[W].
Next, we have

Property 3.

|δ[W_1] - δ[W_2]| ≤ δ[W_1 + W_2] ≤ δ[W_1] + δ[W_2]    (3.6.8)

Proof. For simplicity, assume that W_1(∞) < ∞ and W_2(∞) < ∞. The right-hand inequality in property 3 follows by noting that if we have separate minimal realizations of W_1 and W_2, simple 'paralleling' of inputs and outputs of these two realizations will provide a realization of W_1 + W_2, but this realization will not necessarily be minimal. More precisely, if {F_i, G_i, H_i, J_i} is a minimal realization for W_i(s), i = 1, 2, a realization of W(s) = W_1(s) + W_2(s) is provided by

F = [ F_1  0        G = [ G_1      H = [ H_1      J = J_1 + J_2    (3.6.9)
      0    F_2 ]          G_2 ]          H_2 ]

Proof of the left-hand inequality in (3.6.8) is requested in the problems.
VVV
Example 3.6.1. If W_1(s) = 1/(s + 1), W_2(s) = 1/(s + 2), then δ[W_1] = δ[W_2] = 1 and δ[W_1 + W_2] = 2. If W_1(s) = W_2(s) = 1/(s + 1), then δ[W_1 + W_2] = 1, and if W_1(s) = -W_2(s) = 1/(s + 1), then δ[W_1 + W_2] = 0. This example shows that neither inequality in (3.6.8) need be satisfied with equality, but either may be.
An important special case of property 3, permitting replacement of the right-hand inequality in (3.6.8) by an equality, is as follows.

Property 4. Suppose that the set of poles of elements of W_1 is disjoint from the set of poles of elements of W_2. Then

δ[W_1 + W_2] = δ[W_1] + δ[W_2]
In proving this property we shall make use of a generalization of the well-known fact that if

Σ_{i=1}^{n} a_i e^{b_i t} ≡ 0

for arbitrary a_i and b_i (real or complex), with b_i ≠ b_j for i ≠ j, then a_i = 0 for all i. The generalization we shall use is an obvious one and is as follows: if

w_1' e^{F_1 t} G_1 + w_2' e^{F_2 t} G_2 ≡ 0

for some vectors w_1 and w_2 and matrices F_1, G_1, F_2, and G_2, with no eigenvalue of F_1 the same as an eigenvalue of F_2, then

w_1' e^{F_1 t} G_1 = w_2' e^{F_2 t} G_2 = 0
Proof. Assume for simplicity that W_1(∞) < ∞ and W_2(∞) < ∞. Suppose that {F_i, G_i, H_i, J_i}, for i = 1, 2, define minimal realizations of W_1(s) and W_2(s). Let {F, G, H, J} as given in (3.6.9) be a realization for W(s) = W_1(s) + W_2(s). Suppose that [F, G] is not completely controllable. Then there exists a constant nonzero vector w such that w' e^{Ft} G ≡ 0. Partitioning w conformably with F, it follows that

w_1' e^{F_1 t} G_1 + w_2' e^{F_2 t} G_2 ≡ 0

Now because {F_i, G_i, H_i, J_i} is a minimal realization for W_i(s), the set of eigenvalues of F_i is contained in the set of poles of W_i(s), as we showed in an earlier section. Therefore, the set of eigenvalues of F_1 is disjoint from the set of eigenvalues of F_2, and

w_1' e^{F_1 t} G_1 = w_2' e^{F_2 t} G_2 = 0

Because w is nonzero, at least one of w_1 and w_2 is nonzero. The complete controllability of [F_i, G_i] is then contradicted. So our assumption that [F, G] is not controllable is untenable; similarly, [F, H] is completely observable, and the result follows.
VVV
An interesting application of this property is as follows. Suppose that a prescribed W(s) is written as W(s) = W_-(s) + W_1 s + W_2 s^2 + ... + W_r s^r for constant matrices W_i, i ≥ 1. We can speak of a generalized partial fraction expansion of W(s) when we mean a decomposition of W(s) as

W(s) = Σ_i V_i(s) + V_∞(s)

Let

V_i(s) = A_{i1}/(s - s_i) + ... + A_{i n_i}/(s - s_i)^{n_i}

and

V_∞(s) = W_1 s + ... + W_r s^r

We adopt the notation δ[W; s_i] to denote δ[V_i], the degree of that part of W(s) with a pole at s_i. It is immediate then from property 4 that

Property 5.

δ[W] = Σ_i δ[W; s_i]    (3.6.10)

This property says that if a partial fraction expansion of W(s) is known, the degree of W(s) may be computed by computing the degree of certain summands in the partial fraction expansion.
Example 3.6.2. Suppose that W(s) is given as a sum of two summands, each with a single pole. It is almost immediately obvious that the degree of each summand is one. For example, a realization of the first summand is

F = [-2]    G = [1  0]    H = [2  1]

and it is clearly minimal. It follows that δ[W] = 2.
The next property has to do with products, rather than sums, of transfer-function matrices.

Property 6.

δ[W_1 W_2] ≤ δ[W_1] + δ[W_2]
Proof. Assume for simplicity that W_1(∞) < ∞ and W_2(∞) < ∞. The property follows by observing that if we have separate minimal realizations for W_1 and W_2, a cascade or series connection of the two realizations will yield a realization of W_1 W_2, but this realization will not necessarily be minimal. More formally, suppose that W_i(s) has minimal realization {F_i, G_i, H_i, J_i}, for i = 1, 2. Then it is readily verified that a set of state-space equations with transfer-function matrix W_1(s)W_2(s) is provided by

F = [ F_1  G_1 H_2'      G = [ G_1 J_2      H' = [ H_1'  J_1 H_2' ]    J = J_1 J_2
      0    F_2     ]           G_2     ]

VVV
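A numerical rendering (ours) of this cascade construction, checked on transfer functions like those of the example that follows:

import numpy as np

def cascade(F1, G1, H1, J1, F2, G2, H2, J2):
    # Realization of W1(s)W2(s): the W2 subsystem is driven by u and its
    # output feeds the W1 subsystem.
    n2 = F2.shape[0]
    F = np.block([[F1, G1 @ H2.T], [np.zeros((n2, F1.shape[0])), F2]])
    G = np.vstack([G1 @ J2, G2])
    H = np.vstack([H1, H2 @ J1.T])      # H' = [H1'  J1 H2']
    J = J1 @ J2
    return F, G, H, J

# W1(s) = 1/(s + 1), W2(s) = (s + 1)/(s + 2): the cascade has dimension 2,
# but delta[W1 W2] = delta[1/(s + 2)] = 1.
one = lambda v: np.array([[v]])
F, G, H, J = cascade(one(-1.0), one(1.0), one(1.0), one(0.0),
                     one(-2.0), one(1.0), one(-1.0), one(1.0))
s = 2.0
print(J + H.T @ np.linalg.solve(s * np.eye(2) - F, G))   # 0.25 = 1/(s + 2)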
Example 3.6.3. If W_1(s) = 1/(s + 1) and W_2(s) = 1/(s + 2), then δ[W_1 W_2] = 2 = δ[W_1] + δ[W_2]. But if W_1(s) = 1/(s + 1) and W_2(s) = (s + 1)/(s + 2), then δ[W_1 W_2] = 1.
For the case of a square W(s) with W(∞) nonsingular, we can easily prove the following property. Actually, the property holds irrespective of the nonsingularity of W(∞).
Property 7. If W(s) is square and nonsingular for almost all s,

δ[W^{-1}] = δ[W]

Proof. To prove this property for the case when W(∞) is nonsingular, suppose that {F, G, H, J} is a minimal realization for W(s). Then a realization for W^{-1}(s) is provided by {F - GJ^{-1}H', GJ^{-1}, -H(J^{-1})', J^{-1}}. To see this, observe that

[J + H'(sI - F)^{-1}G][J^{-1} - J^{-1}H'(sI - F + GJ^{-1}H')^{-1}GJ^{-1}]
    = I + H'(sI - F)^{-1}GJ^{-1} - [I + H'(sI - F)^{-1}GJ^{-1}]H'(sI - F + GJ^{-1}H')^{-1}GJ^{-1}

The last term on the right side is

H'(sI - F)^{-1}[(sI - F) + GJ^{-1}H'](sI - F + GJ^{-1}H')^{-1}GJ^{-1} = H'(sI - F)^{-1}GJ^{-1}

and we then have that the product is I. This proves that {F - GJ^{-1}H', GJ^{-1}, -H(J^{-1})', J^{-1}} is a realization of W^{-1}(s).

Since [F, G] is completely controllable, [F - GJ^{-1}H', G] and [F - GJ^{-1}H', GJ^{-1}] are completely controllable, the first by a result proved in an earlier section of this chapter, the second by a trivial consequence of the rank condition. Similarly, the fact that [F - GJ^{-1}H', -H(J^{-1})'] is completely observable follows from the complete observability of [F, H]. Thus the realization {F - GJ^{-1}H', GJ^{-1}, -H(J^{-1})', J^{-1}} of W^{-1}(s) is minimal, and property 7 is established, at least when W(∞) is nonsingular.
VVV
An interesting consequence of property 7, of network-theoretic importance, is that the scattering matrix and any immittance or hybrid matrix of a network have the same degree. For example, to show that the impedance matrix Z(s) and scattering matrix S(s) have the same degree, we have that

S(s) = (Z - I)(Z + I)^{-1} = I - 2(I + Z)^{-1}

Then δ[S(s)] = δ[(I + Z)^{-1}] = δ[(I + Z)] = δ[Z(s)].
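The realization of W^{-1}(s) used in the proof is easily verified numerically; the sketch below (ours) checks the identity at a sample frequency for random data:

import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
F = rng.standard_normal((n, n)); G = rng.standard_normal((n, m))
H = rng.standard_normal((n, m)); J = np.eye(m) + 0.1 * rng.standard_normal((m, m))

Jinv = np.linalg.inv(J)
Fi = F - G @ Jinv @ H.T                   # F - G J^{-1} H'
Gi = G @ Jinv
Hi = -H @ Jinv.T                          # -H (J^{-1})'

s = 1.5 + 2.0j
W = J + H.T @ np.linalg.solve(s * np.eye(n) - F, G)
Winv = Jinv + Hi.T @ np.linalg.solve(s * np.eye(n) - Fi, Gi)
print(np.allclose(W @ Winv, np.eye(m)))   # True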
Next, we wish to give two other characterizations of the degree concept. The first consists of the original scheme offered in [21] for computing degree and requires us to understand the notion of the Smith canonical form of a matrix of polynomials.
Smith canonical form [4]. Suppose that V(s) is an m x p matrix of real polynomials in s with rank r. Then there exists a representation of V(s), termed the Smith canonical form, as a product

V(s) = P(s)Γ(s)Q(s)    (3.6.15)

where P(s) is m x m, Γ(s) is m x p, and Q(s) is p x p. Further,
1. P(s) and Q(s) possess constant nonzero determinants.
2. Γ(s) is of the form diag[γ_1(s), γ_2(s), ..., γ_r(s), 0, ..., 0], bordered by zero rows and columns as necessary to make it m x p.
3. The γ_i(s) are uniquely determined monic polynomials* with the property that γ_i(s) divides γ_{i+1}(s) for i = 1, 2, ..., r - 1; the polynomial γ_i(s) is the ratio of the greatest common divisor of all i x i minors of V(s) to the greatest common divisor of all (i - 1) x (i - 1) minors of V(s). By convention, γ_1(s) is the greatest common divisor of all entries of V(s).
Techniques for computing P(s), Γ(s), and Q(s) may be found in [4]. Notice
*A monic polynomial in s is one in which the coefficient of the highest power of s is one.
that Γ(s) alone is claimed as being unique; there are always an infinity of P(s) and Q(s) that will serve in (3.6.15).
In [21] the definition of the Smith canonical form was extended to real rational matrices W(s) in the following way.

Smith-McMillan canonical form. Let W(s) be an m x p matrix of real rational functions of s, and let p(s) be the least common denominator of the entries of W(s). Then V(s) = p(s)W(s) is a polynomial matrix and has a Smith canonical form representation P(s)Γ(s)Q(s), as described above. With γ_i(s) as defined above, define ε_i(s) and ψ_i(s) by

ε_i(s)/ψ_i(s) = γ_i(s)/p(s)

where ε_i(s) and ψ_i(s) have no common factors and the ψ_i(s) are monic polynomials. Define also

E(s) = diag[ε_1(s), ..., ε_r(s), 0, ..., 0]    Ψ(s) = diag[ψ_1^{-1}(s), ..., ψ_r^{-1}(s), 1, ..., 1]

Then there exists a representation of W(s) (the Smith-McMillan canonical form) as

W(s) = P(s)E(s)Ψ(s)Q(s)

where P(s) and Q(s) have the properties given in the description of the Smith canonical form, E(s) and Ψ(s) are unique, ε_i(s) divides ε_{i+1}(s) for i = 1, 2, ..., r - 1, and ψ_{i+1}(s) divides ψ_i(s) for i = 1, 2, ..., r - 1.
Finally, we have the following connection with δ[W(s)], proved in [20].

Property 8. Let W(∞) < ∞ and let ψ_1(s), ψ_2(s), ..., ψ_r(s) be the polynomials associated with W(s) in the way described in the Smith-McMillan canonical form statement. Then

δ[W] = Σ_{i=1}^{r} δ[ψ_i(s)]    (3.6.20)

[Recall that δ[ψ_i(s)] is the usual degree of the polynomial ψ_i(s).]
In case the inequality W(∞) < ∞ fails, one can write down a corresponding expansion taking account of the pole at s = ∞.
From this result, it is not hard to show that if W is invertible, δ[W^{-1}] = δ[W] even if W(∞) is not finite and nonsingular. (See Problem 3.6.6.)
Notice that the right-hand side of (3.6.20) may be rewritten as δ[Π_i ψ_i(s)], and that δ[W(s)] may therefore be computed if we know only ψ(s) = Π_i ψ_i(s), rather than the individual ψ_i(s). A characterization of ψ(s) is available as follows.

Property 9. With W(∞) < ∞, and with ψ(s) = Π_i ψ_i(s) and the ψ_i(s) as above,

δ[W] = δ[ψ(s)]

and ψ(s) is the least common denominator of all ρ x ρ minors of W(s), where ρ takes on all values from 1 to min(m, p).

For a proof of this result, see [20]; a proof is also requested in the problems, the result following from the definitions of the Smith and Smith-McMillan canonical forms together with property 8.
Example 3.6.4. Consider a rational matrix W(s) possessing three 2 x 2 minors. The least common denominator of all 1 x 1 minors is (s + 1)(s + 2). Examining also the denominators of the three 2 x 2 minors of W(s), the least common denominator of all minors of W(s) is found to be (s + 1)^2(s + 2)^2. Accordingly, δ[W(s)] = 4.
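The property 9 recipe is directly mechanizable with a computer algebra system. The following sketch (ours; the matrix is an illustration of our own, not the W(s) of the example) forms every minor and the least common denominator:

import itertools
from functools import reduce
import sympy as sp

s = sp.symbols('s')
W = sp.Matrix([[1/(s + 1), 1/((s + 1)*(s + 2))],
               [0,         1/(s + 2)]])

denoms = []
m, p = W.shape
for rho in range(1, min(m, p) + 1):
    for rows in itertools.combinations(range(m), rho):
        for cols in itertools.combinations(range(p), rho):
            minor = sp.cancel(W[list(rows), list(cols)].det())
            denoms.append(sp.denom(sp.together(minor)))

psi = reduce(sp.lcm, denoms)     # least common denominator of all minors
print(sp.degree(psi, s))         # 2 = delta[W] for this W(s)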
There is a close tie between property 9 and property 5, where we showed that

δ[W(s)] = Σ_{s_i} δ[W; s_i]    (3.6.10)

Recall that the summation is over the poles of entries of W(s), and δ[W; s_i] is the degree of the sum V_i(s) of those terms in the partial fraction expansion of W(s) with a singularity at s = s_i.

The least common denominator of all ρ x ρ minors of V_i(s) will be a power of s - s_i, and therefore the least common denominator of all ρ x ρ minors will simply be that denominator of highest degree. Since the degree of this denominator will also be the order of s_i as a pole of the minor, it follows
that, by property 9, δ[W; s_i] will be the maximum order that s_i possesses as a pole of any minor of V_i(s). Because V_j(s) for j ≠ i has no poles in common with V_i(s), it follows that δ[W; s_i] will also be the maximum order that s_i possesses as a pole of any minor of W(s). Since W(s) = Σ_i V_i(s), we have

Property 10. δ[W; s_i] is the maximum order that s_i possesses as a pole of any minor of W(s).
Example 3.6.5. Consider a 2 x 2 W(s) whose entries have poles at -1 and -2. Observing that

det W(s) = 4/[(s + 1)(s + 2)^2] - 1/[(s + 1)(s + 2)]

it follows that -1 has a maximum order of 1 as a pole of any minor of W(s), being a simple pole of two 1 x 1 minors, and of the only 2 x 2 minor. Also, -2 has a maximum order of 2 as a pole. Therefore, δ[W] = 3.
In our subsequent discussion of synthesis problems, we shall be using many of the results on degree. McMillan's original definition of degree as the minimum number of energy storage elements in a network synthesis of an impedance of course provides an impetus for attempting to solve, via state-space procedures, the synthesis problem using a minimal number of reactive elements. This inevitably leads us into manipulations of minimal realizations of impedance and other matrices, which, in one sense, is fortunate: minimal realizations possess far more properties than nonminimal ones, and, as it turns out, these properties can play an important role in developing synthesis procedures.
Problem 3.6.1. Prove that δ[sL] = rank L.

Problem 3.6.2. Show that |δ(W_1) - δ(W_2)| ≤ δ(W_1 + W_2).

Problem 3.6.3. Suppose that W_1 is square and W_1^{-1} exists. Show that δ[W_1 W_2] ≥ δ[W_2] - δ[W_1]. Extend this result to the case when W_1 is square and W_1^{-1} does not exist, and to the case when W_1 is not square. (The extension represents a difficult problem.)
Problem 3.6.4. Prove property 9. [Hint: First fix ρ. Show that the greatest common divisor of all ρ x ρ minors of p(s)W(s) is γ_1(s)γ_2(s)...γ_ρ(s). Noting that p(s)^ρ is a common denominator of all ρ x ρ minors of W(s), show that the least common denominator of all ρ x ρ minors of W(s) is the numerator, after common factor cancellations, of p(s)^ρ / γ_1(s)γ_2(s)...γ_ρ(s). Evaluate this numerator, and complete the proof by letting ρ vary.]
Problem 3.6.5. Evaluate the degree of

[ (s^3 - s^2 + 1)/(s^2 - s - 2)         1/(s + 1)
  (-s^3 + s^2 - 1.5s - 2)/(s^2 - s - 2)    -2     ]
Problem 3.6.6. Suppose that W(s) is a square rational matrix decomposed as P(s)E(s)Ψ(s)Q(s), with P(s) and Q(s) possessing constant nonzero determinants, and with

E(s) = diag[ε_1(s), ..., ε_m(s)]    Ψ(s) = diag[ψ_1^{-1}(s), ..., ψ_m^{-1}(s)]

Also, ε_i(s) divides ε_{i+1}(s) and ψ_{i+1}(s) divides ψ_i(s). As we know, δ[W(s)] = Σ δ[ε_i(s)/ψ_i(s)]. Assuming W(s) is nonsingular almost everywhere, prove that

W^{-1}(s) = P̂(s)Ê(s)Ψ̂(s)Q̂(s)

where P̂ = Q^{-1}R, R being a nonsingular permutation matrix, and Q̂ = SP^{-1}, S being a nonsingular permutation matrix, and with P̂, Q̂, Ê, and Ψ̂ possessing the same properties as P, Q, E, and Ψ. In particular,

Ê(s) = diag[ψ_m(s), ψ_{m-1}(s), ..., ψ_1(s)]
Ψ̂(s) = diag[ε_m^{-1}(s), ε_{m-1}^{-1}(s), ..., ε_1^{-1}(s)]

Deduce that δ[W^{-1}] = δ[W].
3.7 STABILITY

Our aim in this section is to make connection between those transfer-function-matrix properties and those state-space-equation properties associated with stability. We shall also state the lemma of Lyapunov, which provides a method for testing the stability properties of state-space equations.

As is learned in elementary courses, the stability properties of a linear system described by a transfer-function matrix may usually (the precise qualifications will be noted shortly) be inferred from the pole positions of entries of the transfer-function matrix. In fact, the following two statements should be well known.
1. Usually, if each entry of a rational transfer-function matrix W(s) with W(∞) < ∞ has all its poles in Re[s] < 0, the system with W(s) as transfer-function matrix is such that bounded inputs will produce bounded outputs, and outputs associated with nonzero initial conditions will decay to zero.
2. Usually, if each entry of a rational transfer-function matrix W(s) with W(∞) < ∞ has all its poles in Re[s] ≤ 0, with any pole on Re[s] = 0 being simple, the system with W(s) as transfer-function matrix is such that outputs associated with nonzero initial conditions will not be unstable, but may not decay to zero.
The reason for the qualification in the above statements arises because the system with transfer-function matrix W(s) may have state equations with [F, G] not completely controllable and with uncontrollable states unstable. For example, state-space equations can be written down whose associated transfer function is 1/(s + 1), but in which an uncontrollable state x_1 satisfies ẋ_1 = x_1; x_1 is clearly unstable, and an unbounded output would result from an excitation u(·) commencing at t = 0 if x_1(0) were nonzero. Even if x_1(0) were zero, in practice one would find that bounded inputs would not produce bounded outputs. Thus, either on the grounds of inexactitude in practice or on the grounds that the bounded-input, bounded-output property is meant to apply for nonzero initial conditions, we run into difficulty in determining the stability of (3.7.1) simply from knowledge of the associated transfer function.
It is clearly dangerous to attempt to describe stability properties using transfer-function-matrix properties unless there is an associated state-space realization in mind. Description of stability properties using state-space-equation properties is, however, quite possible.
Theorem 3.7.1. Consider the state-space equations (3.7.1). If Re λ_i[F] < 0,* then bounded u(·) lead to bounded y(·), and the state x(t) resulting from any nonzero initial state x(t_0) will decay to zero under zero-input conditions. If Re λ_i[F] ≤ 0, and pure imaginary eigenvalues are simple, or even occur only in 1 x 1 blocks in the Jordan form of F, then the state x(t) resulting from any nonzero initial state will remain bounded for all time under zero-input conditions.

*This notation is shorthand for "the real parts of all eigenvalues of F are negative."
Proof. The equation relating x(t) to x(t_0) and u(·) is

x(t) = e^{F(t-t_0)} x(t_0) + ∫_{t_0}^{t} e^{F(t-τ)} G u(τ) dτ

Suppose that Re λ_i[F] < 0. It is easy to prove that if ||u(τ)|| ≤ c_1 for all τ and ||x(t_0)|| ≤ c_2, then ||x(t)|| ≤ c_3(c_1, c_2) for all t; i.e., bounded u(·) lead to bounded x(·), the bound on x(·) depending purely on the bound on u(·) and the bound on x(t_0). With x(·) and u(·) bounded, it is immediate from the second equation in (3.7.1) that y(·) is bounded. With u(t) ≡ 0, it is also trivial to see that x(t) decays to zero for nonzero x(t_0).

Suppose now that Re λ_i[F] ≤ 0, with pure imaginary eigenvalues occurring only in 1 x 1 blocks in the Jordan form of F. (If no pure imaginary eigenvalue is repeated, this will certainly be the case.) Let T be a matrix, in general complex, such that TFT^{-1} = J, where J is the Jordan form of F. Then e^{Jt} is easily computed, and the condition that pure imaginary eigenvalues occur only in 1 x 1 blocks will guarantee that only diagonal entries of e^{Jt} will contain exponentials without negative real part exponents; such diagonal entries will be of the form e^{jμt} for some real μ, rather than t^α e^{jμt} for positive integer α and real μ. It follows that e^{Ft} = T^{-1} e^{Jt} T will be such that the only nondecaying parts of any entry will be of the form cos μt or sin μt. This guarantees that if u(·) ≡ 0, x(t) will remain bounded if x(t_0) is bounded.
VVV
Example 3.7.1. If F has eigenvalues -2 ± j then, whatever G, H, and J may be in (3.7.1), bounded u(·) will produce bounded y(·), and nonzero initial state vectors will decay to zero under zero excitation. If instead the eigenvalues of F are ±j, the solution of ẋ = Fx, the homogeneous version of (3.7.1), is purely oscillatory, and, as expected, the effect of nonzero initial conditions will not die out.
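Such eigenvalue checks are immediate numerically. The sketch below (with matrices of our own choosing for the two cases) prints the classification used in Theorem 3.7.1:

import numpy as np

for F in (np.array([[-2.0, 1.0], [-1.0, -2.0]]),    # eigenvalues -2 +/- j
          np.array([[0.0, 1.0], [-1.0, 0.0]])):     # eigenvalues +/- j
    lam = np.linalg.eigvals(F)
    if np.all(lam.real < 0):
        verdict = "decays to zero"
    elif np.all(lam.real <= 0):
        verdict = "bounded (the imaginary eigenvalues here are simple)"
    else:
        verdict = "unstable"
    print(lam, verdict)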
Consider now, for arbitrary positive δ,

∫_{nδ}^{(n+1)δ} V̇ dt = -∫_{nδ}^{(n+1)δ} x'HH'x dt = -x'(nδ) Q x(nδ)

where Q = ∫_0^δ e^{F'τ} HH' e^{Fτ} dτ is a matrix whose nonsingularity is guaranteed by the complete observability of [F, H]. Since V̇ is nonpositive, it follows from the existence of lim_{t→∞} V(x(t)) that

∫_{nδ}^{(n+1)δ} V̇ dt → 0   as n → ∞

That is,

x'(nδ) Q x(nδ) → 0   as n → ∞

Since Q is positive definite, it is immediate that x(t) → 0 as t → ∞ and that F has eigenvalues satisfying the required constraint.

We comment that the reader familiar with Lyapunov stability theory will be able to shorten the above proof considerably; V is a Lyapunov function, being positive definite. Also, V̇ is nonpositive, and the complete observability of [F, H] guarantees that V̇ is not identically zero unless x(0) = 0. An important theorem on Lyapunov functions (see, e.g., [25]) then immediately guarantees that x(t) → 0.
Next, we have to prove that if F is such that Re λ_i[F] < 0, then a positive definite P exists satisfying (3.7.3). We shall prove that the matrix

Π = ∫_0^∞ e^{F't} HH' e^{Ft} dt    (3.7.6)

exists, is positive definite, and satisfies ΠF + F'Π = -HH'. Then we shall prove uniqueness of the solution P of (3.7.3). Thus we shall be able to identify P with Π.
That Π exists follows from the eigenvalue restriction on F, which, as we know, guarantees that every entry of e^{Ft} contains only decaying exponentials. That Π is positive definite follows by the complete observability of [F, H]. Also,

ΠF + F'Π = ∫_0^∞ d/dt [e^{F't} HH' e^{Ft}] dt = -HH'

on using the fact that

lim_{t→∞} e^{F't} HH' e^{Ft} = 0

(This follows from the eigenvalue restriction.)
To prove uniqueness, suppose that there are two different solutions P_1 and P_2 of (3.7.3). Then

(P_1 - P_2)F + F'(P_1 - P_2) = 0

It follows that

0 = e^{F't}[(P_1 - P_2)F + F'(P_1 - P_2)]e^{Ft} = d/dt [e^{F't}(P_1 - P_2)e^{Ft}]

Since e^{F't}(P_1 - P_2)e^{Ft} is constant, it follows by taking t = 0 that

P_1 - P_2 = e^{F't}(P_1 - P_2)e^{Ft}

Letting t approach infinity and using the eigenvalue property again, we obtain

P_1 = P_2

This is a contradiction, and thus uniqueness is established.
VVV
The uniqueness part of the above proof is mildly misleading, since it would appear that the eigenvalue constraint on F is a critical requirement. This is not actually the case. In fact, the following theorem, which we shall have occasion to use later, is true. For the proof of the theorem, we refer the reader to [4].
Theorem 3.7.4. Suppose that the matrices A, B, and C are n x n, m x m, and n x m, respectively. There exists a unique n x m matrix X satisfying the equation

AX + XB = C

if and only if λ_i[A] + λ_j[B] ≠ 0 for any two eigenvalues of A and B.
The reader is urged to check how this theorem may be applied to guarantee uniqueness of the solution P of (3.7.3), under the constraint Re λ_i[F] < 0.
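The equation of Theorem 3.7.4 is the Sylvester equation, for which standard solvers exist. A sketch (ours) using scipy.linalg.solve_sylvester, which solves AX + XB = C:

import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[-2.0, 0.0], [1.0, -4.0]])
C = np.eye(2)

X = solve_sylvester(A, B, C)           # unique: no lambda_i[A] + lambda_j[B] is zero
print(np.allclose(A @ X + X @ B, C))   # True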
Theorem 3.7.3 does not in itself suggest that checking that Re λ_i[F] < 0 is an easy task computationally. The only technique suggested for computing P is via the integral (3.7.6), where, we recall, Π turns out to be the same as P. Obviously, use of this integral for computing P would be pointless if we merely wished to examine whether Re λ_i[F] < 0.

Fortunately, there is another procedure for obtaining P, which, coupled with a well-known test for positive definiteness, allows ready examination of the stability of F. [This procedure for computing P may also be used to compute the solution X of the equation in Theorem 3.7.4.] Equation (3.7.3) is nothing more than a highly organized way of writing down (1/2)n(n + 1) linear simultaneous scalar equations (where n is the dimension of F), with the unknowns consisting of the (1/2)n(n + 1) diagonal and superdiagonal entries of P. Procedures for solving such equations are of course well known; solvability is actually guaranteed by the eigenvalue constraint on F. Thus, if in a specific situation (3.7.3) proves insolvable, then F is known not to have eigenvalues all with negative real parts.

Once P has been found, its positive definite character may be checked by examining the signs of all leading principal minors. These are positive if and only if P is positive definite [4].
Observe that in applying the test, there is great freedom in the possible choice of H. A simple choice that can always be made is

HH' = I

It can easily be checked that any H satisfying this equation is such that [F, H] is completely observable for any F.
Example 3.7.2. Suppose that

F = [ 0   1   0
      0   0   1
     -2  -4  -3 ]

(The characteristic polynomial of F is easily found in this case, as s^3 + 3s^2 + 4s + 2, and can be verified to have negative real part zeros. However, we shall proceed via the lemma of Lyapunov.) We shall take HH' = I, and solve

PF + F'P = -I

From these linear equations one can compute P, and tedious calculations can then be used to check that P is positive definite. This establishes stability.
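In place of the hand calculation, the Lyapunov equation of the example can be solved and tested numerically. A sketch (ours), noting that scipy's solve_continuous_lyapunov(A, Q) solves AX + XA' = Q, so passing A = F' gives F'P + PF = -HH':

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

F = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -4.0, -3.0]])       # characteristic polynomial s^3 + 3s^2 + 4s + 2
H = np.eye(3)                            # HH' = I

P = solve_continuous_lyapunov(F.T, -H @ H.T)   # PF + F'P = -HH'
try:
    np.linalg.cholesky(P)                # succeeds iff P is positive definite
    print("Re lambda_i[F] < 0")
except np.linalg.LinAlgError:
    print("F is not asymptotically stable")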
A simple extension of the lemma of Lyapunov, which we shall have occasion to use, is the following. Suppose that [F, H] is completely observable and L is an arbitrary matrix with n rows, n being the dimension of F. Then Re λ_i[F] < 0 if and only if there is a positive definite symmetric P satisfying

PF + F'P = -HH' - LL'    (3.7.8)

Proof is requested in the problems.
As stated, the lemma of Lyapunov is concerned with conditions for Re λ_i[F] < 0 rather than Re λ_i[F] ≤ 0. A complete generalization to cover this case of a nonstrict inequality is not straightforward; however, we can prove one result that will be of assistance.
Theorem 3.7.5. Given an n x n matrix F, if there exists a positive definite matrix P such that

PF + F'P = -HH'

with H = 0 permitted, then Re λ_i[F] ≤ 0 and the Jordan form of F has no blocks of size greater than 1 x 1 with pure imaginary diagonal elements; equivalently, ẋ = Fx is stable, in the sense that for any x(0), the resulting x(t) is bounded for all t.
Proof. Consider the function V(x) = x'Px. Observe that, as we have earlier calculated, V̇ = -x'HH'x ≤ 0. Thus V is nonnegative and monotonic decreasing, and it follows that all entries of x(t) are bounded for any initial x(0). The Jordan form of F has the stated property for the following reasons. Since x(t) = e^{Ft}x(0) is bounded for arbitrary x(0), it must be true that e^{Ft} is bounded. Now the entries of e^{Ft} are made up of terms a_i e^{λ_i t}, where λ_i is an eigenvalue of F, and terms a_i t^α e^{λ_i t}, α > 0, if a Jordan block associated with λ_i is of size greater than 1 x 1. Boundedness of e^{Ft} therefore implies that no Jordan block can be of size greater than 1 x 1 if Re λ_i = 0.
VVV
As an interesting sidelight to the above theorem, notice that if F has only pure imaginary eigenvalues with no repeated eigenvalue, the only H for which there exists a positive definite P satisfying (3.7.3) is H = 0. In that instance, negative definite P also satisfy (3.7.3), as is seen by replacing P by -P. To prove the claim that H = 0, observe that F must be such that

TFT^{-1} = F̂ = [0] ⊕ [ 0  μ_1 ; -μ_1  0 ] ⊕ ... ⊕ [ 0  μ_k ; -μ_k  0 ]

for some nonsingular T, with ⊕ denoting direct sum, the first block possibly absent, and each μ_i real. Then (3.7.3) gives

P̂F̂ + F̂'P̂ = -ĤĤ'

for a skew F̂ and positive definite P̂. Now

tr[AB] = tr[B'A'] = tr[A'B']

for any A and B. Therefore, using the symmetry of P̂ and the skewness of F̂,

tr[F̂'P̂] = tr[P̂F̂] = -tr[F̂'P̂]

Therefore tr[P̂F̂ + F̂'P̂] = 0, and so tr[ĤĤ'] = 0. This can only be so if Ĥ = 0, since ĤĤ' is nonnegative definite, and the trace of a matrix is the sum of its eigenvalues. With Ĥ = 0, it follows that H = 0.
Problem 3.7.1. Suppose that Re λ_i[F] < 0, and that x(t) is related to x(t_0) and u(τ) for t_0 ≤ τ ≤ t by

x(t) = e^{F(t-t_0)} x(t_0) + ∫_{t_0}^{t} e^{F(t-τ)} G u(τ) dτ

Show that ||u(τ)|| ≤ c_1 for all τ and ||x(t_0)|| ≤ c_2 imply that ||x(t)|| ≤ c_3, where c_3 is a constant depending solely on c_1 and c_2.
Problem 3.7.2. Discuss how the equation

PF + F'P = -HH'

might be solved if F is in diagonal form and Re λ_i[F] < 0.
Problem 3.7.3. Show that if H is any matrix of n rows with HH' = I and F is any n x n matrix, then [F, H] is completely observable.
Problem 3.7.4. Suppose that [F, H] is completely observable and L is an arbitrary matrix with n rows, n being the dimension of F. Prove that Re λ_i[F] < 0 if and only if there is a positive definite symmetric P satisfying

PF + F'P = -HH' - LL'
Problem 3.7.5. Determine, using the lemma of Lyapunov, whether the eigenvalues of the following matrix have negative real parts:
Problem 3.7.6. Let σ be an arbitrary real number, and let [F, H] be a completely observable pair. Prove that Re λ_i[F] < -σ if and only if there exists a positive definite P such that

PF + F'P + 2σP = -HH'
REFERENCES

[1] K. Ogata, State Space Analysis of Control Systems, Prentice-Hall, Englewood Cliffs, N.J., 1967.
[2] L. A. Zadeh and C. A. Desoer, Linear System Theory, McGraw-Hill, New York, 1963.
[3] R. J. Schwarz and B. Friedland, Linear Systems, McGraw-Hill, New York, 1965.
[4] F. R. Gantmacher, The Theory of Matrices, Chelsea, New York, 1959.
[5] V. N. Faddeeva, Computational Methods of Linear Algebra, Dover, New York, 1959.
[6] F. H. Branin, "Computational Methods of Network Analysis," in F. F. Kuo and W. G. Magnuson, Jr. (eds.), Computer Oriented Circuit Design, pp. 71-122, Prentice-Hall, Englewood Cliffs, N.J., 1969.
[7] L. O. Chua, "Computer-Aided Analysis of Nonlinear Networks," in F. F. Kuo and W. G. Magnuson, Jr. (eds.), Computer Oriented Circuit Design, pp. 123-190, Prentice-Hall, Englewood Cliffs, N.J., 1969.
[8] R. A. Rohrer, Circuit Theory: An Introduction to the State-Variable Approach, McGraw-Hill, New York, 1970.
[9] A. Ralston, A First Course in Numerical Analysis, McGraw-Hill, New York, 1965.
[10] M. L. Liou, "A Novel Method for Evaluating Transient Response," Proceedings of the IEEE, Vol. 54, No. 1, Jan. 1966, pp. 20-23.
[11] M. L. Liou, "Evaluation of the Transition Matrix," Proceedings of the IEEE, Vol. 55, No. 2, Feb. 1967, pp. 228-229.
[12] J. B. Plant, "On the Computation of Transition Matrices of Time-Invariant Systems," Proceedings of the IEEE, Vol. 56, No. 8, Aug. 1968, pp. 1397-1398.
[13] E. J. Mastascusa, "A Method for Calculating e^{At} Based on the Cayley-Hamilton Theorem," Proceedings of the IEEE, Vol. 57, No. 7, July 1969, pp. 1328-1329.
[14] S. Ganapathy and A. Subbarao, "Transient Response Evaluation from the State Transition Matrix," Proceedings of the IEEE, Vol. 57, No. 3, March 1969, pp. 347-349.
[15] N. D. Francis, "On the Computation of the Transition Matrix," Proceedings of the IEEE, Vol. 57, No. 11, Nov. 1969, pp. 2079-2080.
[16] J. S. Frame and M. A. Needler, "State and Covariance Extrapolation from the State Transition Matrix," Proceedings of the IEEE, Vol. 59, No. 2, Feb. 1971, pp. 294-295.
[17] M. Silverberg, "A New Method of Solving State-Variable Equations Permitting Large Step Sizes," Proceedings of the IEEE, Vol. 56, No. 8, Aug. 1968, pp. 1352-1353.
[18] R. W. Brockett, Finite Dimensional Linear Systems, Wiley, New York, 1970.
[19] R. E. Kalman, P. L. Falb, and M. A. Arbib, Topics in Mathematical System Theory, McGraw-Hill, New York, 1969.
[20] R. E. Kalman, "Irreducible Realizations and the Degree of a Matrix of Rational Functions," Journal of the Society for Industrial and Applied Mathematics, Vol. 13, No. 2, June 1965, pp. 520-544.
[21] B. McMillan, "Introduction to Formal Realizability Theory," Bell System Technical Journal, Vol. 31, No. 2, March 1952, pp. 217-279, and Vol. 31, No. 3, May 1952, pp. 541-600.
[22] B. D. H. Tellegen, "Synthesis of 2n-poles by Networks Containing the Minimum Number of Elements," Journal of Mathematics and Physics, Vol. 32, No. 1, April 1953, pp. 1-18.
[23] R. J. Duffin and D. Hazony, "The Degree of a Rational Matrix Function," Journal of the Society for Industrial and Applied Mathematics, Vol. 11, No. 3, Sept. 1963, pp. 645-658.
[24] R. E. Kalman, "Lyapunov Functions for the Problem of Lur'e in Automatic Control," Proceedings of the National Academy of Sciences, Vol. 49, No. 2, Feb. 1963, pp. 201-205.
[25] J. LaSalle and S. Lefschetz, Stability by Liapunov's Direct Method with Applications, Academic Press, New York, 1961.
Part III
NETWORK ANALYSIS
Two of the longest recognized problems of network theory are those of analysis and synthesis. In analysis problems, one generally starts with a description of a network in terms of its components or a circuit schematic, and from this description one deduces what sort of properties the network will exhibit in its use. In synthesis problems, one generally starts with a list of properties which it is desired that a network have, and from this list one deduces a description of the network in terms of its components and a circuit schematic. The task of this part, which consists of one chapter, is to study the use of state-space equations in tackling the analysis problem.
vector y evanesce, and we are left with the problem of having to formulate a homogeneous equation ẋ = Fx.

At this point, we wish to issue a caution to the reader: given a network described in physical terms, and given the physical significance of the vectors u and y, i.e., the input or excitation and output or response vectors, there is no unique state-space equation set describing the network; in fact, as we shall shortly see, there may be no such set. The reason for nonuniqueness is fairly obvious: transformations of a state vector x according to x̂ = Tx for nonsingular T yield as equally valid a set of state-space equations, with state vector x̂, as the equation set with state vector x. Though this point is obvious, the older literature on the state-space analysis of networks sometimes suggests implicitly that the state-space equations are unique, and that the only valid state vector is one whose entries are either inductor currents or capacitor voltages.
In this chapter we shall discuss the derivation of a set of state-space equations of a network at three levels of complexity. At the first and lowest level, we shall show how equations may be set up if the network is simple enough to write down Kirchhoff voltage law or current law equations by inspection; this is the most common situation in practice. At the second level, we shall show how equations may be set up if a certain hybrid matrix exists; this is the next most common situation in practice. At the third and most complex level, we shall consider a generally applicable procedure that does not rely on the assumptions of the first- or second-level treatments; this is the least common situation in practice.

Generally speaking, all the methods of setting up state-space equations that we shall present attempt to take as the entries of the state vector the inductor currents and capacitor voltages, and, having identified the entries of the state vector in physical terms, attempt to compute from the network the coefficient matrices of the state-space equations. The attempts are not always successful. In our discussions at the first level of complexity we shall try to indicate with examples the sorts of difficulties that can arise in attempting to formulate state-space equations this way.
The fundamental source of such difficulties has been known for some time; there is generally one of two factors operating:
1. The network contains a node with the only elements incident at the node comprising inductors and/or current generators. (More generally, the network may contain a cut set,* the branches of which comprise inductors and/or current generators.)
2. The network contains a loop with the only branches in the loop comprising capacitors and/or voltage generators.
*If the reader is not familiar with the notion of a cut set, this point may be skipped over.
In case (2), for example, assuming an all-capacitor loop, it is evident that the capacitor voltages cannot take on independent values without violating Kirchhoff's voltage law, which demands that the sum of the capacitor voltages be zero. It is this sort of constraint on the potential entries of a state vector that may interfere with their actual use.

The treatment of difficulties of the sort mentioned can proceed with the aid of network topology. However, if the networks concerned contain gyrators and transformers, the use of network topology methods involves enormous complexity. Even a qualitative description of all possible difficulties becomes very difficult as soon as one proceeds past the two difficulties already noted. Accordingly, we shall avoid use of network topology methods, and pay little attention to trying to qualitatively and generally describe the difficulties in setting up state-space equations. We shall, however, present examples illustrating difficulties.
The earliest papers on the topic of generating state-space equations for networks [1-3] dealt almost solely with the unexcited case; i.e., they dealt with the derivation of the homogeneous equation

ẋ = Fx    (4.1.1)

[Actually, [2] and [3] did permit certain generators to be present, but the basic issue was still to derive (4.1.1).] More recent work [4-7] considers the derivation of the nonhomogeneous equation

ẋ = Fx + Gu    (4.1.2)

An extended treatment of the problem of generating the nonhomogeneous equation can be found in [8].
There are two features of most treatments dealing with the derivation of state-space equations that should be noted. First, the treatments depend heavily on notions of network topology. As already noted, our treatment will not involve network topology. Second, most treatments have difficulty coping with ideal transformers and gyrators. The popular approach to deal with the problem is to set up procedures for deriving state-space equations for networks containing controlled sources, and to then replace the ideal transformers and gyrators with controlled sources. Our treatment of passive networks does not require this, and by avoiding the introduction of element classes like controlled sources, for which elements of the class may be active, we are able to get results that are probably sharper.
4.2 STATE-SPACE EQUATIONS FOR SIMPLE NETWORKS
Our approach in this section will be to state a general rule for the derivation of a set of state-space equations, and then to develop a number of examples illustrating application of the rule. Examples will also show difficulties with its application. The two following sections will consider situations in which the procedure of this section is unsatisfactory. Throughout this section we allow only resistor, inductor, capacitor, transformer, and gyrator elements in the networks under consideration.
The procedure for setting up state-space equations is as follows:
1. Identify each inductor current and each capacitor voltage with an entry of the state vector x. Naturally, different inductor currents and capacitor voltages correspond to different entries.
2. Using Kirchhoff's voltage law or current law, write down an equation involving each entry of ẋ, but not involving any entry of the output y or any variables other than entries of x and u.
3. Using Kirchhoff's voltage law or current law, write down an equation involving each entry of the output, but not involving any entry of ẋ or any variables other than entries of x and u.
4. Organize the equations derived in step 2 into the form ẋ = Fx + Gu, and those derived in step 3 into the form y = H'x + Ju. (A computational sketch of the procedure follows this list.)
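As an illustration of the four steps, consider a hypothetical one-port (our own, not one of the figures of this chapter): a current source u = I driving a parallel RC combination, with output y = V, the port voltage. Step 1 takes x = V_C; step 2 gives C dV_C/dt = I - V_C/R; step 3 gives V = V_C; step 4 assembles the matrices:

import numpy as np

R, C = 2.0, 0.5
F = np.array([[-1.0 / (R * C)]])     # from C dVc/dt = u - Vc/R
G = np.array([[1.0 / C]])
H = np.array([[1.0]])                # y = Vc
J = np.array([[0.0]])

# The resulting transfer function is the port impedance R/(1 + sRC):
s = 1.0j
print(J + H.T @ np.linalg.solve(s * np.eye(1) - F, G))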
Example 4.2.1. Consider the network of Fig. 4.2.1. We identify the current I with u, and the voltage V with y. Also, following step 1, we take the capacitor voltages V_{C1} and V_{C2} as the entries of x.

FIGURE 4.2.1. Circuit for Example 4.2.1.

Next, Kirchhoff's current law yields an equation for V̇_{C1}. Note that an equation involving the output variable V, though correct, is no help and does not conform with the requirements of step 2. An equation for V̇_{C2} is straightforward to obtain. An equation for the output V is now required, expressing V in terms of I, V_{C1}, and V_{C2} (see step 3). This is easy to obtain. Combining the equations for V̇_{C1}, V̇_{C2}, and V together, we have the desired state-space equations.
Example 4.2.2. Figure 4.2.2 shows a doubly terminated ladder network, with input variable V and output variable the current I_L in the final inductor. Thus u = V and y = I_L. The state vector x we shall take to be the vector of capacitor voltages and inductor currents.

FIGURE 4.2.2. Ladder Network Discussed in Example 4.2.2.

Proceeding according to step 2, we obtain an equation for each entry of ẋ; following step 3, an equation for I_L; and the state-space equations follow.

The above state-space equations of course are not unique; another set is provided by making a transformation of the state vector, and the equations that result have a similar structure. State-space equations of these two forms and their connection with ladder networks have been investigated in [9]-[12].
Now we shall look at some difficulties that can arise with this approach.

Example 4.2.3. Consider the circuit of Fig. 4.2.3. By inspection we see that

V = RI + L(dI/dt)

FIGURE 4.2.3. Network of Example 4.2.3 with Nonstandard State-Space Equation.

With excitation I and response V, there is obviously no way we can recover state-space equations of the standard form. Of course, if I were the response and V the excitation, we would have

İ = -(R/L)I + (1/L)V

which is of the standard form; but this is not the case, and a new form of state equation, if the name is still appropriate, has evolved. Again, consider the circuit of Fig. 4.2.4. For this circuit, we have a set of equations

FIGURE 4.2.4. Second Network for Example 4.2.3.

which are of the general form

ẋ = Fx + Gu
y = H'x + Ju + Eu̇    (4.2.1)

Again, therefore, nonstandard equations have evolved.
One way around the apparent difficulty raised in the above example is simply to recognize (4.2.1) as a valid set of state-space equations in the same sense that

ẋ = Fx + Gu
y = H'x + Ju    (4.2.2)

is a valid set of state-space equations. Whereas the transfer-function matrix associated with (4.2.2) is

J + H'(sI - F)^{-1}G

the transfer-function matrix associated with (4.2.1) is

J + H'(sI - F)^{-1}G + sE

as may easily be checked. With E ≠ 0, this transfer-function matrix has elements with a pole at s = ∞, in contrast to the transfer-function matrix of (4.2.2). Obviously, it must then be impossible for a set of state-space equations of the form of (4.2.1) to represent a transfer-function matrix representable by equations of the form (4.2.2), at least with E ≠ 0.
Notice that although there are two energy storage elements in the circuit of Fig. 4.2.4, the state vector is of dimension 1. However, knowledge of both V_C(0) and I(0-) is required to compute the response V; i.e., in a sense, I itself is a state vector entry. Likewise, if equations of the general form of (4.2.1) are derived from a circuit, it will be found that the dimension of the vector x will be less than the number of energy storage elements, with the difference equal to the rank of the matrix E.

A systematic procedure for dealing with circuits in which input derivatives tend to arise in the state-space equations appears in Section 4.4.

We return now to the presentation of additional examples suggesting difficulties with the ad hoc approach.
Example 4.2.4. Consider the circuit of Fig. 4.2.5. For this circuit, Kirchhoff's laws yield equations for C V̇_C and the inductor currents, together with V = L_2 İ_{L2}. There are no other independent equations, and therefore state-space equations can only be constructed with these equations.

FIGURE 4.2.5. Network for Example 4.2.4.

After a number of trials, the reader will soon convince himself that with x comprising the capacitor voltage and inductor currents, it is impossible to find F and G such that

ẋ = Fx + Gu

The best that can be done is a nonstandard set of equations.

The difficulty arises in the above instance really because of our insistence that the entries of the state vector correspond to inductor currents and capacitor voltages. As we shall see in Section 4.4, by dropping this assumption we can get state-space equations of a more usual form.
Another sort of difficulty arises when the circuit and its excitations are not necessarily well defined.

Example 4.2.5. Consider the circuit of Fig. 4.2.6. Because of the presence of the transformer, V_1 and V_2 cannot be chosen independently. Therefore, it is impossible to write down state-space equations with u a two-vector with entries V_1 and V_2.

FIGURE 4.2.6. Network for Example 4.2.5. The Inputs V_1 and V_2 cannot be Chosen Independently.

There is no real way around this sort of difficulty, a general version of which will be noted in Section 4.4. Essentially, all that can be done is to disallow independent generators at ports where the desired excitation variable cannot in reality be independently specified. Thus, in the case of the above example, this would mean ceasing to think of V_1 as an independent generator. Of course, the solution is not to replace V_1 by a short circuit; this would mean that V_1 would have to be identically zero!
Next we wish to consider a circuit for which no problems arise, but which does illustrate an interesting feature.

Example 4.2.6. Consider the circuit of Fig. 4.2.7. We take the capacitor voltage V_C and inductor current I_L as the entries of x.

FIGURE 4.2.7. Network for Example 4.2.6.

By inspection of the circuit, we have

C V̇_C = I - I_L    or    V̇_C = -(1/C) I_L + (1/C) I

Also,

L İ_L = V_C + R(I - I_L) - R I_L = -2R I_L + V_C + R I

Further, an output equation follows by inspection. Thus the state-space equations are assembled. Let us examine the controllability of this set of equations. We form the matrix [G  FG].
The determinant of this matrix is zero if and only if

R^2 = L/C

To check observability, we form the matrix [H  F'H], and the condition for lack of observability, obtained by setting the determinant of the matrix equal to zero, is also

R^2 = L/C

The transfer function associated with the state-space equations is readily found; with R^2 = L/C, this transfer function becomes simply R. Under this constraint, the network of Fig. 4.2.7 is known as a constant resistance network.
The point illustrated by the above example is that the ability to set up state-space equations is independent of the controllability and observability of the resulting equations. The reader can check too that the difficulties we have noted in setting up state-space equations in earlier examples have not resulted from any properties associated with the controllability or observability notions.
Problem 4.2.1. Write down state-variable equations for the network of Fig. 4.2.8. The sources are a voltage generator at port 1 and a current generator at port 2. Compute from the state-space equations the transfer function relating the exciting voltage at port 1 to a response voltage at port 2, assuming that port 2 is open circuit.

FIGURE 4.2.8. Network for Problem 4.2.1.
Problem 4.2.2. Write down state-space equations describing the behavior of the network of Fig. 4.2.9. Verify that the eigenvalues of the F matrix are simple and pure imaginary. (Note: There are no excitations for the network.)

FIGURE 4.2.9. Network for Problem 4.2.2.
Problem 4.2.3. Write down state-space equations for the network of Fig. 4.2.10. Examine the controllability and observability properties and comment.

FIGURE 4.2.10. Network for Problem 4.2.3.
Problem 4.2.4. Write down state-space equations for the network of Fig. 4.2.11. If V = V_1 sin ωt for t ≥ 0, V = 0 for t < 0, ω ≠ 1, compute from the state-space equations initial conditions for the circuit that will guarantee that I will be purely sinusoidal, containing no transient term.

FIGURE 4.2.11. Network for Problem 4.2.4.
Problem 4.2.5. Write down state-space equations for the circuit of Fig. 4.2.12. Can you show that with L = CR^2, the circuit is a constant resistance network, i.e., V = IR?

FIGURE 4.2.12. Network for Problem 4.2.5.

Problem 4.2.6.
Problem 4.2.6. Consider the network of Fig. 4.2.13, which might have resulted from a classical Brune synthesis [13] of an impedance function. Obtain equations successively for V_{C_1} and I_{L_1}, V_{C_2} and I_{L_2}, and V_{C_3} and I_{L_3}. Write down a set of state-space equations with state vector x given by

x' = [C_1 V_{C_1} \quad C_2 V_{C_2} \quad C_3 V_{C_3} \quad L_1 I_{L_1} \quad L_2 I_{L_2} \quad L_3 I_{L_3}]

Note the interesting structure of the F matrix.

FIGURE 4.2.13. Network for Problem 4.2.6.

4.3 STATE-SPACE EQUATIONS VIA REACTANCE EXTRACTION-SIMPLE VERSION
In this section we outline a somewhat more systematic procedure than that of the previous section for the derivation of state-space equations for passive networks. As before, the key idea is still to identify the entries of the state vector with capacitor voltages and inductor currents. However, we replace the essentially ad hoc method of obtaining state equations by one in which the hybrid matrix of a memoryless or nondynamic network is computed. (A memoryless or nondynamic network is one that contains no inductor or capacitor elements, but may contain resistors, transformers, and gyrators.) The procedure breaks down if the hybrid matrix is not computable; in this instance, the more complex procedure of the next section must be used.
As before, we assume that the input u and output y of the state-space equations are identified physically, in the sense that each entry of u and y is known to be a port voltage or current. However, we shall constrain u and y further in this and the next section; in what proves to be an eventual simplification in notation and in development of the theory, we shall assume that if a network port is excited by a voltage source (current source), the current (voltage) at that port will be a response; further, we shall assume that if the current (voltage) at a port is a response, then the port is excited with a voltage (current) generator. If it turns out that we can still generate the state-space equations, there is no loss of generality in making this assumption. To see this, consider a situation in which we are given a two-port network, with port 1 excited by a voltage source and the network response (with port 2 short-circuited) comprising the current at port 2 (see Fig. 4.3.1a).

FIGURE 4.3.1. Introduction of Excitation at a Port where Response is Measured.

In effect, all we desire is state-space equations with the input u equal to V_1 and output y equal to the response I_2. According to our assumption, instead we shall seek to generate state-space equations with the input u a two-vector, with entries V_1 and V_2, and the output y a two-vector, with entries I_1 and I_2 (see Fig. 4.3.1b). Assuming that we obtain such equations, it is immediate that embedded within these equations are equations with input V_1 and output I_2. Thus assume that we derive
\dot{x} = Fx + G\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} \qquad \begin{bmatrix} I_1 \\ I_2 \end{bmatrix} = H'x + J\begin{bmatrix} V_1 \\ V_2 \end{bmatrix}

Then certainly, with V_2 = 0, we can write

\dot{x} = Fx + g_1 V_1 \qquad I_2 = h_2'x + j_{21}V_1

where g_1 is the first column of G, h_2' is the second row of H', and j_{21} is the 2-1 entry of J.
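The embedding can be made concrete with a small sketch (not from the text; the matrices below are hypothetical placeholders, the point being only the selection of a column of G and a row of H'):

```python
# Extracting the V1-to-I2 subsystem from a two-input, two-output model.
import numpy as np

# Hypothetical two-port model: u = (V1, V2)', y = (I1, I2)'
F  = np.array([[0.0, 1.0], [-1.0, -1.0]])
G  = np.array([[1.0, 0.0], [0.5, 1.0]])
Ht = np.array([[1.0, 0.0], [0.2, 1.0]])   # H' in the text's notation
J  = np.array([[0.1, 0.0], [0.0, 0.3]])

g1  = G[:, [0]]     # keep only the V1 column of G
h2t = Ht[[1], :]    # keep only the I2 row of H'
j21 = J[1, 0]       # scalar feedthrough from V1 to I2
# With V2 = 0:  dx/dt = F x + g1 V1,   I2 = h2t x + j21 V1
```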
More generally the process of inserting new excitation variables proceeds
as follows. Suppose that at port j there is initially prescribed a response that
is a current, but no voltage excitation is prescribed. The port will be short
circuited and the response will be the current through this short circuit; an
excitation is introduced by replacing the short circuit with a voltage generator.
If at port j there is initially prescribed a response that is a voltage, but no
current excitation, we note that the port must initially be open circuited.
An excitation is introduced by connecting a current generator across the port.
Note that if there is initially prescribed a response that is a current through
some component, we conceive of the current as being through a short circuit
in series with the component, and introduce a voltage generator in this
short circuit (see Fig. 4.3.2a). Likewise, if a response is a voltage across a
component, we conceive of an open-circuit port being in parallel with the
component, and introduce a current generator across this open circuit (see
Fig. 4.3.2b).
FIGURE 4.3.2. Procedure for Dealing with Response
Associated with a Component.
When an excitation has been defined for every port, the definition of response variables at every port is of course immediate.
What our assumption demands is that we require, at least temporarily,
a full complement of inputs and outputs; i.e., every port current and voltage
is either an entry of the input u or output y ; subsequent to the derivation
of the state-space equations, certain inputs and outputs can be discarded if
desired.
We shall also assume that the ith components u_i and y_i are associated with the ith port of the network. This ensures, for example, that u'y is the instantaneous power flow into the network. Having identified the state-space equation input u and output y in the manner just described, the next step is to extract all the reactive elements, i.e., inductors and capacitors, from the prescribed network. This amounts to redrawing the prescribed network N as a cascade connection of a memoryless network N_1 terminated in inductors and capacitors (see Fig. 4.3.3). If N has m ports and n reactive elements,

FIGURE 4.3.3. Illustration of Reactance Extraction.

then N_1 will have (m + n) ports, at n of which there will be inductor or capacitor terminations. This is the procedure of reactance extraction.

Suppose that there are n_L inductors L_1, L_2, \ldots, L_{n_L} in N, and n_C capacitors C_1, C_2, \ldots, C_{n_C}. Of course, n_L + n_C = n. We shall take as the state vector

x = [I_{L_1} \quad I_{L_2} \quad \cdots \quad I_{L_{n_L}} \quad V_{C_1} \quad V_{C_2} \quad \cdots \quad V_{C_{n_C}}]' \qquad (4.3.3)

The sign convention is shown in Fig. 4.3.4 and is set up so that each inductor current is positive when the port current is positive and each capacitor voltage is positive when the port voltage is positive.
FIGURE 4.3.4. Definition of State Vector Components.
To deduce state equations, we form a certain hybrid matrix for N_1, assumed stripped of its terminations. If the matrix does not exist, then the procedure of this section cannot be used, and a modification, described in the next section, is required.
To specify the hybrid matrix for N_1, we need to specify the excitation and response variables. The first m excitation variables coincide with the entries of the input vector u; i.e., the excitation variables for the left-hand m ports of N_1 are the same as those for the m ports of N. The next n_L excitation variables, corresponding to those ports of N_1 shown terminated in inductors in Fig. 4.3.3, are all currents; we shall denote the vector of such currents by I_2. Finally, the remaining n_C ports of N_1 will be assumed to be excited by voltages, the vector of which will be denoted by V_3. These excitations are shown in Fig. 4.3.5.
The response variables will be denoted by \hat{y}, an m vector of responses at the first m ports; by V_2, an n_L vector of responses at the next n_L ports, the ones shown as inductor terminated in Fig. 4.3.3; and by I_3, an n_C vector of responses at the remaining n_C ports. Notice that with the inductor and capacitor terminations, \hat{y} has to be the same as y, since y is the response vector for N when u is the excitation, and N_1 with terminations is the same as N. But if N_1 is considered alone, without the terminations, the response at the first m ports will not be the same as y, and so we have used a different symbol for the response.
The equation relating the defined excitation and response variables for N_1 is

\begin{bmatrix} \hat{y} \\ V_2 \\ I_3 \end{bmatrix} = M \begin{bmatrix} u \\ I_2 \\ V_3 \end{bmatrix} \qquad (4.3.4)

where
FIGURE 4.3.5. Excitations of N_1. (The generators at the first m ports depend on u.)
M = \begin{bmatrix} M_{11} & M_{12} & M_{13} \\ M_{21} & M_{22} & M_{23} \\ M_{31} & M_{32} & M_{33} \end{bmatrix}

is the hybrid matrix of N_1, partitioned conformably with the excitations.
In principle, the determination of M, assuming that it exists, is a straightforward circuit analysis task. If N_1 is exceedingly complex, analysis techniques based on topological results may be called into play if desired, although they are not always necessary. In this sense, the technique being presented for the derivation of state-space equations could be thought of as demanding notions of network topology for its implementation. But certainly, application of network topology could not be regarded as an intrinsic part of the procedure. We shall make no further comments here or in the examples regarding the computation of M, in view of the generally easy nature of the task.
Now we can derive state-space equations from our knowledge of M. Recall that our aim is to describe N. We have a description of N_1 provided by (4.3.4), and we have a definition of the state vector provided by (4.3.3); so we need to consider what happens to our description of N_1 when terminations are introduced. When N_1 is terminated in inductors and capacitors, the following relation is forced between I_2 and V_2:

V_2 = -\mathcal{L}\frac{dI_2}{dt} \qquad (4.3.5)

where \mathcal{L} = \mathrm{diag}\,[L_1, L_2, \ldots, L_{n_L}], and we have used the fact that termination of the ith port of N_1 in L_i identifies the ith entry of I_2 with I_{L_i}. The minus sign arises because of the sign convention applying to excitations of N_1. Similarly, the following relation holds between I_3 and V_3:

I_3 = -\mathcal{C}\frac{dV_3}{dt} \qquad (4.3.6)

where \mathcal{C} = \mathrm{diag}\,[C_1, C_2, \ldots, C_{n_C}]. The minus sign again arises because of the sign convention applying to excitations of N_1.
Now we substitute for V_2 and I_3 in (4.3.4) to obtain an equation reflecting the performance of N_1 when it is terminated in inductors and capacitors. We can therefore replace \hat{y} by y in this equation, which becomes

\begin{bmatrix} y \\ -\mathcal{L}\dot{I}_2 \\ -\mathcal{C}\dot{V}_3 \end{bmatrix} = M \begin{bmatrix} u \\ I_2 \\ V_3 \end{bmatrix} \qquad (4.3.8)

Finally, we use the definition (4.3.3) of the state vector x. Recalling that each entry of I_2 is the same as an inductor current when N_1 is terminated, while each entry of V_3 is the same as a capacitor voltage, we obtain from (4.3.3) and (4.3.8)

\dot{x} = \begin{bmatrix} -\mathcal{L}^{-1}M_{22} & -\mathcal{L}^{-1}M_{23} \\ -\mathcal{C}^{-1}M_{32} & -\mathcal{C}^{-1}M_{33} \end{bmatrix}x + \begin{bmatrix} -\mathcal{L}^{-1}M_{21} \\ -\mathcal{C}^{-1}M_{31} \end{bmatrix}u \qquad y = [M_{12} \quad M_{13}]x + M_{11}u \qquad (4.3.9)

Now identify

F = \begin{bmatrix} -\mathcal{L}^{-1}M_{22} & -\mathcal{L}^{-1}M_{23} \\ -\mathcal{C}^{-1}M_{32} & -\mathcal{C}^{-1}M_{33} \end{bmatrix} \qquad G = \begin{bmatrix} -\mathcal{L}^{-1}M_{21} \\ -\mathcal{C}^{-1}M_{31} \end{bmatrix} \qquad H' = [M_{12} \quad M_{13}] \qquad J = M_{11} \qquad (4.3.10)

Equations (4.3.9) become simply

\dot{x} = Fx + Gu \qquad y = H'x + Ju \qquad (4.3.11)

This completes our statement of the procedure for deducing (4.3.11). Let us summarize it.
1. Ensure that every port current and voltage of N is either an excitation or response, with the ith entry of u and ith entry of y denoting the excitation and response at the ith port.
2. Perform a reactance extraction to generate a nondynamic network N_1 (see Fig. 4.3.3). Assign the entries of the state variable as inductor currents and capacitor voltages, with the sign convention shown in Fig. 4.3.4.
3. Compute a hybrid matrix for N_1, with excitation variables comprising the entries of u, together with the currents at inductively terminated ports and voltages at capacitively terminated ports [see (4.3.4)].
4. With \mathcal{L} = \mathrm{diag}\,[L_1, L_2, \ldots, L_{n_L}] and \mathcal{C} = \mathrm{diag}\,[C_1, C_2, \ldots, C_{n_C}], define the matrices of the state-space equations of N by (4.3.10).
5. If step 3 fails, in the sense that no hybrid matrix exists with the required excitation and response variables, the method fails.
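Step 4 of the summary is purely mechanical, and can be expressed as a short routine. The following is a sketch (not from the text), assuming M is given with the block ordering of (4.3.4) and that the inductor and capacitor values are supplied as lists:

```python
# Sketch of step 4: assemble F, G, H', J from the hybrid matrix M of N_1
# via (4.3.10). Block ordering of (4.3.4): m ports of N, then n_L inductor
# ports, then n_C capacitor ports.
import numpy as np

def state_space_from_hybrid(M, m, Lvals, Cvals):
    nL = len(Lvals)
    s1, s2 = m, m + nL                       # block boundaries
    M11, M12, M13 = M[:s1, :s1], M[:s1, s1:s2], M[:s1, s2:]
    M21, M22, M23 = M[s1:s2, :s1], M[s1:s2, s1:s2], M[s1:s2, s2:]
    M31, M32, M33 = M[s2:, :s1], M[s2:, s1:s2], M[s2:, s2:]
    Linv = np.diag(1.0 / np.asarray(Lvals, dtype=float))
    Cinv = np.diag(1.0 / np.asarray(Cvals, dtype=float))
    F  = np.block([[-Linv @ M22, -Linv @ M23], [-Cinv @ M32, -Cinv @ M33]])
    G  = np.vstack([-Linv @ M21, -Cinv @ M31])
    Ht = np.hstack([M12, M13])               # H'
    J  = M11
    return F, G, Ht, J
```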
Example 4.3.1. Consider the network of Fig. 4.3.6a. This network is redrawn showing a reactance extraction in Fig. 4.3.6b. The associated nondynamic network N_1 is shown in Fig. 4.3.6c with sources corresponding to the correct excitation variables. Notice that because no inductors are present, I_2 evanesces. With two capacitors, V_3 becomes a two-vector.
The hybrid matrix of the network of Fig. 4.3.6c is easily found. We have

\begin{bmatrix} \hat{y} \\ (I_3)_1 \\ (I_3)_2 \end{bmatrix} = M \begin{bmatrix} u \\ (V_3)_1 \\ (V_3)_2 \end{bmatrix}

FIGURE 4.3.6. Network for Example 4.3.1.

To derive, for example, the 2-3 entry of this matrix, we replace u by an open circuit and (V_3)_1 by a short circuit (see Fig. 4.3.6d). We readily derive (I_3)_2 = -(V_3)_2/R_2.
In terms of the block matrices of (4.3.4), we have

M_{11} = R_1 \qquad M_{13} = [1 \quad 0] \qquad M_{31} = [-1 \quad 0]'

with the remaining matrices evanescing. The matrix \mathcal{C} is of course \mathrm{diag}\,[C_1, C_2] and, applying the formulas of (4.3.10), we obtain

H' = [1 \quad 0] \qquad J = R_1

That is, the state-space equations are

\dot{x} = Fx + Gu \qquad y = [1 \quad 0]x + R_1 u

We obtained the same result in the last section.
Example 4.3.2. We consider the circuit shown in Fig. 4.3.7a. The excitation u is a two-vector comprised of the voltage at the left-hand port and current at the right-hand port, while the response y consists, of course, of the input current at the left-hand port and the voltage at the right-hand port. The effect of reactance extraction is shown in Fig. 4.3.7b, and the excitation variables used in computing the hybrid matrix of N_1 are shown in Fig. 4.3.7c. It is straightforward to obtain from Fig. 4.3.7c the hybrid-matrix equation.

FIGURE 4.3.7. Network for Example 4.3.2.

The last column of the hybrid matrix is computed, for example, by forcing V = I = I_2 = 0 (see Fig. 4.3.7d) and evaluating the resulting responses. In this case, the quantities M_{11}, M_{12}, etc., of Eq. (4.3.4) can be read off from this equation. The matrix \mathcal{L} is simply [1] and \mathcal{C} is simply [4]. Using (4.3.10), the state-space equations follow.
Let us now consider some situations in which this procedure does not
work.
Example 4.3.3. Consider the circuit of Fig. 4.3.8a. As we noted in the last section, this circuit is not describable by state-space equations of the form of (4.3.11). Since this is the only form of state-space equations the method of this section is capable of generating, we would expect that the method would fall down. Indeed, this is the case. Figure 4.3.8b shows the associated nondynamic network together with the sources needed to define the hybrid matrix.

FIGURE 4.3.8. Network for Example 4.3.3. I and I_L cannot be taken as Independent Excitations.

It is obvious from the figure that I and I_L are directly constrained by the circuit, which precludes the formation of the desired hybrid matrix. Existence of the hybrid matrix would imply that I and I_L can be selected independently.
Example 4.3.4. Consider the network of Fig. 4.3.9a. Obviously, the natural way to analyze this network is to replace the two capacitors by a single one of value C_1 + C_2. Let us suppose however that we do not do this, and attempt to apply the technique of this section. The associated nondynamic network is shown in Fig. 4.3.9b, together with the excitation variables used for computing the hybrid matrix.

FIGURE 4.3.9. Network for Example 4.3.4. (V_3)_1 and (V_3)_2 are not Independent.

Evidently, the circuit forces (V_3)_1 = (V_3)_2, and so the hybrid matrix cannot be formed, for its existence would imply the ability to select (V_3)_1 and (V_3)_2 independently.
The difficulties described in Examples 4.3.3 and 4.3.4 and difficulties in
related situations are resolvable using the techniques of the next section.
Problem 4.3.1. Find a homogeneous state equation for the circuit of Fig. 4.3.10 using the procedure of this section.

FIGURE 4.3.10. Network for Problem 4.3.1.
Problem 4.3.2. Find state-space equations for the circuit of Fig. 4.3.11 using the procedure of this section.

FIGURE 4.3.11. Network for Problem 4.3.2.
Problem 4.3.3. Any inductor of L henries is equivalent to a transformer of turns-ratio \sqrt{L} : 1 terminated at its secondary port in a 1-H inductor. A similar statement holds for capacitors. This means that before carrying out a reactance extraction, one can convert all inductors to 1 H and capacitors to 1 F, absorbing the transformers into the nondynamic network. What are the state-space equations resulting from such a procedure, and what is the coordinate transformation linking the state variable in these equations with the state variable in the equations derived in the text?
Problem 4.3.4. Find state-space equations for the circuit of Fig. 4.3.12 using the procedure of this section. Compare the result with the worked example of the preceding section. (Notice that the input variable is V and the output variable I_2; an extension of the number of inputs and outputs is therefore required, as explained in the text.)

FIGURE 4.3.12. Network for Problem 4.3.4.
4.4 STATE-SPACE EQUATIONS VIA REACTANCE EXTRACTION-THE GENERAL CASE*

*This section may be omitted at a first reading.

Our aim in this section is to present a procedure for generating state-space equations of an m-port network N that is free from the difficulties associated with the two approaches given hitherto. The procedure constitutes
a variation of that given in Section 4.3. We retain the same assumption concerning the input u and output y appearing in the state-space equations: every port current and voltage must be an entry of either u or y, with the ith entry of u and of y corresponding to quantities associated with the ith port.
Again, we carry out a reactance extraction to form a nondynamic network N_1 from the prescribed network N. Now, however, we consider the situation in which that hybrid matrix of N_1 cannot be formed which is associated with the particular set of responses and excitations described in Section 4.3. The technique to be used in this case is to form a new set of excitation and response variables such that the new hybrid matrix for N_1 does exist. The passivity of N_1 implies an important property of this hybrid matrix that enables construction of the state-space equations.
We shall break up our treatment in this section into several subsections, as follows:
1. Proof of some preliminary lemmas, using the passivity of the nondynamic network N_1.
2. Selection of the first m scalar excitation and response variables required to generate a hybrid matrix for N_1. Here, m is the number of ports of N.
3. Selection of the remaining n scalar excitation and response variables required to generate a hybrid matrix for N_1. Here, n is the number of reactive elements in N.
4. Proof of hybrid-matrix existence.
5. Construction of the state-space equations from the hybrid matrix.

Preliminary Lemmas
Lemma 4.4.1. Every m-port network possesses at least one hybrid matrix.*

*Note: This lemma does not extend to networks that may contain active elements.

For the proof of this lemma, see [14]. The lemma says that there exists at least one choice of excitation and response variables for the network for which a hybrid matrix may be defined, but it does not say what this choice is; indeed, choices that work in the sense of allowing definition of a hybrid matrix will vary from network to network.
We shall use Lemma 4.4.1 only to prove Lemma 4.4.3. Lemma 4.4.3 will be of direct use in developing the procedure for setting up state-space equations. Lemma 4.4.2 will be used both in proving Lemma 4.4.3 and in the course of the procedure for setting up state-space equations.
Lemma 4.4.2. Let M(s) be the hybrid matrix of a multiport network (the usual conventions apply). Suppose that M is partitioned as

M = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}

Then if M_{11}(s) = 0,

M_{12}(s) = -M_{21}'^{*}(s) \qquad (4.4.2)

Proof. The passivity property guarantees that on the j\omega axis M + M'^{*} is nonnegative definite or, with M_{11} = 0, that

\begin{bmatrix} 0 & M_{12} + M_{21}'^{*} \\ M_{21} + M_{12}'^{*} & M_{22} + M_{22}'^{*} \end{bmatrix} \ge 0

for all \omega such that j\omega is not a pole of any element of M(s). Now suppose that (4.4.2) fails for s equal to some j\omega_0. Then M_{12} + M_{21}'^{*} has a nonzero element, m say, that occurs in the ith row and the jth column of M + M'^{*}. Since M + M'^{*} is nonnegative definite, the minor

\det\begin{bmatrix} (M + M'^{*})_{ii} & (M + M'^{*})_{ij} \\ (M + M'^{*})_{ji} & (M + M'^{*})_{jj} \end{bmatrix} = \det\begin{bmatrix} 0 & m \\ m^{*} & (M + M'^{*})_{jj} \end{bmatrix} = -|m|^2

is nonnegative. Hence m = 0. Therefore, (4.4.2) holds for all j\omega and therefore for all* s. VVV

*Strictly speaking, we should exclude the finite set of s that are poles of elements of M_{12}.
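The lemma is easily checked on a concrete passive device. The sketch below (not from the text; the ratio value is illustrative) uses the hybrid matrix of an ideal transformer, for which M_{11} = 0:

```python
# Lemma 4.4.2 checked on an ideal n:1 transformer, whose hybrid description
# is [V1; I2] = [[0, n], [-n, 0]] [I1; V2]. Here M11 = 0, and indeed
# M12 = -M21'*; also M + M'* = 0, consistent with (lossless) passivity.
import numpy as np

n = 3.0
M = np.array([[0.0, n], [-n, 0.0]])
print(np.allclose(M[0, 1], -np.conj(M[1, 0])))           # M12 = -M21'*
print(np.all(np.linalg.eigvalsh(M + M.conj().T) >= 0))   # M + M'* >= 0
```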
Lemma 4.4.3. Let N̂ be a (1 + m_1 + m_2)-port network. Suppose that the input impedance and Thévenin equivalent voltage at port 1 are zero whatever the excitations at ports 2, 3, \ldots, 1 + m_1, and whatever the terminations at ports 2 + m_1, 3 + m_1, \ldots, 1 + m_1 + m_2. Then the input impedance and Thévenin equivalent voltage at port 1 will be zero whatever the excitations at ports 2, 3, \ldots, 1 + m_1 + m_2. Conversely, if the input impedance and Thévenin equivalent voltage at port 1 are zero for arbitrary excitations at ports 2, 3, \ldots, 1 + m_1 + m_2, they will be zero for arbitrary terminations or excitations at these ports.
Before proving this lemma, we shall make several comments. First, the lemma has to do with conditions for a port to look like a short circuit and to be decoupled in an obvious sense from all other ports. Second, the lemma is saying that if a certain performance is observed for arbitrary terminations, it will be observed for arbitrary excitations, and conversely. Third, m_1 = 0 or m_2 = 0 are permitted. If m_1 = 0, the lemma says that if zero input impedance is observed for arbitrary terminations on ports other than the first, the first port is essentially disconnected from the remainder. Fourth, the lemma need not hold if N̂ is active. Thus suppose that N̂ is a two-port with an impedance matrix that is not positive real:

Z = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}

It is easily checked that the impedance seen at port 1 is zero irrespective of the value of a terminating impedance at port 2. But if port 2 is excited, a nonzero Thévenin voltage will be observed at port 1.
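A sketch of this computation (illustrative only; the Z above is one matrix consistent with the stated properties):

```python
# An active two-port showing why Lemma 4.4.3 can fail for active networks.
import numpy as np

Z = np.array([[0.0, 1.0], [0.0, 0.0]])   # not positive real: Z + Z' has eigenvalues +-1

for ZL in (0.5, 1.0, 10.0):              # arbitrary passive terminations at port 2
    Zin = Z[0, 0] - Z[0, 1] * Z[1, 0] / (Z[1, 1] + ZL)
    print(Zin)                           # 0 in every case
I2 = 1.0                                  # excite port 2 with a current source
print(Z[0, 1] * I2)                       # open-circuit (Thevenin) voltage at port 1: nonzero
```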
Proof of Lemma 4.4.3. By Lemma 4.4.1, N̂ possesses a hybrid matrix, which we shall partition as

M = \begin{bmatrix} M_{11} & M_{12} & M_{13} \\ M_{21} & M_{22} & M_{23} \\ M_{31} & M_{32} & M_{33} \end{bmatrix} \qquad (4.4.3)

where M_{11} is 1 x 1, M_{22} is m_1 x m_1, and M_{33} is m_2 x m_2. Each M_{ij} in general will be a function of the complex variable s, but this fact is irrelevant here.
We do not require knowledge of whether voltage or current excitations are applied at ports beyond the first in defining M. However, we do need the fact that M can be defined assuming a current excitation at port 1. To see this, we argue as follows. By Lemma 4.4.1, there exists an M with either a voltage or current excitation assumed at port 1. Suppose that it is a voltage excitation. Set excitations at ports 2 through 1 + m_1 to zero, and, by the use of short-circuit or open-circuit terminations, as appropriate, set all excitations at ports 2 + m_1 through 1 + m_1 + m_2 to zero. Then M_{11} will be the input admittance at port 1. By the lemma statement, M_{11} = \infty, which is not allowable. Hence M exists only with a current excitation assumed at port 1.
Suppose now that M in (4.4.3) is derived on the basis of there being a current excitation at port 1; then the above argument shows that M_{11} = 0. It follows by Lemma 4.4.2 that

M_{12} = -M_{21}'^{*} \qquad M_{13} = -M_{31}'^{*}

Since the Thévenin equivalent voltage at port 1 is zero irrespective of the excitations at ports 2 through 1 + m_1, it follows that M_{12} = 0. Thus M looks like

M = \begin{bmatrix} 0 & 0 & M_{13} \\ 0 & M_{22} & M_{23} \\ M_{31} & M_{32} & M_{33} \end{bmatrix}

and we have

\begin{bmatrix} R_1 \\ R_2 \\ R_3 \end{bmatrix} = \begin{bmatrix} 0 & 0 & M_{13} \\ 0 & M_{22} & M_{23} \\ M_{31} & M_{32} & M_{33} \end{bmatrix}\begin{bmatrix} E_1 \\ E_2 \\ E_3 \end{bmatrix} \qquad (4.4.4)

as the equation describing N̂ (with E standing for excitation, R for response). Let us set E_2 = 0, and terminate ports 2 + m_1 through 1 + m_1 + m_2 so that

R_3 = -kE_3

for some arbitrary positive constant k. (This amounts to passive termination of each port in k ohms or k mhos.) Then (4.4.4) implies that

R_1 = M_{13}E_3 = -M_{13}(kI + M_{33})^{-1}M_{31}E_1

The inverse always exists because M_{33} is a passive hybrid matrix in its own right and kI is positive definite. By the lemma statement, the input impedance is zero, and so M_{13}(kI + M_{33})^{-1}M_{31} = 0 for every positive k; with M_{13} = -M_{31}'^{*}, this forces M_{31} = 0 and hence M_{13} = 0. Therefore, N̂ is described by

\begin{bmatrix} R_1 \\ R_2 \\ R_3 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & M_{22} & M_{23} \\ 0 & M_{32} & M_{33} \end{bmatrix}\begin{bmatrix} E_1 \\ E_2 \\ E_3 \end{bmatrix} \qquad (4.4.5)

From this equation it is immediate that for arbitrary E_2 and E_3, the input impedance and Thévenin equivalent voltage at port 1 are zero; i.e., port 1 is internally short circuited and decoupled from the remainder of the network.
The converse contained in the lemma statement follows easily. We argue as before that M can only be defined with a current excitation at port 1 and that M_{11} = 0. That M_{12} and M_{13} are zero follows from the fact that arbitrary excitations at all ports lead to zero Thévenin voltage, and that M_{21} and M_{31} are zero follows from M_{21} = -M_{12}'^{*} and M_{31} = -M_{13}'^{*}. Thus (4.4.5) is valid. It is immediate from this equation that replacement of any number of arbitrary excitations by arbitrary terminations will not affect the zero input impedance and zero Thévenin voltage properties.
VVV
The dual of Lemma 4.4.3 is easy to state; the proof of the dual lemma requires but simple variations on the proof of Lemma 4.4.3 and will not be stated.

Lemma 4.4.4. Let N̂ be a (1 + m_1 + m_2)-port network. Suppose that the input admittance and Thévenin equivalent current at port 1 are zero whatever the excitations at ports 2, 3, \ldots, 1 + m_1, and whatever the terminations at ports 2 + m_1, \ldots, 1 + m_1 + m_2. Then the input admittance and Thévenin equivalent current are zero whatever the excitations at ports 2, 3, \ldots, 1 + m_1 + m_2. Conversely, if the input admittance and Thévenin equivalent current at port 1 are zero for arbitrary excitations at ports 2, 3, \ldots, 1 + m_1 + m_2, they will be zero for arbitrary excitations or terminations at these ports.
The final lemma we require is a generalization of Lemmas 4.4.3 and 4.4.4.

Lemma 4.4.5. Let N̂ be a (p + m_1 + m_2)-port network, and let w_1, w_2, \ldots, w_p be variables associated with ports 1 through p; each w_i can be either a current or voltage. Let v = \sum_{i=1}^{p} a_i w_i for some constants a_i. Then if the network N̂ constrains* v = 0 whatever the excitations at ports p + 1, p + 2, \ldots, p + m_1, and whatever the terminations at ports p + m_1 + 1, p + m_1 + 2, \ldots, p + m_1 + m_2, then v = 0 for arbitrary excitations or terminations at these ports.

*The word constrain here is used roughly in the following sense. Some internal feature of the network forces a condition to hold, and if sources are connected in such a way as to apparently prevent the condition from holding, the responses will not be well defined. Examples of constraints are: a short circuit, which constrains a port voltage to be zero, and the constraint on the port voltages of a two-port transformer, expressible in terms of the turns ratio.
Proof. We reduce the problem to one permitting application of Lemma 4.4.3. From N̂ we construct a network Ĥ as follows. If w_i is a current, we cascade a unit gyrator with port i of N̂. For the network Ĥ comprising N̂ with gyrators at some of its ports, every w_i will be a port voltage. Next, connect the secondary ports of a 1 x p multiport transformer to the first p ports of the network Ĥ, with the turns-ratio matrix of the transformer chosen so that the primary voltage v is \sum a_i w_i. The network Ĥ with the transformer connected will be denoted by Ĥ_1. Then Ĥ_1 is a (1 + m_1 + m_2)-port network such that the voltage at port 1 is constrained to be zero by N̂, for arbitrary excitations at ports 2, \ldots, 1 + m_1 and arbitrary terminations at ports 2 + m_1, 3 + m_1, \ldots, 1 + m_1 + m_2. To say that the voltage is constrained to be zero is the same as to say that both Thévenin voltage and input impedance are zero under the stated conditions. By Lemma 4.4.3, it follows that v = 0 for arbitrary excitations at all ports of Ĥ_1 past the first, and thus at all ports of N̂ past the pth. The first part of Lemma 4.4.5 is therefore proved. The second part follows similarly. VVV
Now we return to the development of state-space equations for a prescribed m-port network N. The associated nondynamic network is N_1 and is an (m + n) port. The first order of business is to define excitation and response variables for N_1 that will enable definition of a hybrid matrix for N_1.
Selection of the First m Excitation and Response Variables

Suppose that there are m network ports at which excitations for the prescribed network N are applied and responses are measured. As in the previous section, we suppose that if initially excitation and response variables are not both associated with each one of the m ports, then the set of excitation and response variables is expanded so that one of each sort of variable is associated with each port. Suppose that before the number of exciting variables is expanded to one at each port, and likewise for the number of response variables, there are m' ports at which exciting variables are present. By renumbering the ports if necessary, we may suppose that these m' ports are port 1, port 2, \ldots, port m'. The initially prescribed exciting variables are u_1, u_2, \ldots, u_{m'}. The quantities y_{m'+1}, \ldots, y_m must all be initially prescribed as response variables, as well as possibly some of y_1, \ldots, y_{m'}. (If not, then one or more of ports m' + 1 through m would have initially prescribed neither an excitation nor a response variable, and we could therefore avoid consideration of this port.) Note also that, for the moment, the variables mentioned are associated with N, not the network N_1 obtained from N by reactance extraction.
As earlier described, we expand the exciting variables of N to become u_1, u_2, \ldots, u_m and the response variables to become y_1, y_2, \ldots, y_m. Thus if y_{m'+1} is a current, it will be a current through a short-circuited port; in place of the short circuit, we place a voltage generator u_{m'+1}. Or, if y_{m'+1} is a voltage, it will be a voltage at an open-circuited port, and at this port we connect a current generator u_{m'+1}. We proceed similarly for y_{m'+2}, \ldots, y_m.
Now we commence the selection process for the excitation and response
..
variables for N,.We shall denote excitation variables for N, by e,, e,, . ,
and response variables by r,, r,, . , where each e, and r, is a scalar. In
general, we shall identify each e, with one of u,, u,, . . , u, and each r, with
one of y , , y,,
.,y,. The reasoning suggesting these identifications and an
indication of when the identifications cannot be made will occupy our attention for the remainder of this subsection.
The reader is warned that the subsequent arguments concerning the choice
of excitation and response variables are lengthy and intricate; however,
they are not deep conceptually.
We first choose
..
.
..
subject to one proviso. The network N, must not be such that it constrains
u, to be identically zero irrespective of the terminations (or, by Lemmas
4.4.3 and 4.4.4, excitations) at the ports of N,other than the f k t port. If u,
is a voltage, the constraint u, E 0 irrespective of terminations on ports of
N, is equivalent to the input impedance at port 1 being zero, i.e., port 1
looking like a short circuit; if u, is a current, the constraint is equivalent to
port 1 looking like an open circuit.* By Lemmas 4.4.3 and 4.4.4, the constraint would then apply irrespective of excitations at the remaining ports of
N,, and port 1 of N, would be decoupled from the remaining ports, either as
a short circuit or an open circuit. In eirher case, we could never obtain a nonzero response a1 the port.
In other words, the network N_1 essentially constrains not just the permissible excitation to be zero, but also the achievable responses to be zero, irrespective of the excitations at other ports of N_1. In this case, the port is not worth worrying about for further calculations. We can forget about its presence, assigning neither excitation nor response variables to it for the purpose of computing a hybrid matrix of N_1. We note too that the originally posed problem of finding state-space equations for N was ill posed in the sense that the exciting variable u_1 could not be freely chosen; thus, to state-space equations derived for N on the basis of neglect of the offending port, we need to append the constraint equations u_1 \equiv y_1 \equiv 0.
As a notational convention, we shall assume that if now, or at any stage in the procedure, we agree to neglect a port, then we shall renumber the ports, advancing their numbering (and the numbering of their excitation and response variables) by 1. Thus if it turned out that u_1 \equiv 0, we would renumber port 2 as port 1, port 3 as port 2, \ldots. Normally, port m' would be
renumbered as port m' - 1 and port m as port m - 1. We shall, however, assume that the numbers m', m, and n are always redefined in any port renumbering so that m' is the number of exciting variables originally prescribed for N after deletions, m is the number of exciting variables in N after the initial expansion and after deletions, and n is the number of ports of N_1 that are reactively terminated in constructing N, again after deletions.
With this convention it follows that (4.4.6) will always be used to assign the first exciting variable of N_1, though perhaps after a number of port deletions.
Next we consider u_1 and u_2 and ask the question: Is there a relation of the form

a_1 e_1(\cdot) + a_2 u_2(\cdot) = 0 \qquad a_2 \ne 0 \qquad (4.4.7)

that holds for arbitrary e_1(\cdot) independently of the terminations on all but the first two ports of N_1? Note that the existence of a relation like (4.4.7) can be checked by analysis of N_1. Note also that a relation like (4.4.7) for a_2 = 0 has already been ruled out; for if a_2 = 0, then u_2(\cdot) is unconstrained, and we would have u_1(\cdot) identically zero for arbitrary excitations or terminations at ports of N_1 past the first.
Two possibilities exist in (4.4.7); either a_1 = 0 or a_1 \ne 0. In the former case, we have u_2 \equiv 0 for arbitrary excitation on port 1 and arbitrary terminations on ports of N_1 other than the first and second. Lemmas 4.4.3 and 4.4.4 then allow us to conclude by an argument already noted that port 2 is either an open circuit or a short circuit, disconnected from the remainder of the network, and that y_2 \equiv 0 is the only possible response. In this case we delete port 2 from those under consideration. If a_1 \ne 0, suppose that N_1 is terminated so as to produce the original network N. Then for the original network N we would have Eq. (4.4.7) holding; this would imply that for the original network N, u_1 and u_2 could not reasonably be taken as independent excitations, and the problem of generating state-space equations would be ill posed. Rather than bothering with a technique to deal with this situation, let us, quite reasonably, disallow it. (Of course, if u_2 \equiv 0 is constrained, this too implies that the original problem is ill posed. This constraint is however slightly easier and perhaps worth taking the trouble to deal with, since it is probably more likely to occur than the second sort of ill-posed problem just noted.)
Thus, either (4.4.7) does not hold, or, after deletion of one or more ports, it does not hold. We take

e_2 = u_2 \qquad (4.4.8)

and recognize that no relation of the form

a_1 e_1(\cdot) + a_2 e_2(\cdot) = 0 \qquad (4.4.9)
is forced to hold for some a_1, a_2 (not both zero) and arbitrary terminations at ports past the first two. By Lemma 4.4.5, no relation of this form is forced to hold for arbitrary excitations at ports past the first two.*
Next, we consider u_3 and ask if a relation of the form

a_1 e_1(\cdot) + a_2 e_2(\cdot) + a_3 u_3(\cdot) = 0 \qquad a_3 \ne 0 \qquad (4.4.10)

can hold for arbitrary e_1, e_2 and arbitrary terminations on ports past the first three. [Note that a_3 = 0 is impossible, by the remarks associated with (4.4.9).] We conclude that either port 3 is a decoupled short circuit or open circuit, in which case we reject it from consideration; or that the problem of finding state-space equations for N is ill posed, and we disallow this; or that (4.4.10) does not hold. In this case, we set

e_3 = u_3 \qquad (4.4.11)

secure in the knowledge that no relation of the form

a_1 e_1(\cdot) + a_2 e_2(\cdot) + a_3 e_3(\cdot) = 0 \qquad (4.4.12)

is forced to hold for some a_1, a_2, a_3, not all zero, irrespective of terminations (or, by Lemma 4.4.5, excitations) on ports past the first three.
Clearly, we can proceed in this manner and set

e_4 = u_4 \qquad \ldots \qquad e_{m'} = u_{m'} \qquad (4.4.13)

with the obvious generalizations of (4.4.12) and the associated remarks.
If m' = m, we are through with this part of the procedure. But if not, we now consider e_1, e_2, \ldots, e_{m'} and u_{m'+1} and ask the question: Is there a relation of the form

a_1 e_1(\cdot) + \cdots + a_{m'} e_{m'}(\cdot) + a_{m'+1} u_{m'+1}(\cdot) = 0 \qquad (4.4.14)

that holds for arbitrary e_1(\cdot), \ldots, e_{m'}(\cdot) independently of the terminations on all but the first m' + 1 ports of N_1? There are two answers to this question, depending on whether a_1, \ldots, a_{m'} are all zero or not. If a_1, \ldots, a_{m'} are all zero, the equation becomes

u_{m'+1}(\cdot) = 0 \qquad (4.4.15)

and, by a familiar argument, we can dispense with further consideration of this port. So we now consider what happens when one or more of a_1, \ldots, a_{m'} are nonzero. In this case, we can still argue that the problem of generating

*The fact that (4.4.9) is not forced to hold means that y_1(\cdot) and y_2(\cdot) are finite when e_1(\cdot) and e_2(\cdot) are finite, by our earlier convention.
state-space equations for N is ill posed. For from (4.4.14), it follows that u_{m'+1}(\cdot) is uniquely determined by u_1(\cdot), u_2(\cdot), \ldots, u_{m'}(\cdot). An independent generator, for the sake of argument a voltage generator if u_{m'+1} is a voltage, cannot be connected at port m' + 1, since the voltage at this port is required to be

u_{m'+1} = -\frac{1}{a_{m'+1}}[a_1 u_1 + \cdots + a_{m'} u_{m'}] \qquad (4.4.16)

Because of structural features of N_1, a voltage equal to the right-hand side of (4.4.16) must be developed at port m' + 1, irrespective of the excitations at ports 1 through m' and the terminations at ports after the first m' + 1. If port m' + 1 of N_1 is short circuited, we see from (4.4.16) that the relation

a_1 u_1(\cdot) + \cdots + a_{m'} u_{m'}(\cdot) = 0 \qquad (4.4.17)

will hold with not all of a_1, \ldots, a_{m'} zero. Equation (4.4.17) holds irrespective of the terminations on ports of N_1 past the first m' + 1. Accordingly, suppose that the inductor and capacitor terminations are used that will yield the original network N. Equation (4.4.17) will then apply for N, under the conditions that port m' + 1 is short circuited, that excitations at ports past the first m' + 1 are arbitrary, and that y_{m'+1} is a current.
Now recall that y_{m'+1} is the first response variable for which u_{m'+1} was not originally listed as an excitation variable for N; u_1, \ldots, u_{m'} were the original excitation variables, u_{m'+1}, \ldots, u_m were added ones. It follows, as explained in more detail in the previous section, that the originally posed problem corresponds to forcing u_{m'+1} = \cdots = u_m = 0 in the problem with the added excitation variables. Therefore, Eq. (4.4.17) holds when u_1, \ldots, u_{m'} are the only excitation variables for N, since it holds under the conditions of u_{m'+1} = 0 and arbitrary u_{m'+2}, \ldots, u_m. Since the equation says that the originally listed excitations of N, naturally desired independent, are not really independent, it means that the originally posed problem of finding state-space equations for N relating prescribed excitations and responses is not well posed. Again, we shall disallow this situation, on reasonable grounds.
Thus either (4.4.14) does not hold, or after deletion of one or more ports, it does not hold. We take

e_{m'+1} = u_{m'+1} \qquad (4.4.18)

and we are guaranteed that there exist no constants a_1, a_2, \ldots, a_{m'+1}, not all zero, such that the equation

a_1 e_1(\cdot) + a_2 e_2(\cdot) + \cdots + a_{m'+1} e_{m'+1}(\cdot) = 0 \qquad (4.4.19)

is forced to hold irrespective of the terminations or excitations at ports beyond the first m' + 1.
Next we ask if it is possible for a relation of the following form to hold:

a_1 e_1(\cdot) + a_2 e_2(\cdot) + \cdots + a_{m'+2} u_{m'+2}(\cdot) = 0 \qquad a_{m'+2} \ne 0 \qquad (4.4.20)

The relation of course is supposed to hold for arbitrary e_1(\cdot) through e_{m'+1}(\cdot) independently of the terminations on all but the first m' + 2 ports of N_1. Following an earlier argument, we can show that if any of a_1, a_2, \ldots, a_{m'} is nonzero, then the original problem of finding state-space equations for N with the initially given excitations u_1, u_2, \ldots, u_{m'} is ill posed. Accordingly, we suppose that a_1 = a_2 = \cdots = a_{m'} = 0. Also, if a_{m'+1} = 0, it is easy to argue that both u_{m'+2} and y_{m'+2} must be identically zero, and port m' + 2 can be omitted from further consideration. Hence we are left with the possibility

a_{m'+1} e_{m'+1}(\cdot) + a_{m'+2} u_{m'+2}(\cdot) = 0 \qquad (4.4.21)

for nonzero a_{m'+1} and a_{m'+2}. We can show that now port m' + 1 and port m' + 2 should both be neglected; the argument is as follows. Consider N_1 with excitations u_1, u_2, \ldots, u_{m'}, but with zero excitations applied at ports m' + 1, m' + 3, \ldots, m. (If there were also zero excitation applied at port m' + 2, this would correspond to the initially given situation before the list of excitation variables was expanded to include u_{m'+1}, \ldots, u_m.) Equation (4.4.21) implies that u_{m'+2} \equiv 0; i.e., if u_{m'+2} is a voltage, the voltage at port m' + 2 of N (in fact N_1) is constrained to be zero if u_{m'+1} = e_{m'+1} is zero, irrespective of the presence of any external short circuit at port m' + 2. The voltage u_{m'+2} will be zero whatever termination is employed at port m' + 2, and therefore the current y_{m'+2} = 0. Likewise, if u_{m'+2} is a current, the voltage y_{m'+2} at port m' + 2 will be zero irrespective of the termination at port m' + 2, so long as u_{m'+1} is identically zero.
Summing up, if N is terminated so as to force u_{m'+1}, \ldots, u_m to equal zero, then y_{m'+2} will equal zero.
Accordingly, we can drop port m' + 2 from further consideration. As far as N_1 is concerned, no excitation or response will be prescribed for port m' + 2, and the port will not even be thought of as a port of N_1. Then, when state-space equations are derived for N, we can adjoin to them the equation y_{m'+2} \equiv 0.
The above argument is symmetric with respect to ports m' + 1 and m' + 2. It follows that when N is excited with only the originally specified set of variables, y_{m'+1} = 0. Again, we drop port m' + 1 of N_1 from further consideration* and simply adjoin the equation y_{m'+1} \equiv 0 to the state-space equations of N.

*It might be thought that one could drop either port m' + 1 or m' + 2 from consideration, but not both. However, the fact that N is excited with only the originally specified set of variables implies that u_{m'+1} = 0 and u_{m'+2} = 0, which imply y_{m'+1} = 0 and y_{m'+2} = 0, independently of all other excitations.
Conditions under which (4.4.20) can hold are, in summary,
1. When N is excited only by its originally specified excitation variables u_1, u_2, \ldots, u_{m'}, then y_{m'+1} \equiv y_{m'+2} \equiv 0 and ports m' + 1 and m' + 2 need be given no further consideration, or
2. The problem of finding state-space equations for N is ill posed.

We proceed as before, ruling out case 2, rejecting ports m' + 1 and m' + 2 in case 1, and renumbering the ports, but retaining the number m to denote the number of ports at which excitations are applied or responses measured. When this is done, we choose

e_{m'+1} = u_{m'+1} \qquad (4.4.22)

In a similar manner, ruling out the possibility of the original problem being ill posed and rejecting from consideration ports of N at which the response will be identically zero (with consequential renumbering of the ports and redefinition of m), we take

e_{m'+2} = u_{m'+2} \qquad \ldots \qquad e_m = u_m \qquad (4.4.23)

Of course, we also take

r_i = y_i \qquad i = 1, 2, \ldots, m \qquad (4.4.24)

In addition, we know that e_1, e_2, \ldots, e_m are independent in the following sense: there exist no constants a_1, \ldots, a_m, not all zero, such that a relation of the form

a_1 e_1(\cdot) + a_2 e_2(\cdot) + \cdots + a_m e_m(\cdot) = 0 \qquad (4.4.25)

is forced to hold independently of the terminations or, by Lemma 4.4.5, the excitations at ports of N_1 beyond the first m.
The rule for assigning excitation and response variables to the first m ports of N_1 is simple, the only complication being that some ports are rejected from consideration. Excluding this complication, excitation variables for N_1 are chosen to agree with the extended set of excitation variables u_1, u_2, \ldots, u_m of N, and response variables are assigned in the obvious way.
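The selection just described is, in outline, a greedy search. The following schematic sketch is not from the text (and it ignores the refinement that all "natural" candidates are exhausted before any opposite variable is tried); forces_relation is a hypothetical oracle standing in for the circuit analysis of N_1 and Lemmas 4.4.3-4.4.5:

```python
# Schematic form of the excitation-selection loop (illustrative only).
def select_excitations(ports, forces_relation):
    chosen, rejected = [], []
    for port in ports:
        natural = port.natural_excitation    # e.g., current at an inductively
        opposite = port.opposite_excitation  # terminated port; voltage otherwise
        if not forces_relation(chosen, natural):
            chosen.append(natural)           # the "desirable" choice succeeds
        elif not forces_relation(chosen, opposite):
            chosen.append(opposite)          # fall back to the opposite variable
        else:
            rejected.append(port)            # both constrained: i = v = 0, drop port
    return chosen, rejected
```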
Example 4.4.1. Consider the network of Fig. 4.4.1a. The initially prescribed excitation variable is V_1 and the initially prescribed response variable is I_2. The circuit is redrawn in Fig. 4.4.1b to exhibit the effect of reactance extraction and the effect of assigning each response variable to a port. We take first

e_1 = u_1 = V_1

FIGURE 4.4.1. Network for Example 4.4.1.

Next, we introduce a voltage generator u_2 at the short-circuited port with current I_2 = y_2. It is easy to check that u_1 and u_2 are independent, so we take

e_2 = u_2

Figure 4.4.2 shows a circuit for which the excitations u_1 and u_2 cannot be chosen independently. The problem of finding state-space equations for the circuit with these excitations is therefore ill posed.

FIGURE 4.4.2. State-Space Equations Cannot Be Found with u_1 and u_2 as Excitations.
Selection of Remaining Excitation and Response Variables

The nondynamic network N_1 resulting from reactance extraction will be assumed to have m + n ports, with inductor and capacitor terminations at the last n ports yielding N. We assume that excitation variables e_1, e_2, \ldots, e_m have been chosen for the first m ports of N_1 to agree with the (extended) set of excitation variables of N, viz., u_1, u_2, \ldots, u_m. We shall now explain how to choose e_{m+1}, e_{m+2}, \ldots, etc.
In our procedure, we shall have occasion to renumber the ports and perhaps to eliminate some from consideration. If some are eliminated, we assume, as explained earlier, that n is redefined.
Selection of e_{m+1} begins by choosing any port from among the remaining n ports of N_1; suppose that in producing N from N_1 this port is inductively terminated. Let \hat{\imath} denote the current at this port and \hat{v} the voltage. If there is no relation of the form

a_1 e_1(\cdot) + \cdots + a_m e_m(\cdot) + a_{m+1}\hat{\imath}(\cdot) = 0 \qquad a_{m+1} \ne 0 \qquad (4.4.26)

that holds independently of e_1, e_2, \ldots, e_m and of the terminations on ports other than the first m and the port under consideration, we select

e_{m+1} = \hat{\imath} \qquad (4.4.27)

A similar procedure is followed in case the port is capacitively terminated. The quantities \hat{\imath} and \hat{v} are as before, but now we vary (4.4.26). If there is no relation of the form

a_1 e_1(\cdot) + \cdots + a_m e_m(\cdot) + a_{m+1}\hat{v}(\cdot) = 0 \qquad a_{m+1} \ne 0 \qquad (4.4.28)

that holds independently of e_1, e_2, \ldots, e_m and of the terminations on ports other than the first m and the port under consideration, we select

e_{m+1} = \hat{v} \qquad (4.4.29)

Several points should be noted. First, in (4.4.27) and (4.4.29), e_{m+1} agrees with the excitation that we chose in the last section. Second, as the notation implies, in case (4.4.26) or (4.4.28) do not hold, the port under consideration becomes the (m + 1)th port. Third, in the event that e_{m+1} is selected as described above, we are assured that there do not exist constants a_1, a_2, \ldots, a_{m+1}, not all zero, for which a relation

a_1 e_1(\cdot) + a_2 e_2(\cdot) + \cdots + a_{m+1} e_{m+1}(\cdot) = 0 \qquad (4.4.30)

is forced to hold irrespective of the terminations or excitations on all but the
first m + 1 ports of N_1. [Failure of (4.4.26) or (4.4.28) assures us of the nonexistence of the a_i if a_{m+1} \ne 0. If a_{m+1} = 0, e_{m+1} is free, and the termination on port m + 1 can be considered arbitrary. In that case, the material of the last subsection guarantees nonexistence of the a_i.]
Now let us consider what happens if (4.4.26) or (4.4.28) holds. For the moment, we shall leave the excitation variable of the port under consideration unassigned and select a different port from those to which excitation variables have not been assigned, always assuming that we have not exhausted the number of ports. Thus we obtain a new candidate for port (m + 1). We treat it just like the first candidate; if in constructing N it is inductively terminated, we ask whether there is a relation like (4.4.26), and, if not, we assign e_{m+1} as the port current. Likewise, if in constructing N the port is capacitively terminated, we ask whether there is a relation like (4.4.28). If not, then we assign e_{m+1} as the port voltage.
If the equivalent of (4.4.26) or (4.4.28) is satisfied, then we select another candidate port from among those not yet considered and proceed as for the first two candidates.
Eventually, one of two things happens.
1. One candidate is successful, and the excitation e_{m+1} is assigned as a current in the case of a port terminated in an inductor when N is constructed, and a voltage in the contrary case.
2. No candidate is successful.

Let us leave case 2 for the moment, and assume that case 1 applies. Note that no relation of the form (4.4.30) is forced to hold for constants a_i, not all zero, and arbitrary terminations (or excitations) on the remaining ports.
If case 1 applies, we again go through the candidate selection process and attempt to locate a port with the following properties. If the port is inductively terminated when N is constructed from N_1, there must not exist a relation of the form

a_1 e_1(\cdot) + \cdots + a_{m+1} e_{m+1}(\cdot) + a_{m+2}\hat{\imath}(\cdot) = 0 \qquad a_{m+2} \ne 0 \qquad (4.4.31)

(where \hat{\imath} is the port current) holding for all e_1(\cdot) through e_{m+1}(\cdot) independently of the terminations on ports other than ports 1 through (m + 1) and the port under consideration. If (4.4.31) does not hold, we select

e_{m+2} = \hat{\imath} \qquad (4.4.32)

If the candidate port is capacitively terminated in the construction of N, the property desired is that there exist no relation of the form

a_1 e_1(\cdot) + \cdots + a_{m+1} e_{m+1}(\cdot) + a_{m+2}\hat{v}(\cdot) = 0 \qquad a_{m+2} \ne 0 \qquad (4.4.33)
(where \hat{v} is the port voltage) holding for all e_1(\cdot) through e_{m+1}(\cdot) independently of the terminations on ports other than ports 1 through (m + 1) and the port under consideration. If (4.4.33) does not hold, of course we select

e_{m+2} = \hat{v} \qquad (4.4.34)

Assuming that we find a successful candidate, we number this port as m + 2. We note that e_{m+2} agrees with the excitation chosen in the last section, and that there cannot exist constants a_1 through a_{m+2}, not all zero, for which a relation of the form

a_1 e_1(\cdot) + \cdots + a_{m+2} e_{m+2}(\cdot) = 0 \qquad (4.4.35)

is forced to hold irrespective of terminations (or excitations) on ports beyond the first m + 2.
Obviously in this search procedure, one of two things again must happen:
1. One candidate is successful, and the excitation e_{m+2} is assigned appropriately.
2. No candidate is successful.

Assuming case 1 again, we could proceed to try to assign e_{m+3}, e_{m+4}, etc. At some step we shall have assigned all possible ports, or it will be impossible to find a successful candidate.
Now suppose that after combining together the first m ports to which excitation and response variables were assigned and the ports to which excitation and response variables are assigned by the procedures just described, a total of l < m + n ports has assigned variables. [Thus there will exist no constants a_1, a_2, \ldots, a_l, not all zero, such that

a_1 e_1(\cdot) + \cdots + a_l e_l(\cdot) = 0 \qquad (4.4.36)

is forced to hold irrespective of the terminations or excitations at those m + n - l ports for which no excitation variables have been specified.] Moreover, let \hat{\imath} and \hat{v} denote the current and voltage at any one of these remaining m + n - l ports. If the port is inductively terminated in constructing N, there will exist constants a_1, \ldots, a_l such that for arbitrary e_1 through e_l

\hat{\imath}(\cdot) = a_1 e_1(\cdot) + \cdots + a_l e_l(\cdot) \qquad (4.4.37)

irrespective of the terminations or excitations at the other of the last m + n - l ports. (If this were not the case, then this port would be a successful candidate in the sense already described.) Likewise, if the port is
capacitively terminated in the construction of N, there will exist constants \beta_1, \ldots, \beta_l such that for arbitrary e_1 through e_l

\hat{v}(\cdot) = \beta_1 e_1(\cdot) + \cdots + \beta_l e_l(\cdot) \qquad (4.4.38)

irrespective of the terminations or excitations at the other of the last m + n - l ports.
Suppose for the sake of argument that (4.4.37) applies. We claim that one of two things can happen:
1. \hat{\imath}(\cdot) = \hat{v}(\cdot) \equiv 0 for all e_1(\cdot) through e_l(\cdot) and all terminations on the remaining ports. In this case, we give no further consideration to the port and do not assign excitation and response variables to it.
2. Equation (4.4.38) will not hold. In this case, we assign

e_{l+1} = \hat{v} \qquad (4.4.39)

To establish the claim, let us suppose that (4.4.37) and (4.4.38) do hold simultaneously; we shall prove that \hat{\imath} \equiv \hat{v} \equiv 0. Suppose that the port is terminated in a resistance R, forcing \hat{v} = -R\hat{\imath}. Then we must have

\beta_1 e_1(\cdot) + \cdots + \beta_l e_l(\cdot) + R[a_1 e_1(\cdot) + \cdots + a_l e_l(\cdot)] = 0 \qquad (4.4.40)

This equation holds for all e_1(\cdot) through e_l(\cdot) and all terminations on ports other than the first l and that port terminated in R; the equation also holds for all R, which implies that

a_1 e_1(\cdot) + \cdots + a_l e_l(\cdot) = 0 \qquad \beta_1 e_1(\cdot) + \cdots + \beta_l e_l(\cdot) = 0 \qquad (4.4.41)

This equation holds for all e_1(\cdot) through e_l(\cdot), all terminations on ports other than the first l and that port terminated in R, and for all resistive (and therefore all) terminations on the port terminated in R; i.e., the equation holds for all terminations on ports other than the first l. This is a contradiction unless \beta_i = a_j = 0 for all i, j [see (4.4.36) and associated remarks]. Thus (4.4.37) and (4.4.38) will only hold simultaneously if

\hat{\imath}(\cdot) \equiv \hat{v}(\cdot) \equiv 0 \qquad (4.4.42)

irrespective of e_1(\cdot) through e_l(\cdot) and port terminations on other than the first l ports and the port under consideration. The claim is therefore established.
If (4.4.42) holds, the reason for rejecting consideration of the port with these constraints is obvious. Notice that the presence of a terminating
inductor at the port when N is constructed is irrelevant; if the inductor were removed, or replaced by any other component, the relation between the input and output variables of N would be unchanged.
The above arguments have been based on an assumption that the port under consideration is inductively terminated in the construction of N. Obviously, the only variation in the case of capacitor termination is to say that either (4.4.42) holds (in which case the port is rejected from consideration) or (4.4.38) holds but (4.4.37) does not hold, and we set

e_{l+1} = \hat{\imath} \qquad (4.4.43)

Clearly, if not all ports are rejected through condition (4.4.42), we will assign e_{l+1}(\cdot). Further, since if e_{l+1} = \hat{v}, Eq. (4.4.38) does not hold, and if e_{l+1} = \hat{\imath}, Eq. (4.4.37) does not hold, it follows that there do not exist constants a_1, a_2, \ldots, a_{l+1}, not all zero, such that

a_1 e_1(\cdot) + \cdots + a_{l+1} e_{l+1}(\cdot) = 0 \qquad (4.4.44)

[Of course, (4.4.37) only guarantees nonexistence with a_{l+1} \ne 0. But the nonexistence of constants a_1, \ldots, a_l satisfying (4.4.36) for arbitrary excitation at port l + 1 means that (4.4.44) cannot hold if a_{l+1} = 0.]
Next we must select e_{l+2}. The procedure is much the same. Select a port from those for which no excitation variables have been assigned, and suppose for the sake of argument that it is inductively terminated when N is constructed from N_1. Suppose that the port current is \hat{\imath} and port voltage \hat{v}. There exist constants a_1, \ldots, a_{l+1} such that a relation of the form

a_1 e_1(\cdot) + \cdots + a_{l+1} e_{l+1}(\cdot) + \hat{\imath}(\cdot) = 0 \qquad (4.4.45)

is forced to hold for arbitrary terminations on ports other than the first l + 1 and the port under consideration. [Actually, a_{l+1} = 0 in (4.4.45), for the port under consideration must have failed to be a "successful candidate" for port l + 1, and the way it would have failed would have been through (4.4.45) holding without the a_{l+1}e_{l+1}(\cdot) term.] One of two possibilities must be true:
1. \hat{\imath}(\cdot) = \hat{v}(\cdot) \equiv 0 for all e_1(\cdot) through e_{l+1}(\cdot) and all terminations on the remaining ports. In this case, we give no further consideration to the port and do not assign excitation and response variables to it.
2. There do not exist constants \beta_1, \beta_2, \ldots, \beta_{l+2}, not all zero, such that a relation of the form

\beta_1 e_1(\cdot) + \cdots + \beta_{l+1} e_{l+1}(\cdot) + \beta_{l+2}\hat{v}(\cdot) = 0 \qquad (4.4.46)

is forced to hold irrespective of the terminations or excitations on ports other than the first l + 1 and the port under consideration. In this case, we take

e_{l+2} = \hat{v} \qquad (4.4.47)

The argument establishing this claim is the same as the argument establishing the claim in respect of the selection of an excitation variable for port l + 1.
Of course, if the port under consideration is capacitively terminated in forming N, either possibility 1 above holds or we can take

e_{l+2} = \hat{\imath} \qquad (4.4.48)

For either sort of termination, so long as possibility 1 does not hold, it is guaranteed that there do not exist constants a_1, a_2, \ldots, a_{l+2}, not all zero, such that the relation

a_1 e_1(\cdot) + \cdots + a_{l+2} e_{l+2}(\cdot) = 0 \qquad (4.4.49)

is forced to hold independently of the terminations or excitations on ports past the first l + 2.
Clearly, we can continue in this fashion and assign response and excitation variables to all the remaining ports (other than those rejected from consideration).
Notice that the first l - m of the last n ports of N_1, i.e., those terminated in reactances to yield N, will have excitation variables agreeing with those chosen in the last section; in fact they will have "natural" excitation variables. The last m + n - l ports will have the opposite excitation variables to those which the last section suggested should have been chosen.
Notice also that the response variable r_{l+i}, 1 \le i \le m + n - l, of any of the last m + n - l ports would have been the natural excitation variable but for the fact that after the selection of e_{m+1}, \ldots, e_l, we were faced with equations of the form

a_1 e_1(\cdot) + \cdots + a_l e_l(\cdot) + a_{l+i} r_{l+i}(\cdot) = 0 \qquad a_{l+i} \ne 0 \qquad (4.4.50)

[Equations (4.4.37) and (4.4.45), where in the latter equation a_{l+1} = 0 as we noted, are examples of this equation.] Equation (4.4.50) of course holds for all terminations and therefore all excitations at ports other than ports 1 through l and port l + i, so it still holds after the selection of e_{l+1}, e_{l+2}, \ldots, e_{m+n}.
The last point to note (which is the whole point of the preceding development) is that e_1(\cdot) through e_{m+n}(\cdot) have the property that there do not exist constants a_1, a_2, \ldots, a_{m+n}, not all zero, such that the relation

a_1 e_1(\cdot) + a_2 e_2(\cdot) + \cdots + a_{m+n} e_{m+n}(\cdot) = 0 \qquad (4.4.51)
is forced to hold. In other words, e_1(\cdot) through e_{m+n}(\cdot) may always be arbitrary and independently assigned.
Example Consider the circuit of Fig. 4.4.3a with excitation Vand response I. This
4.4.2
is redrawn to exhibit a reactance extraction in Fig. 4.4.3b, and port
variables for the associated nondynamic network are shown in Fig.
4.4.3~.
We would first assign e, = V,then ez = VZ (this being the desirable
excitation, as a capacitor termination is used to form the circuit of Fig.
4.4.3aj. Next, we would take e3 = V3(this also being the desirable excitation). We would l i e to take e4 = V4, but we observe the relation
which holds independently of e,, e,, and e,. Therefore, we must take
(b)
FIGURE 4.4.3. Network for Example 4.4.2.
STATE-SPACE EQUATIONS
SEC. 4.4
2 91
(cl
FIGURE 4.4.3. (cont.)
Notice that there is no constraint relation of the form
Hybrid-Matrix Existence
We have so far shown how to select excitation variables e,(.)
through em+,(-)and corresponding response variables r , ( . ) throughr,+,(.)
for the nondynamic network N,. We claim that with the choice of variables
made, a hybrid matrix M exists for N,. The argument establishing existence
is based on the nonexistence of constants cl,, a,, . . ,a,,, suchthat arelation
of the form
.
is forced to hold. This nonexistence implies that arbitrary choice of (finite)
e,(.) through em,,(-) lead to fmite r , ( . ) through r,+.(.). It is then clear how
to define the hybrid matrix M; we take
and m , is guaranteed to be finite; i.e., the hybrid matrix exists.
192
Example
4.4.3
NElWORK ANALYSIS VIA STATE-SPACE EQUATIONS
CHAP. 4
Consider the network discussed in Example 4.4.2 and depicted in Fig.
4.4.3. In Example 4.4.2 we identified
el=Vl
e,=V2
e3=V3
e,=14
Obviously
and it is easy to verify that
[]
Ri'
= [-ti1
v,
-RT'
0
Ri:iG1
0
0
1
Key Property of the Hybrid Matrix
Now we prove a property of the hybrid matrix that is essential
in establishing the state equations. We make use of the passivity of the
nondynamic network N, in proving the property.
We shall define a partitioning of the hybrid matrix in a fairly natural way.
(This partitioning will differ from a more complex one to be used in setting
up the state-space equations.) The hybrid matrix is (m $ n) x (m n);
roughly speaking, the first I excitation variables are desirable, and the last
m n - I are not. Let us partition the hybrid matrix Mas
+
+
where M,, is I
X
I, and write
where Ed is the vector (el,e,, . . . ,e,)', etc. From (4.4.50) we see that each
entry of Xu is a linear combination only of entries of Ed; i.e.,
Since M is the hybrid matrix of a passive network, it follows by Lemma
4.4.2 that
MI, = -Ma:
Notice that (4.4.54) and (4.4.55) hold for Example 4.4.3.
(4.4.55)
SEC. 4.4
STATE-SPACE EQUATIONS
193
Generation of State-Space Equations from the
Hybrid Matrix
Finally, we are in sight of our goal. To set up the statespace
equations, a somewhat complicated partitioning of the hybrid matrix M is
required. This is best described in terms of the partitioning of the excitation
vector (e,, e,,
,em,,)', the end result of the partitioning being illustrated
in Fig. 4.4.4. The first m entries of this vector are, and will now be denoted
...
(inductor
terminations
e2yie1d.N)
[capacitor
terminations
e, yield N)
(inductor
terminations
e4yield N)
+
(capacitor
terminations
e5 yield N)
FIGURE 4.4.4. selection of Excitation Variables Prior to
Generating StateSpace Equations.
by, u, where u is the vector of (the extended set of) inputs of N. It will be
recalled that the next 1 m entries correspond to "desirable" selections of
excitations, so that in the case of an inductively terminated port of N,, the
excitation is a current. Without loss of generality, suppose that the I - m
ports under consideration are renumbered so that all those ports that are
inductively terminated in constructing N from N, occur first, i.e., as ports
m 1, m 2, . ,and that those ports which are capacitively terminated
in constructing N from N, occur second, i.e., as ports. . 1 - 1, I. The
excitation variables for the first group of ports will be denoted by the vector
E, and for the second group by E,. Note that E, is a vector of currents and
E, a vector of voltages.
There remain m n - 1 ports-those where the excitations were "undesirable" selections. Without loss of generality, suppose that the ports are
-
+
+ ..
.
+
194
NETWORK ANALYSIS VIA STATE-SPACE EQUATIONS
CHAP. 4
renumbered so that all those ports inductively terminated in constructing N
from N, occur first, and those capacitively terminated occur second. The
excitation variables for the first group of ports will be denoted by the vector
E, ahd for the second group by E,. Note that E, is a vector of voltages and
E, a vector of currents.
Suppose that the response vectors corresponding to u and E, through E,
are9 and R, through R,; Thehybrid matrix M is partitioned conformably
with e and r, so that
Notice that we have used the structural property of the hybrid matrix
M proved in the last subsection and summarized in (4.4.54) and (4.4.55).
Note, however, that the partitioning of M applying in these equations differs
from the new partitioning of M.
Now we define S,,e,, S,,and C?, as diagonal matrices whose diagonal
entries are the values, respectively, of the inductors in N terminating the ports
of N, with excitation E,, the capacitors in N terminating the ports of N ,
with excitation E,, the inductors in N terminating the ports of N, with
excitation E,, and the capacitors in N terminating the ports of N, with
excitation E,.
When N is formed from N,, the following constraints on the port voltages
and currents of N, are forced. (Note the sign conventions shown in Fig.
4.4.4.)
When theseare combined with (4.4.56) and 3 is identified with y, the output
of N,we have
..
.
SEC. 4.4
195
STATE-SPACE EQUATIONS
and
Using (4.4.59) in (4.4.581, it follows that
and
Equations (4.4.60) and (4.4.61) are almost, but not quite, the final state
equations. We wish to note several points concerning these equations.
in (4.4.60) is ofthe form A BCB',
First, the coefficient matrix of [I!?* E3J'
where A and Care positive definite. Therefore, this matrix is invertible, and
we can solve (4.4.60) for [p2 .&",I'
Second, the input u enters into (4.4.60) both directly and in differentiated
form. Equation (4.4.60) is not therefore the usual form of the state-space
equation, even if solved for [I!?= &]'.
Third, the entries of E, are inductor currents and the entries of E, are
capacitor voltages in N. Also, the entries of R, are inductor currents and those
of R, are capacitor voltages, though R, and R , do not appear explicitly in
(4.4.60). We see from (4.4.59), though, that R, and R, are linear combinations
of u, E,, and E,.
Fourth, (4.4.61) shows that, in general, y depends linearly on E,, E,, u,
and ti. [From (4.4.60) it follows that the apparent dependence of y on E=and
E, in (4.4.61) can be eliminated, since Ez and E~ depend on El,E,, u,
and ti.]
To simplify notation, let us now rewrite (4.4.60) in the form
+
196
NETWORK ANALYSIS VIA STATE-SPACE EOUATIONS
Dl* = D,x
+ D,u - D4D,zi
CHAP. 4
(4.4.62)
where the definition ofall quantities is obvious, save D,, which is [M,,M IJ.
We rewrite Eq. (4.4.61) in the form
y = D,x
+ D,u + D:D,D,li
3.DJY42
(4.4.63)
Now define a new variable
3 =x
+ D;'D,D,u
(4.4.64)
The reason for introducing this variable is to eliminate the zi terms in (4.4.62).
Of course, 2 is the new state variable. When (4.4.64) is substituted into
(4.4.62) and (4.4.63), there results
2 = D;'D13 f D;l(D3 - D,D;'D,D,)u
+
y = (D6 f f 5 D h D i 1 D 3 3
[ D , - D,D;'D,D, $- f f ; D ; D ; l ( D ,
+
+ (D;D,D, - D5PbD;'D,D,)ti
- D2Di1D,D,)]u
(4.4.65)
Equations (4.4.65) constifufe a set of state-space equationsfor N. As noted
in an earlier section, a term involving 5 in the second of (4.4.65) may be
inescapable.
Example Consider the circuit of Fig. 4.4.5a with I as the input and Vas the output
4.4.4
variable. A reactance extraction is exhibited in Fig. 4.4.5b. The exciting
variables for the hybrid matrix of the nondynamic network are chosen
as follows. First, e, = I. This is obvious. Second, el = V2.Next, we
might trye3 = I 3 , the inductor current, but rhea we would have e, e,
0. Also, we might try e3 = V,, but then we would have e, - e3 = 0.
So we take e, = V,,an "undesirable" selection since port 3 is terminated
in an inductor in constructingthe circuit of Fig. 4.4.5a. Also, we make the
"undesirable" selection el = I,.
The hybrid mamx is readily found, and
+
I;[]:-
0 1 1
0
I
0 1 0
0
I,
::
[;]=[I;
As required, the bottom right 2 x 2 submatrix is zero, and the lower left
and upper right 2 x 2 submatrix are the negative transpose ofeach other.
The appropriate version of Eq. (4.4.58) is derived by setting V3 =;
-2i3, I1
-pl. and I , = 3,. The result is
-
-
FIGURE 4.4.5. Network for Example 4.4.4.
198
NETWORK ANALYSIS VIA STATE-SPACE EQUATIONS
CHAP. 4
Similarly, from (4.4.59) we obtain
(These equations are self-evidentfrom Pig 4.45c.) Combining these two
equations and replacing I by 11 and V by y, we have
Propertiesof the State-Space Equations
Now we wish to note some important points arising out of the
development of (4.4.65). In the problems of this section, proof is requested
of the following fact.
Theorem 4.4.1. With the definitions
and with g,,B,, S,, and B, positive definite, the matrix
is positive definite symmetric.
Notice that the coefficient of Ei in the expression for ~in(4.4.65)isD;D,Ds.
Thus the following facts should be clear.
1. The coefficient of 11 in the expression for y in (4.4.65) is nonnegative
definite,symmetric.
2. From (4.4.65), y is independent of ti if and only if D, = [M,,
M , , r =O.
3. If D , = 0,no change of variable from x to i is necessary [see (4.4.6411;
Eq. (4.4.62) for x contains no z i term; MI, and M , , are zero by definition
of D,; and the inductor currents R, and capacitor voltages R, depend
[see (4.4.59)l on1y.on E2 and E3 andnot on zt.
4. It is'idvalid to attempt to define the response of N for initial conditions
E,, E,, R,,and R, and excitation u ( . ) which do not satisfy (4.4.59).
The iniiial valuesof the inductor currents E, and capacitor voltages
E, can be arbitrary. The initial values of R , and R, depend only on
E, and E, and not on u if D, = 0.
The following theorem i s a n interesting consequence of points 2 and 3.
SEC. 4.4
STATE-SPACE EQUATIONS
199
Theorem 4.4.2. Consider an m-port network N with port excitation vector u and port response vector y defining a hybrid
matrix M(s) of N via
Y(s) = M(s)U(s)
Then if M ( m ) < m, there exists a set of state-space equations
for N with the state vector consisting of inductor currents and
capacitor voltages.
Proof. If M ( m ) < m , then we can set up equations of the form of
(4.4.62) and (4.4.63) with the entries of x, being entries of E,
and E,, consisting solely of inductor currents and capacitor
voltages. Using the transformation (4.4.64), we may obtain
(4.4.65). Bur with M ( m ) < 03 it follows that ymust be independent of z i and thus that D, = 0. Hence 2 = x and 2 is a state
vector of the required form.
VVV
From the mere existence of statevariables equations of the form of (4.4.65),
which is a nontrivial fact, we can also establish the following important
resutt.
Theorem 4,4.3. Let M(s) be a passive hybrid matrix, and let N
be any passive network with M(s) as hybrid matrix, perhaps
obtained by synthesis.Then the sum of the number of inductors
and capacitors in N is at least 6[M(s)].
Proof. Let (4.4.65) be state-space equations derived for N.
Reference to the way these equations werederived will show that
the dimension of x is the sum of the dimensions of E, and'E, in
(4.4.58),and that D;= [ M I , M I , ] has a numbei of columns equal
to the sum of the dimensions of E, and E,. Since the sum of the
dimensions of E,, E,, E,, and E, is the number of reactive elements in N (excluding perhaps some associated with deleted ports
of N,), the number of reactive elements in N is bouniled belowby
dimension of x
+ number of columns of D;
Now observe, using (4.4.65) and the definition of D,, that the
residue matrix associated with any pole at infinity of elements
of M(s) is D,D,D,. By the definition of degree, it follows that
6[M(s)]5 dimension of x
5 dimension of x
and the result follows.
+ rank LY,D,D,
+ number of columns of D;
VVV
200
NETWORK
ANALYSIS VIA STATE-SPACE EQUATIONS
CHAP. 4
Problem Prove Theorem 4.4.1.
4.4.1
Problem Let N be a passive network synthesizing a prescribed passive scattering
matrix S(s). Show that the number of capacitors and inductors in N is
at least 6[S(s)l.
Problem Let N be a passive network of m ports, with the transfer-function matrix
4.4.3
relating excitations at m, of these ports to responses at m2 of these ports
denoted by W(s).Show that the number of capacitors and inductors in
N is at least 6[W(s)l.
Problem Find state-space equations for the network of Fig. 4.4.6. Take all com4.4.4
ponents to have unit values.
4.4.2
FIGURE 4.4.6. Network for Problem 4.4.4.
Problem Suppose that in a prescribed networkwith external sourceseach capacitor
and ideal voltage source is revlaced bv a like element with 6-ohms series
resistance, and each inductor and ideal current source is replaced by a
like element with E-mhos shunt conductance. Show that state-space
equations can be derived by the method of this section with all excitation
variables of the hybrid matrix being assigned "dtsimbly." What hapgcns
when E
07
-
4.4.5
-.
4.5
STATE-SPACE EQUATIONS FOR
ACTIVE NETWORKS
Up to this point of the chapter, our aim has been to derive statespace equations of passive networks, the elements in which are resistors,
capacitors, inductors, transformers, and gyrators. Now we wish to extend
SEC. 4.5
STATE-SPACE EQUATIONS FOR ACTIVE NETWORKS
201
this set of elements to include contraNed sources; any of the four types,
current-to-current, current-to-voltage, voltage-to-current, or voltagetovoltage are permitted.
The reader will have observed, particularly in Section 4.4, how much
passivity has been used in the schemes described hitherto for setting up
statespace equations. It should therefore come as no surprise that while
state-space equations can always be set up for a passive network (unless there
is some fairly obvious form of ill posing), this is not the case for networks
containing active elements. One startling illustration of this point is provided
with the aid of the nullator. A nullator is a two-terminal device that may be
constructed with controlled sources, as shown in Fig. 4.5.la. (Note that a
negative resistor is constructible with controlled sources, as in Fig. 4.5.1b.)
The Nullator and (b) the Simulation
of a Negative Resistor.
FlGURE4.5.1. (a)
Its port current and port voltage are simultaneously always zero [15], which
means that it looks simultaneously like an open circuit and a short circuit.
Now consider the circuits of Fig. 4.5.2. These circuits can support neither a
nonzero or independent voltage nor a nonzero or independent current source,
and it is obvious that state-space equations cannot be found.
Results on the existence of state-space equations of active networks are
not extensive, though some are available (see, e.g., [16]). In this section we
shall avoid the existence question largely, concentrating on presenting ad
hoc approaches to the development of equations. We can attempt a state-
202
NETWORK ANALYSIS VIA STATE-SPACE EQUATIONS
CHAP. 4
FIGURE 4.5.2. Circuits for which no Statespace -Equa-
tions Exist
. .
space equation 'derivation.at any of the three levels of complexity applicable.
for passive networks, Thus if the network is particularly simple, straightforward inspection and application of Kirchhoff's laws will enable us to
write down a differential equation for e&h indu.ctor current and capacitor
voltage and an equation relating the output to the inductor currents, capacitor
voltages, and the input. A simple illustration follows.
Example Consider the circuit of Fig. 4.5.3. The input variable is V , the output
variable V. Notice that I, is neither an input nor a state variable; so it
4.5.1
FIGURE 4.55. Network for Example 4.5.1.
is necessary to express it in terms of the input and state. The state x is
obviously the capacitor voltage V,, which we can choose to have the
same sign as V. It is immediate from the circuit that
Also,
v. = x
State-space equation derivation at the second level of complexity, if possible, proceeds in the same way for active networks as passive networks,
the nondynamic network containing resistors, transformers, gyrators, and
controlled sources. Rather than spelling out the procedure in detail, we shall
describe state-space equation derivation at the third level of complexity. This
includes derivation at the second level of complexity as a special case.
SEC. 4.5
STATE-SPACE EQUATIONS FOR ACTIVE NETWORKS
203
Equation Derivation at the Third.Levelof'Complexity
The procedure we shall describe may or may not' work; nevertheless, if the procedure does not work, it is unlikely that state-space equations can be found. The procedure falls into three main steps:
. .
1. A reactance extraction is performed on a prescribednetwork N to
obtain a nondynamic network N,, which contains resistors, transformers, gyrators, and controlled sources.
2. A modified form of hybrid matrix is constructed for N,. (The modifieation will be noted shortly.)
3. State-space equations for N are constructed using the modified hybrid
matrix of N,;
Step 1is of course quite straightforward. It is at step 2 that we run into the
first set of difficulties. For example, because N, is not passive it may possess
no hybrid-matrix description at all. Let us however point out how we try to
form a hybrid matrix for N,.ASwe shall see, it sometimes proves convenient
to form only a submatrix of a hybrid matrix, this being the modification
referred to in step 2.
Suppose that N is an m port, and that the originally prescribed set of
excitations is u,, u,, .. . ,u,, and the originally prescribed set of responses
... ,y,. Notice that m, m,, for otherwise port m, 1
is ym2+,,ym2+,,
and perhaps other ports as well would have neither response nor excitation
assigned to them, and therefore they would be dropped from further consideration.
If N were passive, we would assign, or attempt to assign, the first m
excitation variables for N , by extending the m, excitations of N to a full m,
and then identifying e,, the excitation at the ith port of N,, with u,. However,
here we merely set
+
e,=u,
i = 1 , 2 ,..., m,
(4.5.1)
with the response ri at the ith port of N, defined by
Instead of seeking a full hybrid matrix for N,, we seek a submatrix of it only;
disallowing consideration of excitations em,+, through em means that we
delete columns m , 1 through m of the full hybrid matrix of N,, and disallowing consideration of responses r, through r,, means that we delete rows
1throughm, of the full hybrid matrix of N,. The remaining rows and columns
define the appropriate submatrix.
Now suppose that N contains n reactive elements. We follow the procedures of the last section in attempting to assign excitation and response
+
204
NETWORK ANALYSIS VIA STATE-SPACE EQUATIONS
CHAP. 4
variables to N,.*Thus, for as many ports as possible, we attempt to identify
the excitation of the port with a current if the port is inductively terminated
and with a voltage if the port is capacitively terminated in forming N from
N,.Of course, the identification is not permitted should there exist a relationship of linear dependence between the proposed excitation variable and those
already selected, the relationship holding independently of the terminations
at ports with unassigned excitation variables.
In the event that no further excitation variables can be assigned in the
desired way, we then attempt to assign them in the opposite of the desired
way; for example, if a port of N, is inductively terminated when N is constructed and independent current excitation is not possible, then we try an
independent voltage excitation. For passive networks, this procedure, as we
have shown, will always result in an appropriate excitation. For active
networks, it may not. Not every active network possesses a hybrid matrix,
which implies that for some active networks neither a voltage nor current
excitation will be acceptable at one or more ports.
If neither the desirable nor undesirable allocation of excitation variables
is possible, we must abandon our goal of forming a hybrid matrix or of
forming state-space equations by the method being described.
Assume therefore that all excitation variables can be assigned; response
variables for the last n ports of N, are assigned in the obvious manner as the
duals of the excitation variables. The excitation variables are assumed to be
free of any constraints, so that we can argue the existence of a modified
hybrid matrix M, actually a submatrix of a hybrid matrix, for which
'For the reader who has not studied the last section, the short summary h e n will probably suffice.
SEC. 4.5
STATE-SPACE EQUATIONS FOR ACTIVE NETWORKS
205
In this equation, M is shown in partitioned form with the partitioning
conforming with that shown in the excitation and response variables. The
integer 1 defines the last port at which assignation of an excitation variable
as a desired variable is permissible. Notice that M,, is zero, the argument
being the same as in the last section: at ports I I through m n the
desirable excitation depends on the excitations e, through e,. This desirable
excitation becomes an actual response, and so each of r,,, through r,,,
depends only on e , through el. Note also that because N, is not passive, we
cannot conclude that M I , = - M,; or M,, = -hi,:.
+
+
Example Consider the circuit of Fig. 4.5.4a. The circuit is redrawn in Fig. 4.5.4b
to exhibit a reactance extraction. We describe the generation of M with
the aid of Fig. 4.5.4~.
Obviously, we identify e, = V, and r2 = V.. We do not assign e ,
or r,. Next we assign excitation variables at the ports of N, which are
reactively terminated in forming the original circuit.
Assignation of el = V , and e , = V4 causes no problems, and these
are desirable assignations. We would like to set e , = V,, but this is not
possible since
4.5.2
e3 = e,
+ V,
So we try to set e, = I,. This works; i.e., there is no linear relation
involving el, el, el, and the new e,. The response variables are r3 = I,,
r. = I 4 , and r, = V5.Conventional analysis yields
Step 3 of the procedure for obtaining statespace equations requires us to
proceed from the hybrid matrix to the equations. The technique follows
that of the last section; since some differences arise, including possible inability to form the equations, we shall now outline the procedure.
The starting point is (4.5.3), which we shall rewrite in the form
where the definitions of R,,R,, etc., should be obvious from (4.5.3).
We reiterate the following points: the entries of El and R, do not necessarily correspond to the same set of ports, and the entries of E, constitute
"desirable" excitation variables, while those of E3 constitute "undesirable"
excitation variables.
206
NETWORK ANALYSIS VIA STATE-SPACE EQUATIONS
Nr
(b)
for Examples 4.5.2 and 4.5.3.
The Network Models a Transistor.
FIGURE 4.5.4. Network
CHAP.. 4
SEC. 4.5
STATE-SPACE EQUATIONS FOR ACTIVE NETWORKS
207
When N, is terminated in the appropriate reactive elements that yield N,
this amounts to demanding
. .
where 63, is a diagonal matrix of inductor or capacitor values, and also
where 63, is the same sort of matrix a.i a,.Using (4.5.5) and (4.5.6), (4.5.4)
becomes
208
N W O R K ANALYSIS VIA STATE-SPACE EQUATIONS
CHAP. 4
In particular,
+ M32Ez
(4.5.8)
+ M,,E, - M,,(B,M,,~, - M , , ( B , M , , ~ ,
(4.5.9)
R3 = M,iE,
so that, again from (4.5.7),
- ( B , E ~= M,,E,
R, = M,,EI
+ M,2E2 - M , 3 @ 3 ~ 3 , 2 1- ML3@,M3z&
Consider the first equation in (4.5.9). This can be "solved" for El if and
only if (8, - M,,(B,M,z is nonsingular. ~ o r ~ a s s i vN,,e this is guaranteed
by the fact that 63, and a, are positive definite and that M,, = -MA
(the latter following because M,, = 0). No such guarantee applies ijC N, is
active.
Notice that if @, - M,3(83M,, is singular, a perturbation of (8, to, say,
(B,
d f o r an arbitrary smalle will render the matrix nonsingular. I n other
words, (4.5.9) fails to have a solution for k, only for pt?rticular values of the
inductors and capacitors.
Of course, if (4.5.9) is solvablefork,, then there is no difficultyin constructing state-space equations for N. The input vector ic agrees with E,, the output
vector y with R , . Terms in E, may be eliminated from the differentialequation
for the state vector by appropriate redefinition of the state vector, if necessary.
Terms in 8, may occur in the equation for R , , but of course these may not
be eliminated.
If 63, - M,,(B!M,, is singular, it may or may not be possible to obtain
state-space equahons in which each entry of E,, i.e., each excitation, can be
selected independently. Examination of this point is requested in the problems.
+
Example We return to the circuit of F
i
g
.4.5.4a. With quantities as defined in
4.5.3
Fig. 4.5.4~;we found
0
1
-1
1
-1
Using Fig. 4.5.4b, we see that
f3 =
c
It is then easy to derive
I, = -C2d 4
z5=-C3V5
SEC. 4.5
STATE-SPACE EQUATIONS FOR ACTIVE NETWORKS
209
and then
The wefficient matrix multiplying [v, v4r is nonsingular, and so
we can rewrite this equation in the standard form 3 = Fx Gu. The
equation for V. easily follows from the modified hybrid matrix as
+
LJ'dJ
The characterization in topoIogical terms of the difficulties that can be
encounted in setting up state-space equations'for active networks is very
diffidt. More topologically based approaches to setting up the equations
therefore tend to be ad hoc (more so, in fact, than the present procedure),
relying on presenting sufficient conditions for the generation of equations,
or on presenting algorithms that may or may not work (see, e.g., [S]). We
prefer the procedure given here on the grounds that topological ideas do not
play a key role in the formation of the equations-of course they may or
may not be employed in generating the modified hybrid matrix of N,.
There is one additional danger that the reader should be aware of in
analyzing active networks. If the networks have some form of instability,
they may not behave like linear networks because some elements-generally
active ones-may be forced into nonlinear operation. This movement into
the nonlinear regime is in fact the basis of operation of such circuits as the
multivibrator. Many active networks do not exhibit this kind of instability
of course, and the methods of this section are applicable to them
Problem Develop controlled source models for the transformer and the gyrator
to conclude that the basic elements allowed in networks (active or
passive) can all be assumed two terminal. (This is the approach adopted
in many cases, e.g., 181, in applying topological methods to circuits
4.6.1
containing gyrators and transformers.)
Problem Figure 4.5.5 shows a circuit model of a transistor tuned amplifier based
4.5.2
on they-parameter model of the transistor. Develop statespaceequations
210
NETWORK ANALYSIS VIA STATE-SPACE EQUATIONS
CHAP. 4
FIGURE 4.5.5. Network for Problem 4.5.2.
for the circuit. Assume fint that y , and
~ y l , are real constants. What
happens if y , , and y,, are complex and frequency dependent?
Problem Develop procedures for forming statespace equations from (4.5.9) for
4.5.3
the case when
- M23@1M32
is singular.
as
Problem Suppose that an active network possesses statespace equations derivable
4.5.4
via the procedure of this section. Let W(s) be the associated transfer-
function matrix. Show that 6[W(s)l is a lower bound on the number of
capacitors or inductors in the network.
REFERENCES
'
.
:
[ 1] T. R. BASHKOW,
"The A Matrix, a New Network Description," IRE Traitsaclions on Circuit Theory, Vol. CT-4, No. 3, Sept. 1957, pp. 117-120.
[ 2 ] P. R. BRYANT,
"The Order of Complexity of Electrical Networks,"Proceedings
of the IEE, Vol. 106C, June 1959, pp. 174-188.'
[ 3 ] P. R. BRYANT,
''The Explicit ~ 6 r m ' oBashkow's
f
A.Matrix," IRE ~rhnsacfiom
on Circuir Theory, Vol. CT-9, No. 3, Sept. 1962, p p 303-306.
[41 C. F. PRICE and A. BERS,LIDYnamicali~
Independent Equilibrium Equations
for RLC Networks," IEEE Transactions on Circuit Theory, Vol. CT-12, No. I,
March 1965, pp. 8 2 8 5 .
[ 5 ] R. L. WILSONand W. A. MASSENA,
"An Extension of the Bryant-Bashkow A
Matrix," IEEE Transacfionson Circuit Theory, Vol. CT-12, No. 10, March 1965,
pp. 120-122.
[ 6 ] E. S. KWHand R. A. Romm, "The State Variable Approach to Network
Analysis," Proceedings of the IEEE, Vol. 53, No. 7, July 1965, pp. 672-686.
[ 7 I F. H. BRANIN,
"Computer Methods of Network Analysis," in F. F. Kuo and
W. G. Magnuson, Jr. (eds.), Computer Oriented Circuit Design, Prentice-Hall,
Englewood Cliffs, N.J., 1969, pp. 71-122.
[ 8 ] R. A. ROHRER,
Circuit Theory: An infroductiottto the State-Variable Approach,
McGraw-Hill, New York, 1970.
[91 R. YARLAGADDA,
"An Application of Tridiagonal Matrices to Network
Synthesis," SIAM Journal of Applied Mathematics, Vol. 16, No. 6, Nov. 1968,
pp. 1146-1162.
CHAP. 4
REFERENCES
21 1
I103 I. NAVOT,"The Synthesis of Certain Subclasses of Tridiagonal Matrices with
Prescribed Eigenvalues," Journal of SIAM AppIiedMathematics,Vol. 15, No. 12,
March 1967.
[Ill E. R. FOWLER
and R. YARLAGADDA,
"A State-Space Approach to RLCT
Two-Port Transfer-Function Synthesis," IEEE Transactions on Circuit Theory,
Vol. CT-19, No. 1, January 1972, pp. 15-20.
[I21 T. G. MARSHALL,
"Primitive Matrices for Doubly Terminated Ladder Networks," Proceedrngs 4th Allerton Conference on Circuit and System Theory,
University of Illinojs, 1966, pp. 935-943.
[I31 E. A. GUILLEMIN,
Synthesis of Passiye Networks, W~ley,New York, 1957.
[I41 B. D. 0. ANDERSON,
R. W.NEWCOMB,
and J. K. ZUIDWEG,"On the Existence
of HMatrices," IEEE Transactionson Circuit Theory, Vol. CT-13, No. 4, March
1966, pp. 109-110.
[I51 R.W. N ~ w c o mLinear
,
Muhiport Synthesis, McGraw-Hill, New York, 1966.
[I61 E. J. PUSLOW, "Stability and Analysis of Linear Active Networks by the
Use of the State Equations," IEEE Transactions on Circuit Theory, Vol. CT-17,
No. 4, Nov. 1970, pp. 469-475.
Part IV
THE STATE-SPACE
DESCRIPTION O F PASSIVITY
A N D RECIPROCITY
The major task in this part is to state and prove the posittve
real lemma. This is a statement in state-space terms of the positive
real constraint and is fundamental to the various synthesis
procedures.
We break up this part into five segments:
1. Statement and partial proof of the positive real lemma.
2. Various proofs of the positive real lemma, completing the
partial proof mentioned in 1.
3. Computation of certain matrices occurring in the positive
real lemma.
4. The bounded real lemma.
5. The reciprocity constraint in state-space terms.
Part IV can be read in three degrees of depth. For the reader
who is interested in how to synthesize networks, with scant
regard for computations, 1,4, and 5 above will suffice. This
material is covered in Sections 5.1 and 5.2 and Sections 7.1, 7.2,
7.4, and 7.5. For information regarding computation, see
Chapter 6 and Section 7.3. For the reader who desires the full
story, all of Part IV should be read.
Positive Real Matrices
and the Positive Real Lemma
In this chapter our aim is to present an interpretation of the positive real constrain-usually viewed as a frequency-domain constraint-in
terms of the matrices of a state-space realization of a prescribedpositive real
matrix. In this introductory section we restate the positive real definition and
note several minor wnsequences of it. Then we go on h the next section to
state the positive real lemma, a set of constraints on the matrices of a statespace realization guaranteeing that the associated transfer-function matrix
is positive real. We also prove that the stated conditions are sufficient to
guarantee the positive real property.
The proof that the conditions on the matrices of a state-space realiiation
statedin the positive real lemma are in fact necessary to guarantee the positive
real property is quite difficult. In later sections we tackle this problem from
different points of view. In fact, we present several proofs, each relying on
different properties associated with a positive real matrix. One's familiarity
with these properties is a function of background; therefore, the degree of
acceptability of the different proofs'is also a function of background. It is not
necessary even to read, let alone understand, any of the proofs for later
chapters (though of wnrse an understanding of the positive real lemma
statement is required). So we invite the reader to omit, s k i , or study in
detail as much or as little of the proofs as he wishes. For those unsure as to
21 6
POSITIVE REAL MATRICES
CHAP. 5
which proof may be of interest, we suggest that they be guided by the section
titles.
As we know, an m x m matrix Z(s) of real rational functions is positive
. ....
.
real if
>.
1. All elements of Z(sj are analytic in Re [s] > 0.
2. Any pure imaginary pole jw, of any element of Z(s) is a simple pole,
and the associated residue matrix of Z(s) is nonnegative definite Her-
mitian.
3. For all real w for which jw is not a pole of any element of Z(s), the
matrix Z f ( j w ) Z(jw)is nonnegative definite Hennitian.
+
Conditions.2 and 3 may ,alternatively'be replaced by
4. The matrix Z'*(s)
..
+Z(s)is nonnegative definite Hermitian in Re[s]> 0.
These conditions for positive realness are essentially analytic rather than
algebraic; the positive real lemma, on the other hand, presents algebraic
conditions on the matrices of a state-space realization of Z(s). Roughly
speaking, the positive real lemma enables the translation of network synthesis problems from problems of analysis to problems of algebra.
In the remainder of this section we shall clear two preliminary details out
of the way, both of which will be helpful in thesequel. These details relate to
minor decompositions of Z(s) to isolate theeffect of pure imaginary poles.
We can conceive a partial fraction expansion of each elementof Z(s) [and
thus of Z(s) itself] being made, with a subsequent grouping together ofterms
withpol& on the jw axis, is., poles ofthe f o ~ for
& some
~ ~ real o,,and
polesip the half-planeRe [s] < 0. (Such'a grouping together is characteristic
of the ~ o s t epreamble
r
of classical network synthesis procedures, which may
be'fimiliar to some readers.) This allows us to write Z(s) in the form
where L, C, and the K, are nonnegative definite Hermitian, and the poles
of elements of Z,(s) all lie in Re [s] < 0. We have already argued that L
must be real and symmetric, and it is easy to argue that C must be real and
symmetric. The matrix Kt, on the other hand, will in general be complex.
However, if jw, for a, f 0 is a pole of an element of Z(s), -jw, must also
be a pole, and the real rational nature of Z(s) implies that the associated
residue matrix is KT. This means that terms in the summation
A
F s - jw,
occur in pairs, a pair being
Kt
K"
A$
s--*+=siw,=+w~f
+ B,
SEC. 5.1
INTRODUCTION
21 7
where A, and If, are real matrices; in fact, A, = K. iKT and is nonnegative
definite symmetric, while B, =jw,(K, - K?) and is skew symmetric. So we
can write
if desired. We now wish the reader to carefully note the following points:
+
+
are LPR. The fact that they are
I . sL, s-'C, and (A,s If,)(?
PR is.immediate from the definitions, while the lossless character follows, for example, by observing that
.
.
is LPR. This is immediate from 1.
3. Z,(s) is PR! To see this observe that all poles of eiements of Z,(s) lie in
Re Is] < 0, while Z'?(jw)
Zi-(jo)= Z!*(jm) ZL(jo)t %*(jw)
Z,(jw) = Z1*(jw)t Z(jw) 2 0 on using the lossless character of
ZL(J).
+
+
+
Therefore, we are able [by identifying the jo-axis poles of elements of Z(s)]
to decompose an arbitrary PR Z(s) hto the sum of a LPR matrix, and a PR
marrix such that all poles of all elements lie in Re [s] < 0.
Also, it is clear that Z,(s) = Z(s) - sL is PR, being the sum of Z,(s),
which is PR, and
which is LPR. Further, Z,(w) < co. So we are able to decompose an arbitrary PR Z(s) into the sum of a LPR matrix sL, and a PR matrix Z,(s) with
Z,(M)
< m.
In a similar way, we can decompose an arbitrary PR Z(s) with one or more
elements possessing a poIe at s = jw, into the sum of two PR matrices
with the first matrix LPR and the second matrix simply PR, with no element
possessing a pole at jo,.
218
Problem
5.1.1
POSITIVE REAL MATRICES
CHAP. 5
Show that Z(s) is LPR if and only i f Z ( s ) has the form
Z(s)=sL+Jf
where L
Ai
= L'
2 0, J
Als+Br
Fm
+s-jC
+ J' = 0, C = C' 2 0, A, and B, are real, and
+ Bfiw, is nonnegative definite Hermitian.
5.2
STATEMENT AND SUFFICIENCYPROOF
OF THE POSITIVE REAL LEMMA
In this section we consider a positive real matrix Z(s) of iational
functions, and we assume from the start that Z ( w ) < w . As we explained
in Section 5.1, in the absence of this condition we can simply recover from
Z(s) a further positive real matrix that does satisfy the restriction. The lemma
to be stated will describe conditions on matrices appearing in a minimal
state-space realization that are necessary and sufficient for Z(s) to be positive
real. We shall also state a specialization of the lemma applying to lossless
positive real matrices.
The first statement of the lemma, for positive real functions rather than
positive real matrices, was given by Yakubovic [l] in 1962, and an alternative
proof was presented by Kalman [2] in 1963. Kalman conjectured a matrix
version of the result in 1963 [3], and a proof, due to Anderson [4],appeared
in 1967. Related results also appear in [5].
In the following, when we talk of a realization {F, G, H, J) of Z(s), we
shall, as explained earlier, imply that if
then
where S b ( . ) ] = Y(s), S[u(,)]= U(s). If now an m x m Z(s) is the impedance of an m-port network, we understand it to be the transfer-function
matrix relating them vector of port currents to them vector of port voltages.
In other words; we ideirtfy the vector u of the state-space equations with
the port current and the vector y with the port voltage of the petwork.
The formal statement of the positive real lemma now follows.
Positive Real Lemma. Let Z(.) be an m x m matrix of real
rational functions of a complex variable s, with Z ( w ) < m. Let
[F, G, H,J) be a minimal realization of Z(s). Then Z(s) is positive
real if and only if there exist real matrices P, L,and W,with P
positive definite symmetric, such that
219
POSITIVE REAL LEMMA
PF
+ F'P = -LL'
(The number of rows of Wo and columns of L are unspecified,
while the other dimensions of P, L, and W , are automatically
iixed.)
In the remainder of this section we shall show that (5.2.3) are suficient
for the positive real property to hold, and we shall exhibit a special factorization, actually termed a spectral factorization, of Zi(-s)
Z(s). Finally, we
shall state and prove a positive real lemma associated with lossless positive
real matrices.
+
Sufficiency Proof of the Positive Real Lemma
First, we must check analyticity of elements of Z(s) in Re [s]> 0.
Since Z(s) = J H'(sI - F)-lG, an element of Z(s) will have a pole at s
= s, only if s, is an eigenvalue of F. The first of equations (5.2.3), together
with the positive definiteness of P, guarantees that all eigenvalues of F have
nonpositive real part, by the lemma of Lyapunov. Accordingly, all poles of
elements of Z(s) have nonpositive real parts.
Finally, we must check the nonnegative character of Z'*(s) Z(s) in
Re [s]> 0. Consider the following sequence of equalities-with reasoning
following the last equality.
+
+
Z1*(s)i- Z(s)
= J'
J G1(s*I- P')-'H + H'(sI - f l - 1 ~
= WbW,
G1[(s*I- F1)'lP P(sI - fl-I]G
G'(s*I - F')-'LW,
WbL1(sI- F)-'G
+ +
+
+
+
+
+ G'(s*I - Fp?-l[P(s+ s*) - PF - F ' p ] ( s ~ -F)-IG
+ G1(s*I- F)-'LW, f WLL'(sl- F)-'G
= WLwo + G'(s*~--F)-'LL'(sI - F)-'G + G'(s*I- F1)-lLWo
+ WLLr(sI- F)-'G + G'(s*I - F')-'p(s~- F ) - ' G ( ~+ s*)
[wb+ G'(s*I - fl-'L][W, + L'(sI - G]
= WbWa
=
9 - 1
i- G'(s*I - F')-'P(sI
- F)-lG(s
+ s*)
The second equality follows by using the second and third equation of (5.2.3),
the third by manipulation, the fourth by using the first equation of (5.2.3)
and rearrangement, and the final equation by manipulation.
POSITIVE REAL MATRICES
220
CHAP. 5
Now since any matrix of the form A'*A is nonnegative definite Hermitian,
since s s* is real and positive in Re [s] > 0, and since P is positive definite,
it follows, as required, that
+
To this point, sufficiency only is proved. As remarked earlier, in later
sections proofs of necessity will be given. Also, since the quantities P,L, and
W nprove important in studying synthesis problems, we shall be discussing
techniques for their computation. (Notice that the positive real lemma statement talks about existence, not construction or computation. Computation
b a separate question.)
Example It is generally difficult to compute P, L, and W. from F, G, H, and J.
5.2.1
Nevertheless, these quantities can be computed by inspection in very
simple cases. For example, z(s) = (s f I)-' is positive real, with realiza-
tion (-1.1. 1.0). Observe then that
is satisfied by P = [I], L = [ d l , and W, = [O]. Clearly, P is positive
definite.
A Spectral Factorization Result
We now wish to note a factorization of Z'(-s) fZ(s). We can
carry through on this quantity essentially the same calculations that we
carried through on Z"(s)
Z(s) in the sufficiency proof of the positive real
Iemma. The end result is
+
where W(s) is a real rational transfer-function matrix. Such a decomposition
of Z'(-s)
Z(s) has been termed in the network and control literature
a spectralfactorization, and is a generalization of a well-known scalar result:
if z(s) is a positive real function, then Rez(jw) 2 0 for all w (save those w
for which jw is a pole), and Re z(jw) may be factored (nonuniquely) as
+
Re z(jw) = I w(jo) jZ = w(- jw)w(jw)
where w(s) is a real rational function of s.
POSITIVE REAL LEMMA
SEC. 5.2
221
Essentially then, the positive real lemma says something about the ability
to do a spectral factorization of Z'(-s)
Z(s), where Z(.)may be a matrix.
Moreover, the spectral factor W(s) can have a slate-space realization with
the same F and G matrices as Z(s). [Note: This is not to say that all spectral
factors of Z'(-s)
Z(s) have a state-space realization with the same Fand
C matrices as Z(s)-indeed this is certainly not true.] Though the existence
of spectral factors of Z'(-s)
Z(s) has been known to network theorists
for some time, the extra property regarding the state-space realizations of
some spectral factors is, by comparison, novel. We sum up these notions as
follows.
+
+
+
Existence of a Spectral Factor. Let Z(s) be a positive real
matrix of rational functions with Z(m) < m. Let IF,G, H, J) be
a minimal realization of Z(s), and assume the validity of the
positive real lemma. In terms of the matrices L and W , mentioned therein, the transfer-function matrix
W(s) = W ,
+ L'(sI - F)-'G
(5.2.5)
is a spectral factor of Z'(-s) f Z(s) in the sense that
+
Example For the positive real impedance (s I)-', we obtained in Example 5.2.1
5.2.2
a minimal realization 1-1,1, 1, Oh together with matrices P,L, and WO
satisfying the positive real Iernma equations. We hadL =
and W,
= [O]. This means that
[a]
and indeed
as predicted.
Positive Real Lemma in the Lossless Case
We wish to indicate a minor adjustment of the positive real
lemma, important for applications, which covers the case when Z(s) is lossless. Operationally, the matrices L and W , become zero.
Lossless Positive Real Lemma. Let Z ( . ) be an m x m matrix of real rational functions of a complex variable s, with Z(m)
< m. Let (F,G, H, J ) be a minimal realization of Z(s). Then
222
POSITIVE REAL MATRICES
CHAP. 5
Z(s) is lossless positive real if and only if there exists a (real)
positive definite symmetric P such that
If we assume validity of the main positive real lemma, the proof of the
lossless positive real lemma is straightforward.
Sufficiency Proof. Since (5.2.6) are the same as (5.2.3) with L
and W eset equal to zero, it follows that Z(s) is at least positive
real. Applying the spectral factorization result, it follows that
Z(s) Z'(-s) 3 0. This equation and the positive real condition
guarantee Z(s) is lossless positive real, as we see by applying the
definition.
+
Necessity Proof. If Z(s) is lossless positive real, it is positive
real, and there are therefore real matrices P,L, and W osatisfying
(5.2.3). From L and W owe can construct a W(s) satisfyingZ1(-s)
Z(s) = W'(-s)W(s). Applying the lossless constraint, we see
that W'(-s)W(s) = 0. By setting s =jw, we conclude that
W'*(jo)W(jo) = 0 and thus W(jw) = 0 for all real o. Therefore,
W(s) = W, L'(s1- n - ' G = 0 for all s, whence W o= 0 and,
by complete controllability,* L = 0.
VVV
+
+
Example
5.2.3
+
+
The impedance function z(s) = s(s2 2)/(s2 f l)(sz
3) is lossless
positive real, and one can check that a realization is given by
A matrix P satisfying (5.2.6) is given by
r6
0 3 01
(Procedures for Computing P will be given subsequently.) It is easy to
*If L'(sI - F)-'G = 0 for all.^, then L' exp (F1)G = 0 for all t. Complete controllability
of IF,G] jmplics that L' = 0 by one of the complete controlIability properties.
SEC. 5.3
POSITIVE REAL LEMMA
223
establish, by evaluating the leading minors of P, that P is positive
definite.
Problem Prove via the positive real lemma that positive realness of Z(s) implies
positive realness of Z'(s), [301.
Problem Assuming the positive real Jemma as stated, prove the following exten5.2.2
sion of it: let Z(s) be an m x m matrix of real rational transfer functions
of a complex variables, with Z(m) < w. Let IF,G, H, J ) be a completely
controllable realization of Z(s), with [F,HI not necessarily compIetely
observable. ThenZ(s) is positive real if and only if thereexist real matrices
P,L, and WO,with P nonnegative definite symmetric, such that PF F'P
= - L C PO = H - LW,,J
J' = W k K . [Hinl: Use the fact that
if [F, HI is not completely observable, then there exists a nonsingular T
such that
5.2.1
+
+
,,
with [F, H I ] completely observable.]
Problem With Z(s) pqsitive real and [F, G, H, J f a minimal realization, establish
5.2.3
the existence of transfer-function matrices V(s) satisfying Z'(-s) f Z(s)
= V(s)V'(-s), and possessing realizations with the same F and ff
matrices as Z(s). [Hint:Apply the positive real lemma to Z'(s), known
to be positive real by Problem 5.2.1.1
Problem Let Z(s) be PR with minimal realization [F, G, H, J). Suppose that J
5.2.4
is nonsingular, so that Z-l(s) has a realization, actually minimal,
( F - C J - ' N : GJ-I, -H(J-I): J-I). Use the positive real lemma to
establish that Z-'(s) is PR.
Problem Let Z(s) be an m x m matrix with realization {F, G, H, Jf,and suppose
5.2.5
that there exist real matrices P, L, and Wosatisfying the positive real
Jemma equations, except that P is not necasarily posifive definite,
though it is symmetric. Show that Z ' * ( j o ) + Z ( j w ) 2 0 for all real
o with j o not a pole of any element of Z(s).
5.3
POSITIVE REAL LEMMA: NECESSITY
PROOF BASED ON NETWORK THEORY
RESULTS*
Much of the material in later chapters wiU be concerned with
the synthesis problem, i.e., passing from a prescribed positive real impedance
matrix Z(s) of rational functions to a network of resistor, capacitor, inductor,
ideal transformer, and gyrator elements synthesizing it. Nevertheless, we shall
*This section may be omitted at a first, and even a second, reading.
224
POSITIVE REAL MATRICES
CHAP. 5
base a necessity proof of the positive real lemma.(i.e., a proof of the existence
of P,L,and W,satisfying certain equations) on the assumption that given
a positive real impedance matrix Z(s) of rational functions, there exist networks synthesizing ~ ( s ) .In fact, we shall assume more, viz., that there exist
networks synthesizing Z(s) wing precisely 6[Z(s)] reactive elements. Classical
network theory guarantees the existence of such minimal reactive el&t
syntheses [q. The reader may feel that we are laying the foundations for
acircular argument, in the sense that we are Gsuming a result is true in order
to prove the positive real lemma, when we &e planning to use the positive
real lemma to establish the result. In this restricted context, it is true that
a circular argument is being used. However,.we note that
I. There are classical synthesis procedures that establish the synthesis
result.
2. There are other procedures for proving the positive real lemma, presented in following sections, that do not use the synthesis result.
Accordingly, the reader can regard this section as giving fresh insights into
properties and results that can be independently established.
In this section we give two proofs dueto Layton [7] and Anderson 181.
Alternative proofs, not relying on synthesis results,. will be found in later
sections.
.
.
.
.
Proof Based o n Reactance Extraction
Layton's proof proceeds as follows. Suppose that Z(s) has Z(w)
< oo, and that Nis a network synthesizing Z(s) with 6[Z(s)] reactive elements.
Suppose also that Z(s) is m x m, and G[Z(s)] = n.
In general, both inductor and capacitor elements may beused in N. But
without altering the total number of suchelements, we can conceive first
that all reactive elements have values of 1H or 1 F as the case may be (using
transformer normalizations if necessary), and then that all 1-F capacitors are
replaced by gyrators of unit transimpedance terminated in 1-H inductors, as
shown in Fig. 5.3.1.
This means that N is composed of resistors, transformers, gyrators, and
precisely n inductors.
.
.
FIGURE 5.3.1. Elimination of Capacitors.
SEC. 5.3
POSITIVE REAL
+
LEMMA
225
Resistors
Transformers
FIGURE5.3.2. Redrawing of Network as Nondynamic
Network Terminated in Inductors.
In Fig. 5.3.2 N is redrawn to emphasize the presence of the n inductors.
Thus N is regarded as an (m + n) port No-a coupling network of nondynamic elements, i.e., resistors, transformers, and gyrators-terminated in
unit inductors at each of its last n ports. Let us now study N,.We note the
following preliminary result.
Lemma 5.3.1. With N, as defined above, N, possesses an impedance matrix
the partitioning being such that a,,, is m x m and I,, is n x n.
Further, a minimal statespace real~zationfor Z(s) is provided by
I - Z ~ ~at,.
. -zlZ, zI13;i.e..
Proof. Let i, be a vector whose entries are the instantaneous
currents in the inductors of N (see Fig. 5.3.2). As established in
Chapter 4, the fact that Z(w) < m means that ifworks as a state
vector for a state-space description of N.It is even a state vector
of minimum~ossibledimension, viz., 6[Z(s)]. Hence N has a
description of the form
where i and v are the port current and voltage vectors for N.
Let v, be the vector of inductor voltages, with orientation chosen
226
CHAP. 5
POSITIVE REAL MATRICES
so that v, = di,]dt (see Fig. 5.3.2). Thus
These equations describe the behavior of N , when it is terminated
in unit inductors, i.e., when v, = dil/dt. We claim that they also
describe the performance of N , when it is not terminated in unit
inductors, and v, is not constrained to be di,/dt. For suppose that
i,(O) = 0 and that a current step is applied at the ports of N.
Then on physical grounds the inductors act momentarily as open
circuits, and so v, = GNi(O+) and v = JNi(O+) momentarily, in
the presence or absence of the inductors. Also, suppose that
i(t) = 0 , i@) # 0 . The inductors now behave instantaneously
like current sources i,(O) in parallel with an open circuit. So,
irrespective of the presence or absence of inductors, i.e., irrespective of the mechanism producing i f l ) , v, = F,i,(O) and v =
Hhi,(O) momentarily.
By superposition, it follows that (5.3.3) describes the performance of N, at any fixed instant of time; then, because N , is
constant, the equations describe N, over all time. Finally, by
observing the orientations shown in Fig. 5.3.2, we see that N ,
possesses an impedance matrix, with z,, = JN, z,, = -Hh, z,,
= GN,and ,z, = -FN.
V V \7
With this lemma in hand, the proof of the positive real lemma
is straightforward. Suppose that {F, G, H, J ] is an arbitrary minimal realization of Z(s). Then because (-z,,, z,,, -z',,, z,,)is
another minimal realization, there exists a nonsingular matrix T
such that z,, = -TFT-', z,, = TG, z',, = -(T-')'H, and I,, =
J. The impedance matrix 2,of N , is thus
Now N. is a passive network, and a necessary and sufficient
condition for this is that
z,+z:20
or equivalently that
( I , 4- T'XZ, t Z:)(I,
i
T )2 0
(5.3.4)
where -k denotes the direct sum operation. Writing (5.3.4) out in
POSITIVE REAL LEMMA
SEC. 5.3
terms of F,G, H, and J, and setting P = TT,there results
227
,
'
(Note that because T is nonsingular, P will be positive definite.)
Now any nonnegative definite matrix can be written in the form
MM', with the number of columns of M greater than or equal to
the rank of the matrix. Applying this result here, with the partition M' = [-W, i El, we obtain
JtJ'
(-H+PG)']
- [WOWo
L W .
[-H + PD -(PF + P P )
or
PF
+ F'P = - L C
PG=H-LW,
J + J ' = WLW,
These are none other than the equations of the positive real
lemma, and thus the proof is complete.
VVV
We wish to stress several points. First, short of carrying out a synthesis of
Z(s), there is no technique suggested in the above proof for the computation
ofP, L, and W,, given simply F, G,H, and J. Second, classical network theory
exhibits an infinity of minimal reactive element syntheses of aprescribed Z(s);
the above remarks suggest, in a rough fashion, that to each such synthesis
will correspond a P, in general differing from synthesis to synthesis. Therefore, we might expect an infinity of solutions P, L, W , of the positive real
lemma equations. Third, the assumption of a minimal reactive synthesis
aboveis critical; without it, the existence of T(and thusP)camot be guaranteed.
Proof Based on Energy-Balance Arguments
Now we turn to a second proof that also assumes existence of a minimal
reactive element synthesis.
As before, we assume the existence of a network N
synthesizingZ(s). Moreover, we suppose that Ncontains precisely
S[Z(s)] reactive elements (although we do not require that all
reactive elements be inductive). Since Z(m) < m, there is a statespace description of N with state vector x, whose entries are all
inductor currents or capacitor voltages; assuming that all induc-
228
CHAP. 5
POSITIVE REAL MATRICES
tors are 1 H and capacitors 1 F (using transformer normalizations
if necessary), we observe that the instantaneous energy stored in
N is $YNxw
Now we also assume that there is prescribed a state-variable
description i= Fx Gu, y = H'x Ju, of minimal dimension.
-Therefore,there exists a nonsingular matrix Tsuch that Tx = x,
and then, with P = T'T, a positive definite symmetric matrix, we
have that
+
.
.
+
energy itoredin N
='$fix
(5.3.5)
We shall also compute the power flowing into Nand the rate
of energy dissipation in N. These quantities and the derivative of
(5.3.5) satisfy an energy-balance (more properly, a power-balance)
equation from which we shall derive the positivereal lemma equations. Thus we have, first,
+ Jlc)'u
+ &'(J + J')u (5.3.6)
(Recall that u is the port current and y = H'x + Ju is the port
power flowing into N = (H'x
= x'Hu
voltage of N.) Also, consider the currents in each resistor of N.
At time t these must be linear functions of the state x(t) and input
current u(t). With i,(t) denoting the vector of such currents, we
have therefore
for some mairices A and B. Let R be a diagonal matrix, the
diagonal entries of which are the values of the network resistors.
These values are real and nonnegative. Then
rate pf energy dissipation in N = (Ax
+ Bu)'R(Ax + Bu)
(5.3.7)
Equating the power inflow with the sum of the rate of dissipation and rate of increase of stored energy yields
1
Tu'(J
.
.
+ J')U + x'Hu = (Ax + Bu)'R(Ax
We compute the derivative using f = Fx
+ Bu) + d(Lxjpx)
dt 2
+ Gu and obtain
Since this equation holds for all x and u, we have
POSITIVE REAL LEMMA
SEC. 5.4
229
By setting W,= f l R " l B and L = ,/TA'R''Z, the equations
PF+F'P=-LL',PG=H-LW,,andJ+J'=WkW,are
recovered.
V 'i7 7
Essentially the same comments can be made regarding the second proof
of thissection as were made regarding the first proof. There is little to choose
between them from the point of view of the amount ofinsight they offer into
the positive real lemma, although the second proof does offer the insight of
energy-balance ideas, which is absent from the first proof. Both proofs~assume
that Z(s) is a positive real impedance matrix; Problem 5.3.2 seeks a straightforward generalization to hybrid matrices.
Problem 5.3.1. Define a lossless network as one containing no resistors. Suppose that Z(s) is positive real and the impedance of a lossless network. Suppose also that Z(s) is synthesizable using δ[Z(s)] reactive elements. Show that if {F, G, H, J} is a minimal realization of Z(s), there exists a real positive definite symmetric P such that

    PF + F'P = 0
    PG = H
    J + J' = 0

(Use an argument based on the second proof of this section.) This proves that Z(s) is a lossless positive real impedance in the sense of our earlier definition.
Problem 5.3.2. Suppose that M(s) is an n × n hybrid matrix with minimal realization {F, G, H, J}. Prove that if M(s) is synthesizable by a network with δ[M(s)] reactive elements, then there exist a positive definite symmetric P and matrices L and W_0, all real, such that PF + F'P = -LL', PG = H - LW_0, and J + J' = W_0'W_0.
5.4  POSITIVE REAL LEMMA: NECESSITY PROOF BASED ON A VARIATIONAL PROBLEM*

*This section may be omitted at a first, and even a second, reading.
In this section we present another necessity proof for the positive real lemma; i.e., starting with a rational positive real Z(s) with minimal realization {F, G, H, J}, we prove the existence of a matrix P = P' > 0 and matrices L and W_0 for which

    PF + F'P = -LL'
    PG = H - LW_0        (5.4.1)
    J + J' = W_0'W_0
Of course, Z(∞) < ∞. But to avoid many complications, we shall impose the restriction for most of this section that J + J' be nonsingular. The assumption that J + J' is nonsingular serves to ensure that the variational problem we formulate has no singular solutions. When J + J' is singular, we are forced to go through a sequence of manipulations that effectively allow us to replace J + J' by a nonsingular matrix, so that the variational procedure may then be used.

To begin with, we shall pose and solve a variational problem with the constraint in force that J + J' is nonsingular; the solvability of this problem is guaranteed precisely by the positive real property [or, more precisely, a time-domain interpretation of it involving the impulse response matrix Jδ(t) + H'e^{Ft}G·1(t), with δ(·) the unit impulse and 1(t) the unit step function]. The solution of the variational problem will involve a matrix that essentially gives us the P matrix of (5.4.1); the remaining matrices in (5.4.1) then prove easy to construct once P is known. Theorem 5.4.1 contains the main result used to generate P, L, and W_0.

In subsequent material of this section, we discuss the connection between Eq. (5.4.1) and the existence of a spectral factor W(s) = W_0 + L'(sI - F)^{-1}G of Z'(-s) + Z(s). We shall note frequency-domain properties of W(·) that follow from a deeper analysis of the variational problem. In particular, we shall consider conditions guaranteeing that rank W(s) be constant in Re[s] > 0 and in Re[s] ≥ 0. Finally, we shall remark on the removal of the nonsingularity constraint on J + J'.
Time-Domain Statement of the Positive Real Property
The positive real property of Z(s) is equivalent to the following inequality:

    ∫_{t_0}^{t_1} u'(t)Ju(t) dt + ∫_{t_0}^{t_1} ∫_{t_0}^{t} u'(t)H'e^{F(t-τ)}Gu(τ) dτ dt ≥ 0        (5.4.2)

for all t_0, t_1, and u(·) for which the integrals exist. (It is assumed that t_1 > t_0.) We assume that the reader is capable of deriving this condition from the positive real condition on Z(s). (It may be done, for example, by using a matrix version of Parseval's theorem.) The inequality has an obvious physical interpretation: Suppose that a system with impulse response Jδ(t) + H'e^{Ft}G·1(t) is initially in the zero state and is excited with an input u(·). If y(·) is the corresponding output, ∫_{t_0}^{t_1} y'(t)u(t) dt is the energy flow into the system up to time t_1. This quantity may also be expressed as the integral on the left side of (5.4.2); its nonnegativity corresponds to passivity of the associated system.

In the following, our main application of the positive real property will be via (5.4.2). We also recall from the positive real definition that F can have no eigenvalues with positive real part, nor multiple eigenvalues with zero real part.
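As a numerical sanity check of (5.4.2) (a sketch of ours; Python with scipy assumed), one can drive the system with impulse response Jδ(t) + H'e^{Ft}G·1(t) for z(s) = 1/2 + 1/(s + 1) from the zero state with arbitrary inputs and confirm that the energy integral is nonnegative:

    import numpy as np
    from scipy.signal import lsim

    # Minimal realization of z(s) = 1/2 + 1/(s+1)
    F, G, H, J = [[-1.0]], [[1.0]], [[1.0]], [[0.5]]

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 2000)
    for _ in range(5):
        # random piecewise-constant test input
        u = np.repeat(rng.standard_normal(40), 50)
        _, y, _ = lsim((F, G, H, J), u, t)   # zero initial state
        print(np.trapz(u * y, t) >= 0.0)     # passivity: always True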
Throughout this section, until further notice, we shall use the symmetric matrix R, assumed positive definite and defined by

    R = J + J'        (5.4.3)

(Of course, the positive real property guarantees nonnegativeness of J + J'. Nonsingularity is a special assumption, however.)
A Variational Problem. Consider the following minimization problem. Given the system

    ẋ = Fx + Gu        x(0) = x_0        (5.4.4)

find u(·) so as to minimize

    V(x_0, u(·), t_1) = ∫_0^{t_1} [u'Ru + 2x'Hu] dt        (5.4.5)

[Of course, u(·) is assumed constrained a priori to be such that the integrand in (5.4.5) is actually integrable.]
The positive real property is related to the variational problem in a simple way:

Lemma 5.4.1. The performance index (5.4.5) is bounded below for all x_0, u(·), t_1, independently of u(·) and t_1, if and only if Z(s) = J + H'(sI - F)^{-1}G is positive real.
Proof. First, suppose that Z(·) is not positive real. Then we can prove that V is not bounded below, as follows. Let u_α(·) be a control defined on [0, t_α] taking (5.4.4) to the zero state at an arbitrary fixed time t_α ≥ 0. Existence of u_α(·) follows from complete controllability. Let u_β(·) be a control defined on [t_α, t_β], where t_β and u_β(·) are chosen so that

    ∫_{t_α}^{t_β} [u_β'Ru_β + 2x'Hu_β] dt < 0

Existence of t_β and u_β(·) follows by failure of the positive real property. Denoting the above integral by I, we observe that I < 0. Now define ū(·) as the concatenation of u_α and ku_β, where k is a constant, and set t_1 = t_β. Then

    V(x_0, ū(·), t_1) = ∫_0^{t_α} [u_α'Ru_α + 2x'Hu_α] dt + k²I

The first term on the right side is independent of k, while the second can be made as negative as desired through choice of appropriately large k. Hence V(x_0, ū(·), t_1) is not bounded below independently of k, and V(x_0, u(·), t_1) cannot be bounded below independently of u(·) and t_1.

For the converse, suppose that Z(s) is positive real; we shall prove the boundedness property of the performance index. Let t_2 < 0 and u_γ(·) defined on [t_2, 0] be such that u_γ(·) takes the zero state at time t_2 to the state x_0 at time zero. Then if u(·) is defined on [t_2, t_1], is equal to u_γ on [t_2, 0], and is otherwise arbitrary, we have

    V(x_0, u(·), t_1) = ∫_{t_2}^{t_1} [u'Ru + 2x'Hu] dt - ∫_{t_2}^{0} [u_γ'Ru_γ + 2x'Hu_γ] dt
                      ≥ -∫_{t_2}^{0} [u_γ'Ru_γ + 2x'Hu_γ] dt

since, by the constraint (5.4.2), the first integral on the right of the equality is nonnegative for all u(·) and t_1. The integral on the right of the inequality is independent of u(·) on [0, t_1] and of t_1, but depends on x_0, since u_γ(·) depends on x_0. That is, V(x_0, u(·), t_1) ≥ k(x_0) for some scalar function k taking finite values.        ∇ ∇ ∇
Although the lemma guarantees a lower bound on any performance index value, including therefore the value of the optimal performance index, we have not obtained any upper bound. But such a bound is easy to obtain; reference to (5.4.5) shows that V(x_0, u(t) ≡ 0, t_1) = 0 for all x_0 and t_1, and so the optimal performance index is bounded above by zero for all x_0 and t_1.

Calculation of the optimal performance index proceeds much as in the standard quadratic regulator problem (see, e.g., [9, 10]). Since a review of this material could take us too far afield, we merely state here the consequences of carrying through a standard argument.
The optimal performance index associated with (5.4.5), i.e., min_{u(·)} V(x_0, u(·), t_1) = V°(x_0, t_1), is given by

    V°(x_0, t_1) = x_0'Π(0, t_1)x_0        (5.4.6)

where Π(·, t_1) is a symmetric matrix defined as the solution of the Riccati equation

    -Π̇ = Π(F - GR^{-1}H') + (F' - HR^{-1}G')Π - ΠGR^{-1}G'Π - HR^{-1}H'        Π(t_1, t_1) = 0        (5.4.7)

The associated optimal control, call it u°(·), is given by

    u°(t) = -R^{-1}[G'Π(t, t_1) + H']x(t)        (5.4.8)
Since (5.4.7) is a nonlinear equation, the question arises as to whether it has a solution outside a neighborhood of t_1. The answer is yes; again the reasoning is similar to that for the standard quadratic regulator problem. Because we know that

    k(x_0) ≤ V°(x_0, t_1) = x_0'Π(0, t_1)x_0 ≤ 0

for some scalar function k(·) of x_0, we can ensure that as t_1 increases away from the origin, Π(0, t_1) cannot have any element become unbounded; i.e., Π(0, t_1) exists for all t_1 ≥ 0. Since the constant coefficient matrices in (5.4.7) guarantee Π(t, t_1) = Π(0, t_1 - t), it follows that if t_1 is fixed and (5.4.7) is solved backward in time, Π(t, t_1) exists for all t ≤ t_1, since Π(0, t_1 - t) exists for all t_1 - t ≥ 0.
Having now established the existence of Π(t, t_1), we wish to establish the limiting property contained in the following lemma:

Lemma 5.4.2 [Limiting Property of Π(t, t_1)]. Suppose that Z(·) is positive real, so that the matrix Π(t, t_1) defined by (5.4.7) exists for all t ≤ t_1. Then

    lim_{t_1→∞} Π(t, t_1) = Π̄

exists and is independent of t; moreover, Π̄ satisfies a limiting version of the differential equation part of (5.4.7), viz.,

    Π̄(F - GR^{-1}H') + (F' - HR^{-1}G')Π̄ - Π̄GR^{-1}G'Π̄ - HR^{-1}H' = 0        (5.4.9)
Proof. We show first that Π(0, t_1) is a monotonically decreasing matrix function of t_1. Let u° be a control optimal for initial state x_0 and final time t_1, and let u(·) equal u° on [0, t_1] and equal zero after t_1. Then for arbitrary t_2 > t_1,

    x_0'Π(0, t_2)x_0 ≤ V(x_0, u(·), t_2) = V(x_0, u°(·), t_1) = x_0'Π(0, t_1)x_0

The existence of the lower bound k(x_0) on x_0'Π(0, t_1)x_0, independent of t_1, and the just proved monotonicity then show, after a slightly nontrivial argument, that lim_{t_1→∞} Π(0, t_1) exists. Let this limit be Π̄. Since Π(t, t_1) = Π(0, t_1 - t), as noted earlier, lim_{t_1→∞} Π(t, t_1) = Π̄ for arbitrary fixed t. This proves the first part of the lemma.

Introduce the notation Π(t, t_1, A) to denote the solution of (5.4.7) with boundary condition Π(t_1, t_1) = A, for any A = A'. A standard differential equation property yields that

    Π(t, t_1, Π(t_1, t_2)) = Π(t, t_2)

Also, solutions of (5.4.7) are continuous in the boundary condition, at least in the vicinity of the end point. Thus for t close to t_1,

    lim_{t_2→∞} Π(t, t_1, Π(t_1, t_2)) = Π(t, t_1, lim_{t_2→∞} Π(t_1, t_2)) = Π(t, t_1, Π̄)

But also

    lim_{t_2→∞} Π(t, t_1, Π(t_1, t_2)) = lim_{t_2→∞} Π(t, t_2) = Π̄

So the boundary condition Π(t_1, t_1) = Π̄ leads, in the vicinity of t_1, to the constant solution Π̄. Then Π̄ must be a global solution; i.e., (5.4.9) holds.        ∇ ∇ ∇
The computation of Π̄ via the formula contained in the lemma [i.e., compute Π(t, t_1) for fixed t and a sequence of increasing values of t_1 by repeated solutions of the Riccati equation (5.4.7)] would clearly be cumbersome. But since Π(t, t_1) = Π(0, t_1 - t), it follows that

    lim_{t_1→∞} Π(t, t_1) = lim_{t→-∞} Π(t, t_1)

and so in practice the formula

    Π̄ = lim_{t→-∞} Π(t, t_1)        (5.4.10)

would be more appropriate for computation of Π̄.
Example 5.4.1. Consider the positive real function z(s) = 1/2 + (s + 1)^{-1}, for which a minimal realization is provided by {-1, 1, 1, 1/2}. The Riccati equation (5.4.7) becomes, with R = 1,

    -Π̇ = -4Π - Π² - 1        Π(t_1, t_1) = 0

of which a solution may be found to be

    Π(t, t_1) = [1 - e^{2√3(t - t_1)}] / [(2 - √3)e^{2√3(t - t_1)} - (2 + √3)]

Observe that Π(t, t_1) exists for all t ≤ t_1, because the denominator never becomes zero there. Also, we see that Π(t, t_1) = Π(0, t_1 - t), and

    lim_{t_1→∞} Π(t, t_1) = lim_{t→-∞} Π(t, t_1) = √3 - 2 = Π̄

Finally, notice that Π̄ = √3 - 2 satisfies the limiting equation (5.4.9), which here reads Π̄² + 4Π̄ + 1 = 0.
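The limit (5.4.10) is easy to reproduce numerically. The following sketch (ours; Python with scipy assumed) integrates the scalar Riccati equation of Example 5.4.1 backward from Π(t_1, t_1) = 0 and watches Π(t, t_1) settle at Π̄ = √3 - 2 ≈ -0.2679:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Example 5.4.1: F = -1, G = 1, H = 1, R = J + J' = 1, so (5.4.7) reads
    #   -dPi/dt = -4 Pi - Pi^2 - 1,   Pi(t1, t1) = 0.
    # Integrate backward in time via s = t1 - t, so dPi/ds = -(Pi^2 + 4 Pi + 1).
    def rhs(s, pi):
        return -(pi**2 + 4.0 * pi + 1.0)

    sol = solve_ivp(rhs, (0.0, 20.0), [0.0], rtol=1e-10, atol=1e-12)
    print(sol.y[0, -1])          # -> -0.267949...
    print(np.sqrt(3.0) - 2.0)    # -> -0.267949..., i.e. Pi-bar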
Before spelling out the necessity proof of the positive real lemma, we need one last fact concerning Π̄.

Lemma 5.4.3. If Π̄ is defined as described in Lemma 5.4.2, then Π̄ is negative definite.

Proof. Certainly, Π̄ is nonpositive definite, for as remarked before, x_0'Π(0, t_1)x_0 ≤ 0 for all x_0, and we found Π̄ ≤ Π(0, t_1) in proving Lemma 5.4.2. Suppose that Π̄ is singular. We argue by contradiction. Let x_0 be a nonzero vector such that x_0'Π̄x_0 = 0. Then x_0'Π(0, t_1)x_0 = 0 for any t_1. Now optimal control theory tells us that the minimization of V(x_0, u(·), t_1) is achieved with a unique optimal control [9, 10]. The minimum value of the index is 0, and inspection of the integral used in computing the index shows that u(t) ≡ 0 leads to a value of the index of zero. Hence u(t) ≡ 0 must be the optimal control.
By the principle of optimality, u(t) ≡ 0 is optimal if we restrict attention to a problem over the time interval [t_α, t_1] for any t_1 > t_α > 0, provided we take as x(t_α) the state at time t_α that arises when the optimal control for the original optimal-control problem is applied over [0, t_α]; i.e., x(t_α) = exp(Ft_α)x_0. Therefore, by stationarity, u(t) ≡ 0 is the optimal control if the end point is t_1 - t_α and the initial state exp(Ft_α)x_0. The optimal performance index is also zero; i.e.,

    x_0' exp(F't_α)Π(0, t_1 - t_α) exp(Ft_α)x_0 = x_0' exp(F't_α)Π(t_α, t_1) exp(Ft_α)x_0 = 0

This implies that Π(t_α, t_1) exp(Ft_α)x_0 = 0, in view of the nonpositive definite nature of Π(t_α, t_1).

Now consider the original problem again. As we know, the optimal control is given by

    u°(t) = -R^{-1}[G'Π(t, t_1) + H']x(t)

and is identically zero. Setting x(t) = exp(Ft)x_0 and using the fact that Π(t_α, t_1) exp(Ft_α)x_0 = 0 for all 0 < t_α ≤ t_1, it follows that H' exp(Ft)x_0 is identically zero. This contradicts the minimality of {F, G, H, J}, which demands complete observability of [F, H].        ∇ ∇ ∇
Let us sum up what we have proved so far in a single theorem.
Theorem 5.4.1 (Existence, Construction, and Properties of Π̄). Let Z(s) be a positive real matrix of rational functions of s, with Z(∞) < ∞. Suppose that {F, G, H, J} is a minimal realization of Z(·), with J + J' = R nonsingular. Then there exists a negative definite matrix Π̄ satisfying the equation

    Π̄(F - GR^{-1}H') + (F' - HR^{-1}G')Π̄ - Π̄GR^{-1}G'Π̄ - HR^{-1}H' = 0        (5.4.9)

Moreover, Π̄ = lim_{t_1→∞} Π(t, t_1) = lim_{t→-∞} Π(t, t_1), where Π(·, t_1) is the solution of the Riccati equation

    -Π̇ = Π(F - GR^{-1}H') + (F' - HR^{-1}G')Π - ΠGR^{-1}G'Π - HR^{-1}H'        (5.4.7)

with boundary condition Π(t_1, t_1) = 0.
Necessity Proof of the Positive Real Lemma

The summarizing theorem contains all the material we need to establish the positive real lemma equations (5.4.1), under, we recall, the mildly restrictive condition of nonsingularity of J + J' = R.

We define

    P = -Π̄        L = (Π̄G + H)R^{-1/2}        W_0 = R^{1/2}        (5.4.11)

Observe that P = P' > 0 follows by the negative definite property of Π̄. The first of equations (5.4.1) follows by rearranging (5.4.9) as

    Π̄F + F'Π̄ = (Π̄G + H)R^{-1}(Π̄G + H)'

and making the necessary substitutions. The second and third equations follow immediately from the definitions (5.4.11). We have therefore constructed a set of matrices P, L, and W_0 that satisfy the positive real lemma equations.        ∇ ∇ ∇
Example 5.4.2. For z(s) = 1/2 + (s + 1)^{-1} in Example 5.4.1, we obtained a minimal realization {-1, 1, 1, 1/2} and found the associated Π̄ to be √3 - 2. According to (5.4.11), we should then take

    P = 2 - √3        L = √3 - 1        W_0 = 1

Observe then that

    PF + F'P = -4 + 2√3 = -(√3 - 1)² = -LL'
    PG = 2 - √3 = 1 - (√3 - 1) = H - LW_0

and J + J' = 1 = W_0'W_0.
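The construction (5.4.11) is easily mechanized. The sketch below (ours; Python with scipy assumed) obtains Π̄ by solving (5.4.9) directly, observing that with A = (F - GR^{-1}H')' and Q = -HR^{-1}H' it is a standard algebraic Riccati equation whose stabilizing solution is Π̄. Scipy's solver happens to accept this indefinite Q for the present problem; integrating (5.4.7) as in the previous sketch is the general fallback. The positive real lemma equations are then verified for Example 5.4.2.

    import numpy as np
    from scipy.linalg import solve_continuous_are, sqrtm

    # Example 5.4.2 data (scalars as 1x1 matrices so the code stays general).
    F = np.array([[-1.0]]); G = np.array([[1.0]])
    H = np.array([[1.0]]);  J = np.array([[0.5]])
    R = J + J.T

    # (5.4.9) as a standard ARE:  A'X + XA - XGR^{-1}G'X + Q = 0.
    A = (F - G @ np.linalg.solve(R, H.T)).T
    Q = -H @ np.linalg.solve(R, H.T)
    Pi = solve_continuous_are(A, G, Q, R)

    Rh = np.real(sqrtm(R))                  # symmetric square root R^{1/2}
    P = -Pi
    L = (Pi @ G + H) @ np.linalg.inv(Rh)
    W0 = Rh

    print(Pi)                               # -> [[-0.2679...]] = sqrt(3) - 2
    print(P @ F + F.T @ P + L @ L.T)        # -> 0
    print(P @ G - (H - L @ W0))             # -> 0
    print(J + J.T - W0.T @ W0)              # -> 0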
In providing the above necessity proof of the positive real lemma, we have also provided a constructive procedure leading to one set of matrices P, L, and W_0 satisfying the associated equations. As it turns out though, there is an infinity of solutions of the equations. Later we shall study techniques for their computation.

One might well ask then whether the P, L, and W_0 we have constructed have any distinguishing properties other than the obvious property of constructability according to the technique outlined above. The answer is yes, and the property is best described with the aid of the spectral factor R^{1/2} + R^{-1/2}(Π̄G + H)'(sI - F)^{-1}G of Z'(-s) + Z(s).
Spectral Factor Properties

It will probably be well known to control and network theorists that in general the operation of spectral factorization leads to nonunique spectral factors; i.e., given a positive real Z(s), there will not be a unique W(s) such that

    Z'(-s) + Z(s) = W'(-s)W(s)        (5.4.12)

Indeed, as we shall see later, even if we constrain W(s) to have a minimal realization quadruple of the form {F, G, L, W_0}, i.e., one with the same F and G matrices as Z(s), W(s) is still not uniquely specified. Though the poles of elements of W(s) are uniquely specified, the poles of elements of W^{-1}(s), assuming for the moment nonsingularity of W(s), turn out to be nonuniquely specified.

In both control and network theory there is sometimes interest in constraining the poles of elements of W^{-1}(s); typically it is demanded that entries of W^{-1}(s) be analytic in one of the regions Re[s] > 0, Re[s] ≥ 0, Re[s] < 0, or Re[s] ≤ 0. In the case in which W(s) is singular, the constraint becomes one of demanding constant rank of W(s) in these regions, excluding points where an entry of W(s) has a pole.

Below, we give two theorems, the first guaranteeing analyticity in Re[s] > 0 of the elements of the inverse of that particular spectral factor W(s) formed via the procedure spelled out earlier in this section. The second theorem, by imposing extra constraints on Z(s), yields a condition guaranteeing analyticity in Re[s] ≥ 0.

Theorem 5.4.2. Let Z(s) be a positive real matrix of rational functions of s, with Z(∞) < ∞. Let {F, G, H, J} be a minimal realization with J + J' = R nonsingular. Let Π̄ be the matrix whose construction is described in the statement of Theorem 5.4.1. Then a spectral factor W(s) of Z'(-s) + Z(s) is defined by

    W(s) = R^{1/2} + R^{-1/2}(Π̄G + H)'(sI - F)^{-1}G        (5.4.13)

Moreover, W^{-1}(s) exists in Re[s] > 0; i.e., all entries of W^{-1}(s) are analytic in this region.
The proof will follow the lemma below.

Lemma 5.4.4. With W(s) defined as in (5.4.13), the following equation holds:

    det[sI - F + GR^{-1}(Π̄G + H)'] = det R^{-1/2} det(sI - F) det W(s)        (5.4.14)
Proof.* We have

    det[sI - F + GR^{-1}(Π̄G + H)']
        = det(sI - F) det[I + (sI - F)^{-1}GR^{-1}(Π̄G + H)']
        = det(sI - F) det[I + R^{-1}(Π̄G + H)'(sI - F)^{-1}G]
        = det(sI - F) det R^{-1/2} det[R^{1/2} + R^{-1/2}(Π̄G + H)'(sI - F)^{-1}G]
        = det R^{-1/2} det(sI - F) det W(s)        ∇ ∇ ∇

*This proof uses the identity det[I + AB] = det[I + BA].
This lemma will be used in the proofs of both Theorems 5.4.2 and 5.4.3.
Proof of Theorem 5.4.2. First note that the argument justifying that W(s) is a spectral factor of Z'(-s) + Z(s) depends on:

1. The definition of matrices P, L, and W_0 satisfying the positive real lemma equations in terms of F, G, H, R, and Π̄, as contained in the previous subsection.
2. The observation made in an earlier section that if P, L, and W_0 are matrices satisfying the positive real lemma equations, then W_0 + L'(sI - F)^{-1}G is a spectral factor of Z'(-s) + Z(s).

In view of Lemma 5.4.4 and the fact that F can have no eigenvalues in Re[s] > 0, it follows that we have to prove that all eigenvalues of F - GR^{-1}(Π̄G + H)' lie in Re[s] < 0 in order to establish that W^{-1}(s) exists throughout Re[s] > 0. This we shall now do. For convenience, set

    F̂ = F - GR^{-1}(Π̄G + H)'

From (5.4.7) and (5.4.9) it follows, after a little manipulation, and with the definition V(t) = Π(t, t_1) - Π̄, that

    -V̇ = VF̂ + F̂'V - VGR^{-1}G'V

The matrix V(t) is defined for all t ≤ t_1, and by definition of Π̄ as lim_{t→-∞} Π(t, t_1), we must have lim_{t→-∞} V(t) = 0. By the monotonicity property of Π(0, t_1), it is easily argued that V(t) is nonnegative definite for all t. Setting X(t) = V(t_1 - t), it follows that lim_{t→∞} X(t) = 0, and that

    Ẋ = XF̂ + F̂'X - XGR^{-1}G'X        (5.4.15)
with X(t) nonnegative definite for all t.
Since Π̄ is nonsingular by Theorem 5.4.1, X^{-1}(t) exists, at least near t = 0. Setting Y(t) = X^{-1}(t), it is easy to check that

    Ẏ = -F̂Y - YF̂' + GR^{-1}G'        (5.4.16)

Moreover, since this equation is linear, it follows that Y(t) = X^{-1}(t) exists for all t ≥ 0. The matrix Y(t) must be positive definite, because X(t) is positive definite. Also, for any real vector m, it is known that [m'X^{-1}(t)m][m'X(t)m] ≥ (m'm)², and since lim_{t→∞} X(t) = 0, m'Y(t)m must diverge for any nonzero m.

Using these properties of Y(t), we can now easily establish that F̂ has no eigenvalue with positive real part. Suppose the contrary, and for the moment suppose that F̂ has a real positive eigenvalue λ, with m an associated eigenvector of F̂'. Multiply (5.4.16) on the left by m' and on the right by m; set y(t) = m'Y(t)m and q = m'GR^{-1}G'm. Then

    ẏ = -2λy + q

with y(0) = -m'Π̄^{-1}m ≠ 0. The solution y(t) of this equation is obviously bounded for all t, which is a contradiction. If F̂ has no real positive eigenvalue, but a complex eigenvalue with positive real part, we can proceed similarly to show that there exists a complex nonzero m = m_1 + jm_2, with m_1 and m_2 real, such that m_1'Y(t)m_1 and m_2'Y(t)m_2 are bounded. Again this is a contradiction.        ∇ ∇ ∇
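In Example 5.4.2 terms, F̂ = F - GR^{-1}(Π̄G + H)' = -1 - (√3 - 1) = -√3, which indeed lies in Re[s] < 0; by Lemma 5.4.4 the zeros of det W(s) are exactly the eigenvalues of F̂. A two-line numerical confirmation (ours; numpy assumed):

    import numpy as np

    F = np.array([[-1.0]]); G = np.array([[1.0]]); H = np.array([[1.0]])
    R = np.array([[1.0]]);  Pi = (np.sqrt(3.0) - 2.0) * np.eye(1)

    # Eigenvalues of F-hat = F - G R^{-1}(Pi G + H)' are the zeros of det W(s).
    Fhat = F - G @ np.linalg.solve(R, (Pi @ G + H).T)
    print(np.linalg.eigvals(Fhat))   # -> [-1.732...], i.e. -sqrt(3), in Re[s] < 0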
Let us now attempt to improve on this result, to the extent of guaranteeing existence of W^{-1}(s) in Re[s] ≥ 0, rather than Re[s] > 0. The task is a simple one, with Theorem 5.4.2 and Lemma 5.4.4 in hand. Recall that

    Z(s) + Z'(-s) = W'(-s)W(s)        (5.4.12)

and so, setting s = jω and noting that W(s) is real rational,

    Z(jω) + Z'(-jω) = W'*(jω)W(jω)        (5.4.17)

where the superscript asterisk denotes, as usual, complex conjugation.

If jω is not a pole of any element of Z(·), W(jω) is nonsingular if and only if Z(jω) + Z'(-jω) is positive definite, as distinct from merely nonnegative definite.
If jω is a pole of some element of Z(·) for ω = ω_0, say, i.e., jω_0 is an eigenvalue of F, we would imagine from the definition W(s) = W_0 + L'(sI - F)^{-1}G that W(s) would have some element with a pole at jω_0, and therefore care would need to be exercised in examining W^{-1}(s). This is slightly misleading: as we now show, there is some sort of cancellation between the numerator and the denominator of each element of W(s), which means that W(s) does not have an element with a pole at s = jω_0.

As discussed in Section 5.1, we can write

    Z(s) = Z_1(s) + (As + B)(s² + ω_0²)^{-1}        (5.4.18)

where no element of Z_1(s) has a pole at jω_0, Z_1(s) is positive real, A is real symmetric, and B is real and skew symmetric. Immediately,

    Z(s) + Z'(-s) = Z_1(s) + Z_1'(-s)

Therefore, at s = jω_0, though elements of each summand on the left side behave unboundedly, the sum does not, every element of Z_1(s) and Z_1'(-s) being analytic at s = jω_0. Therefore [see (5.4.17)], no element of W(s) can have a pole at s = jω_0, and W^{-1}(jω) exists again precisely when Z(jω) + Z'(-jω), interpreted as in (5.4.18), is nonsingular.

We can combine Theorem 5.4.2 with the above remarks to conclude
Theorem 5.4.3. Suppose that the same hypotheses apply as in the case of Theorem 5.4.2, save that also Z(jω) + Z'(-jω) is positive definite for all ω, including those ω for which jω is a pole of an element of Z(s). Then the spectral factor W(s) defined in Theorem 5.4.2 is such that W^{-1}(s) exists in Re[s] ≥ 0; i.e., all entries of W^{-1}(s) are analytic in this region.
Example 5.4.3. We have earlier found that for the impedance 1/2 + (s + 1)^{-1}, a spectral factor resulting from computing Π̄ is

    W(s) = (s + √3)/(s + 1)

We observe that W^{-1}(s) is analytic in Re[s] > 0, as required, and in fact in Re[s] ≥ 0. The fact that W^{-1}(s) has no pole on Re[s] = 0 results from the positivity of z(jω) + z(-jω), verifiable as follows:

    z(jω) + z(-jω) = 1 + (jω + 1)^{-1} + (-jω + 1)^{-1} = 1 + 2/(1 + ω²) > 0
Extension to the Case of Singular J + J'

In the foregoing we have given a constructive proof for the existence of solutions to the positive real lemma equations, under the assumption that J + J' is nonsingular. We now wish to make several remarks applying to the case when J + J' is singular. These remarks will be made in response to the following two questions:

1. Given a problem of synthesizing a positive real Z(s) for which Z(∞) + Z'(∞), or J + J', is singular, can simple preliminary synthesis steps be carried out to reduce the problem to one of synthesizing another positive real impedance, Ẑ(s) say, for which Ĵ + Ĵ' is nonsingular?
2. Forgetting the synthesis problem for the moment, can we set up a variational problem for a matrix Z(s) with J + J' singular along the lines of the case where J + J' is nonsingular, compute a P matrix via this procedure, and obtain a spectral factor W(s) with essentially the same properties as above?

The answer to both these questions is yes. We shall elsewhere in this book give a reasoned answer to question 1; the preliminary synthesis steps require techniques such as series inductor and gyrator extraction, cascade transformer extraction, and shunt capacitor extraction, all operations that are standard in the (multiport generalization of the) Foster preamble.
We shall not present a reasoned answer to question 2 here. The reader may consult [11]. Essentially, what happens is that a second positive real impedance Z_1(s) is formed from Z(s), which possesses the same set of P matrix solutions of the positive real lemma equations. The matrix Z_1(s) has a minimal realization with the same F and G as Z(s), but different H and J. Also, Z_1(∞) + Z_1'(∞) is nonsingular.

The Riccati equation technique can be used on Z_1(s), and this yields a P matrix satisfying the positive real lemma equations for Z(s). This matrix P enables a spectral factor W(s) to be defined, satisfying Z(s) + Z'(-s) = W'(-s)W(s), and with W(s) of constant rank in Re[s] > 0.

The construction of Z_1(s) from Z(s) proceeds by a sequence of operations on Z(s) + Z'(-s), which multiply this matrix on the left and right by constant matrices and by diagonal matrices whose nonzero entries are powers of s. The algorithm is straightforward to perform, but can only be stated with a great deal of algebraic detail.

A third approach to the singular problem appears in Problem 5.4.3. This approach is of theoretical interest, but appears to lack any computational utility.
Problem 5.4.1. The impedance z(s) = 1 - 1/(s + 1) is positive real and possesses a minimal realization {-1, 1, -1, 1}. Apply the technique outlined in the statements of Theorems 5.4.1 and 5.4.2 to find a solution of the positive real lemma equations and a spectral factor with "stable" inverse, i.e., an inverse analytic in Re[s] > 0. Is the inverse analytic in Re[s] ≥ 0?
Problem 5.4.2. Suppose that a positive real matrix Z(s) has a minimal realization {F, G, H, J} with J + J' nonsingular. Apply the techniques of this section to Z'(s) to find a V(s) satisfying Z'(-s) + Z(s) = V(s)V'(-s), with V(s) = V_0 + H'(sI - F)^{-1}L and V^{-1}(s) existing in Re[s] > 0.
Problem 5.4.3. Suppose that a positive real matrix Z(s) has a minimal realization {F, G, H, J} with J + J' singular. Let ε be a positive constant, and let Z_ε(s) = Z(s) + εI. Let Π̄_ε be the matrix associated with Z_ε(s) in accordance with Theorem 5.4.1, and let P_ε, L_ε, and W_{0ε} be solutions of the positive real lemma equations for Z_ε(s) obtained from Π̄_ε in accordance with Eq. (5.4.11). Show that P_ε varies monotonically with ε and that lim_{ε→0} P_ε exists. Show also that

    lim_{ε→0} [ L_εL_ε'        L_εW_{0ε}    ]
              [ W_{0ε}'L_ε'   W_{0ε}'W_{0ε} ]

exists; deduce that there exist real matrices P, L, and W_0 which satisfy the positive real lemma equations for Z(s), with P positive definite symmetric. Discuss the analyticity of the inverse of the spectral factor associated with Z(s).
Problem 5.4.4. Suppose that Z(s) has a minimal realization {F, G, H, J} with R = J + J' nonsingular. Let {Q, M, R^{1/2}} be a solution of the positive real lemma equations for Z'(s), obtained by the method of this section. Show [30] that {Q^{-1}, Q^{-1}M, R^{1/2}} is a solution of the positive real lemma equations for Z(s), with the property that all eigenvalues of F - GR^{-1}(H' - G'Q^{-1}) lie in Re[s] > 0, i.e., that the associated spectral factor W(s) has W^{-1}(s) analytic in Re[s] < 0. (Using arguments like those of Problem 5.4.3, this result may be extended to the case of singular R.)
5.5  POSITIVE REAL LEMMA: NECESSITY PROOF BASED ON SPECTRAL FACTORIZATION*

*This section may be omitted at a first, and even a second, reading.

In this section we prove the positive real lemma by assuming the existence of spectral factors defined in transfer-function matrix form. In the case of a scalar positive real Z(s), it is a simple task both to exhibit spectral factors and to use their existence to establish the positive real lemma. In the case of a matrix Z(s), existence of spectral factors is not straightforward to establish, and we shall refer the reader to the literature for the relevant proofs, rather than present them here; we shall prove the positive real lemma by taking existence as a starting point.
Although the matrix case Z(s) obviously subsumes the scalar case, and thus a separate proof for the latter is somewhat superfluous, we shall nevertheless present such a proof on the grounds of its simplicity and motivational content. For the same reason, we shall present a simple proof for the matrix case that restricts the F matrix in a minimal realization of Z(s). This will be in addition to a general proof for the matrix case.

Finally, we comment that it is helpful to restrict initially the poles of elements of Z(s) to lie in the half-plane Re[s] < 0. This restriction will be removed at the end of the section.

The proof to be presented for scalar impedances appeared in [2], the proof for a restricted class of matrix Z(s) in [12], and the full proof, applicable for any Z(s), in [4].
Proof for Scalar Impedances

We assume for the moment that Z(s) has a minimal realization {F, g, h, j} (lowercase letters denoting vectors or scalars). Moreover, we assume that all eigenvalues of F lie in Re[s] < 0. As noted above, this restriction will be lifted at the end of the section. Let p(s) and q(s) be polynomials such that

    Z(s) = q(s)/p(s)        (5.5.1)

and

    p(s) = det(sI - F)        (5.5.2)

As we know, minimality of the realization {F, g, h, j} is equivalent to the polynomials p(s) and q(s) being relatively prime. Now*

    Z(s) + Z(-s) = [q(s)p(-s) + p(s)q(-s)]/[p(s)p(-s)]

*When j appears as jω, it denotes √-1; when j appears by itself, it denotes Z(∞).

It follows from the positive real property that the polynomial λ(ω) = q(jω)p(-jω) + p(jω)q(-jω), which has real coefficients and only even powers of ω, is never negative.

Hence λ(ω), regarded as a polynomial in ω, must have zeros that are either complex conjugate or real and of even multiplicity. It follows that the polynomial ρ(s) = q(s)p(-s) + p(s)q(-s) has zeros that are either purely imaginary and of even multiplicity or are such that for each zero s_0 in Re[s] > 0 there is a zero -s_0 in Re[s] < 0. Form the polynomial r̂(s) = Π(s - s_0), where the product is over (1) all zeros s_0 in Re[s] > 0, and (2) all zeros on Re[s] = 0 to half the multiplicity with which they occur in ρ(s). Then r̂(s)
will be a real polynomial, and ρ(s) = a r̂(s)r̂(-s) for some constant a. Since λ(ω) = a|r̂(jω)|², it follows that a > 0; set r(s) = √a r̂(s) to conclude that

    q(s)p(-s) + p(s)q(-s) = r(s)r(-s)        (5.5.3)

and therefore

    Z(s) + Z(-s) = [r(s)/p(s)][r(-s)/p(-s)]        (5.5.4)

Equation (5.5.4) yields a spectral factorization of Z(s) + Z(-s). From it we can establish the positive real lemma equations.
Because [F, g] is completely controllable, it follows that for some nonsingular T

    TFT^{-1} = [ 0    1    0   ...   0 ]        Tg = [0]
               [ 0    0    1   ...   0 ]             [0]
               [ .    .    .         . ]             [.]
               [ 0    0    0   ...   1 ]             [0]
               [-a_1 -a_2 -a_3 ... -a_n]             [1]

where the a_i and b_i are such that

    q(s)/p(s) = j + (b_ns^{n-1} + b_{n-1}s^{n-2} + ... + b_1)/(s^n + a_ns^{n-1} + ... + a_1)        (5.5.5)

From (5.5.4) and (5.5.5) we see that lim_{s→∞} r(s)/p(s) = √(2j), and thus

    r(s)/p(s) = √(2j) + (r̃_ns^{n-1} + r̃_{n-1}s^{n-2} + ... + r̃_1)/(s^n + a_ns^{n-1} + ... + a_1)

for some set of coefficients r̃_i. Now set r̃ = [r̃_1  r̃_2  ...  r̃_n]', to obtain

    r(s)/p(s) = √(2j) + r̃'[sI - TFT^{-1}]^{-1}(Tg)

Therefore, with l = T'r̃, it follows that {F, g, l, √(2j)} is a realization for r(s)/p(s).
This realization is certainly completely controllable. It is also completely observable. For if not, there exists a polynomial dividing both r(s) and p(s) of degree at least 1 and with all zeros possessing negative real part. From (5.5.3) we see that this polynomial must divide q(s)p(-s), and thus q(s), since all zeros of p(-s) have positive real part. Consequently, q(s) and p(s) have a common factor; this contradicts the earlier stated requirement that p(s) and q(s) be relatively prime. Hence, by contradiction, [F, l] is completely observable.

Now consider (5.5.4). This can be written as

    2j + h'(sI - F)^{-1}g + g'(-sI - F')^{-1}h
        = [√(2j) + l'(sI - F)^{-1}g][√(2j) + g'(-sI - F')^{-1}l]
        = 2j + √(2j) l'(sI - F)^{-1}g + √(2j) g'(-sI - F')^{-1}l
          + g'(-sI - F')^{-1}ll'(sI - F)^{-1}g        (5.5.6)

Define P as the unique positive definite symmetric solution of

    PF + F'P = -ll'        (5.5.7)

The fact that P exists and has the stated properties follows from the lemma of Lyapunov. This definition of P allows us to rewrite the final term on the right side of (5.5.6) as follows:

    g'(-sI - F')^{-1}ll'(sI - F)^{-1}g
        = g'(-sI - F')^{-1}[P(sI - F) + (-sI - F')P](sI - F)^{-1}g
        = g'(-sI - F')^{-1}Pg + g'P(sI - F)^{-1}g

Equation (5.5.6) then yields

    [h - √(2j)l - Pg]'(sI - F)^{-1}g + g'(-sI - F')^{-1}[h - √(2j)l - Pg] = 0

Because all eigenvalues of F are restricted to Re[s] < 0, it follows that the two terms on the left side of this equation are separately zero. (What is the basis of this reasoning?) Then, because [F, g] is completely controllable, it follows that

    Pg = h - √(2j)l        (5.5.8)

Equations (5.5.7), (5.5.8), and the identification W_0 = √(2j) (so that 2j = W_0'W_0) then constitute the positive real lemma equations.        ∇ ∇ ∇
Example 5.5.1. Let z(s) = 1/2 + (s + 1)^{-1}, so that a minimal realization is {-1, 1, 1, 1/2}. Evidently,

    z(s) + z(-s) = 1 + (s + 1)^{-1} + (-s + 1)^{-1} = [(s + √3)(-s + √3)]/[(s + 1)(-s + 1)]

so that a spectral factor is given by w(s) = (s + √3)(s + 1)^{-1}. A minimal realization for w(s) is {-1, 1, √3 - 1, 1}, which has the same F matrix and g vector as the minimal realization for z(s). The equation PF + F'P = -ll' yields simply P = 2 - √3.

Alternatively, we might have taken the spectral factor w(s) = (-s + √3)(s + 1)^{-1}. This spectral factor has minimal realization {-1, 1, √3 + 1, -1}. It still has the same F matrix and g vector as the minimal realization for z(s). The equation PF + F'P = -ll' now yields P = 2 + √3. So we see that the positive real lemma equations do not yield, in general anyway, a unique solution.
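The polynomial manipulations of this proof mechanize easily. The sketch below (ours; Python with numpy assumed) forms ρ(s) = q(s)p(-s) + p(s)q(-s) for z(s) = 1/2 + 1/(s + 1), splits its zeros into ± pairs, and rebuilds r(s) so that (5.5.3) holds. Taking the left-half-plane zero of each pair recovers the factor of Example 5.5.1; taking the right-half-plane zeros instead gives the alternative factor (-s + √3)/(s + 1).

    import numpy as np

    # z(s) = q(s)/p(s): q(s) = 0.5 s + 1.5, p(s) = s + 1
    # (numpy convention: coefficients listed from highest power down).
    q = np.array([0.5, 1.5])
    p = np.array([1.0, 1.0])

    def flip(c):
        # coefficients of c(-s): negate the odd-power coefficients
        return c * (-1.0) ** (np.arange(len(c))[::-1])

    # rho(s) = q(s)p(-s) + p(s)q(-s); its zeros occur in +/- pairs.
    rho = np.polymul(q, flip(p)) + np.polymul(p, flip(q))

    # Build r-hat from one zero of each pair, then scale so that
    # rho(s) = r(s) r(-s) as in (5.5.3).
    rhat = np.real(np.poly(np.roots(rho)[np.roots(rho).real < 0]))
    a = rho[0] / np.polymul(rhat, flip(rhat))[0]
    r = np.sqrt(a) * rhat

    print(r)                               # -> [1.0, 1.732...]: r(s) = s + sqrt(3)
    print(np.polymul(r, flip(r)) - rho)    # -> ~0, confirming (5.5.3)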
Before proceeding to the matrix case, we wish to note three points concerning the proof above:

1. We have exhibited only one solution to the positive real lemma equations. Subsequently, we shall exhibit many.
2. We have exhibited a constructive procedure for obtaining a solution to the positive real lemma equations. A necessary part of the computation is the operation of spectral factorization of a transfer function, performed by reducing the spectral factorization problem to one involving only polynomials.
3. We have yet to permit the prescribed Z(s) to have elements with poles that are pure imaginary, so the above does not constitute a complete proof of the positive real lemma for a scalar Z(s). As earlier noted, we shall subsequently see how to incorporate jω-axis, i.e., pure imaginary, poles.
Matrix Spectral Factorization

The problem of matrix spectral factorization has been considered in a number of places, e.g., [13-16]. Here we shall quote a result as obtained by Youla [14].

Youla's Spectral Factorization Statement. Let Z(s) be an m × m positive real rational matrix, with no element of Z(s) possessing a pure imaginary pole. Then there exists an r × m matrix W(·) of real rational functions of s satisfying

    Z(s) + Z'(-s) = W'(-s)W(s)        (5.5.9)
where r is the normal rank of Z(s) + Z'(-s), i.e., the rank almost everywhere.* Further, W(s) has no element with a pole in Re[s] ≥ 0, W(s) has constant, as opposed to merely normal, rank in Re[s] > 0, and W(s) is unique to within left multiplication by an arbitrary real constant orthogonal matrix.

*For example, the 1 × 1 matrix [(s - 1)/(s + 1)] has normal rank 1 throughout Re[s] > 0, but not constant rank throughout Re[s] > 0; that is, it has rank 1 almost everywhere, but not at s = 1.

We shall not prove this result here. From it we shall, however, prove the existence, via a constructive procedure, of matrices P, L, and W_0 satisfying the positive real lemma equations. This constructive procedure uses as its starting point the matrix W(s) above; in references [14-16], techniques for the construction of W(s) may be found. Elsewhere in the book, construction of P, L, and W_0 will be required. However, other than in this section, the procedure of [14-16] will almost never be needed to carry out this construction, and this is why we omit presentation of these procedures.
Example 5.5.2. Suppose that Z(s) is the 2 × 2 matrix

    Z(s) = [2    2(s - 3)/(s + 1)]
           [2    2               ]

The positive real nature of Z(s) is not hard to check. Also,

    Z(s) + Z'(-s) = [4                  4(s - 1)/(s + 1)]  =  [2               ] [2    2(s - 1)/(s + 1)]
                    [4(s + 1)/(s - 1)   4               ]     [2(s + 1)/(s - 1)]

Thus we take as W(s) the matrix [2    2(s - 1)/(s + 1)]. Notice that W(s) has the correct size, viz., one row and two columns, 1 being the normal rank of Z'(-s) + Z(s). Further, W(s) has no element with a pole in Re[s] ≥ 0, and rank W(s) = 1 throughout Re[s] > 0. The only 1 × 1 real constant orthogonal matrices are +1 and -1, and so W(s) is unique up to a ±1 multiplier.
Before using Youla's result to prove the positive real lemma, we shall isolate an important property of W(s).

Important Property of Spectral Factor Minimal Realizations

In this subsection we wish to establish the following result.
Lemma 5.5.1. Let Z(s) be an m × m positive real matrix with no element possessing a pure imaginary pole, and let W(s) be the spectral factor of Z(s) + Z'(-s) satisfying (5.5.9) and fulfilling the various conditions stated in association with that equation. Suppose that Z(s) has a minimal realization {F, G, H, J}. Then W(s) has a minimal realization {F, G, L, W_0}.

We comment that this lemma is almost, but not quite, immediate when Z(s) is scalar, because of the existence of a companion matrix form for F with g' = [0 0 ... 0 1]. In our proof of the positive real lemma for scalar Z(s), we constructed a minimal realization for W(s) satisfying the constraint of the above lemma, with the aid of the special forms for F and g. Now we present two proofs for the lemma, one simple, but restricting F somewhat.
Simple proof for restricted F [12]. We shall suppose that all eigenvalues of F are distinct. Then

    (sI - F)^{-1} = Σ_{k=1}^{n} F_k/(s - s_k)

where the s_k are eigenvalues of F and all have negative real part. It is immediate from (5.5.9) that the entries of W(s) have only simple poles, and so, for some constant matrices W_k, k = 0, 1, ..., n,

    W(s) = W_0 + Σ_{k=1}^{n} W_k/(s - s_k)

Equation (5.5.9) now implies that

    J + J' = W_0'W_0

and

    H'F_kG = W'(-s_k)W_k

We must now identify L. The constraints on W(s) listed in association with (5.5.9) guarantee that W'(-s_k) has rank r, and thus, being an m × r matrix, possesses a left inverse, which we shall write simply as [W'(-s_k)]^{-1}. Define

    H_k' = [W'(-s_k)]^{-1}H'

so that

    W_k = H_k'F_kG

Now suppose that T is such that TFT^{-1} = diag[s_1, s_2, ..., s_n]. Then F_k = T^{-1}E_kT, where E_k is a diagonal matrix whose only nonzero entry is unity in the (k, k) position. Define

    L' = Σ_{k=1}^{n} H_k'F_k

It follows, on observing that E_kE_j = 0 for k ≠ j and E_k² = E_k, that

    L'F_j = Σ_k H_k'T^{-1}E_kE_jT = H_j'F_j

and so

    L'(sI - F)^{-1}G = Σ_k L'F_kG/(s - s_k) = Σ_k W_k/(s - s_k)

Finally,

    W(s) = W_0 + L'(sI - F)^{-1}G

Actually, we should show that [F, L] is completely observable to complete the proof. Though this is not too difficult, we shall omit the proof here, since this point is essentially covered in the fuller treatment of the next subsection.
Proof for arbitrary F. The proof will fall into three segments:

1. We shall show that δ[W'(-s)W(s)] = 2δ[W(s)].
2. We shall show that δ[Z(s)] = δ[W(s)] and δ[Z(s) + Z'(-s)] = δ[W'(-s)W(s)] = 2δ[Z(s)].
3. We shall conclude the desired result.

First, then, part (1). Note that one of the properties of the degree operator is that δ[AB] ≤ δ[A] + δ[B]. Thus we need to conclude equality. We rely on the following characterization of degree, noted earlier: for a real rational X(s)

    δ[X(s)] = Σ_k δ[X(s); s_k]        (5.5.11)

where the s_k are poles of X(s), and where δ[X(s); s_k] is the maximum multiplicity that s_k possesses as a pole of any minor of X(s).

Now let s_k be a pole of W(s), so that Re[s_k] < 0, and consider δ[W'(-s)W(s); s_k]. We claim that there is a minor of W'(-s)W(s) possessing s_k as a pole with multiplicity δ[W(s); s_k]; i.e., we claim that δ[W'(-s)W(s); s_k] ≥ δ[W(s); s_k]. The argument is as follows. Because W(s) has rank r throughout Re[s] > 0, W'(-s) has rank r throughout Re[s] < 0 and must possess a left inverse, which we shall denote by [W'(-s)]^{-1}. Now

    [W'(-s)]^{-1}W'(-s)W(s) = W(s)

and so

    δ[W(s); s_k] ≤ δ[[W'(-s)]^{-1}W'(-s)W(s); s_k]

Therefore, there exists a minor of [W'(-s)]^{-1}W'(-s)W(s) that has s_k as a pole of multiplicity δ[W(s); s_k]. By the Binet-Cauchy theorem [18], such a minor is a sum of products of minors of [W'(-s)]^{-1} and W'(-s)W(s). Since Re[s_k] < 0, s_k cannot be a pole of any entry of [W'(-s)]^{-1}, and it follows that there is at least one minor of W'(-s)W(s) with a pole at s_k of multiplicity at least δ[W(s); s_k]; i.e.,

    δ[W'(-s)W(s); s_k] ≥ δ[W(s); s_k]

If s_k is a pole of an element of W(s), -s_k is a pole of an element of W'(-s). A similar argument to the above shows that

    δ[W'(-s)W(s); -s_k] ≥ δ[W'(-s); -s_k]

Summing these relations over all s_k and using (5.5.11) applied to W'(-s)W(s), W'(-s), and W(s) yields

    δ[W'(-s)W(s)] ≥ δ[W(s)] + δ[W'(-s)]

But as earlier noted, the inequality must apply the other way. Hence

    δ[W'(-s)W(s)] = δ[W(s)] + δ[W'(-s)] = 2δ[W(s)]        (5.5.12)

since clearly δ[W(s)] = δ[W'(-s)].

Part (2) is straightforward. Because no element of Z(s) can have a pole in common with an element of Z'(-s), application of (5.5.11) yields

    δ[Z(s) + Z'(-s)] = δ[Z(s)] + δ[Z'(-s)] = 2δ[Z(s)]        (5.5.13)

Then (5.5.12), (5.5.13), and the fundamental equality Z(s) + Z'(-s) = W'(-s)W(s) yield simply that δ[Z(s)] = δ[W(s)] and δ[Z(s) + Z'(-s)] = δ[W'(-s)W(s)] = 2δ[Z(s)].
Now let {F_w, G_w, H_w, W_0} be a minimal realization for W(s). It is straightforward to verify by direct computation that a realization for W'(-s)W(s) is provided by

    F_1 = [F_w       0   ]    G_1 = [G_w   ]    H_1 = [H_wW_0]    J_1 = W_0'W_0        (5.5.14)
          [H_wH_w'  -F_w']          [H_wW_0]          [-G_w  ]

(Verification is requested in the problems.) This realization has dimension 2δ[W(s)] and is minimal by (5.5.12). With {F, G, H, J} a minimal realization of Z(s), a realization for Z'(-s) + Z(s) is easily seen to be provided by

    F_2 = [F   0 ]    G_2 = [G]    H_2 = [H ]    J_2 = J + J'        (5.5.15)
          [0  -F']          [H]          [-G]

By (5.5.13), this is minimal. Notice that (5.5.14) and (5.5.15) constitute minimal realizations of the same transfer-function matrix. From (5.5.14) let us construct a further minimal realization. Define P_w, existence being guaranteed by the lemma of Lyapunov, as the unique positive definite symmetric solution of

    P_wF_w + F_w'P_w = -H_wH_w'        (5.5.16)

Set

    T = [I    0]
        [P_w  I]

and define a minimal realization of W'(-s)W(s) by F_3 = TF_1T^{-1}, etc. Thus

    F_3 = [F_w   0  ]    G_3 = [G_w            ]    H_3 = [H_wW_0 + P_wG_w]    J_3 = W_0'W_0        (5.5.17)
          [0   -F_w']          [P_wG_w + H_wW_0]          [-G_w            ]

Now (5.5.15) and (5.5.17) are minimal realizations of the same transfer-function matrix, and we have

    J + J' + H'(sI - F)^{-1}G + G'(-sI - F')^{-1}H
        = W_0'W_0 + (H_wW_0 + P_wG_w)'(sI - F_w)^{-1}G_w + G_w'(-sI - F_w')^{-1}(P_wG_w + H_wW_0)

Now F_w and F both have eigenvalues restricted to Re[s] < 0, and therefore

    H'(sI - F)^{-1}G = (H_wW_0 + P_wG_w)'(sI - F_w)^{-1}G_w        (5.5.18)

The decomposition on the left side of (5.5.18) is known to be minimal, and that on the right side must be minimal, having the same dimension; so, for some T_w,

    T_wF_wT_w^{-1} = F        T_wG_w = G        (T_w^{-1})'(H_wW_0 + P_wG_w) = H        (5.5.19)

Consequently, W(s), which possesses a minimal realization {F_w, G_w, H_w, W_0}, possesses also a minimal realization of the form {F, G, L, W_0} with L = (T_w')^{-1}H_w.        ∇ ∇ ∇
Positive Real Lemma Proof - Restricted Pole Positions

With the lemma above in hand, the positive real lemma proof is very straightforward. With Z(s) positive real and possessing a minimal realization {F, G, H, J}, and with W(s) an associated spectral factor with minimal realization {F, G, L, W_0}, define P as the positive definite symmetric solution of

    PF + F'P = -LL'        (5.5.20)

Then

    W'(-s)W(s) = W_0'W_0 + (W_0'L' + G'P)(sI - F)^{-1}G + G'(-sI - F')^{-1}(LW_0 + PG)

on rewriting G'(-sI - F')^{-1}LL'(sI - F)^{-1}G by means of (5.5.20), just as in the scalar proof. Equating this with Z(s) + Z'(-s), it follows that

    J + J' = W_0'W_0        (5.5.21)

and

    H'(sI - F)^{-1}G = (W_0'L' + G'P)(sI - F)^{-1}G

By complete controllability,

    PG = H - LW_0        (5.5.22)

Equations (5.5.20) through (5.5.22) constitute the positive real lemma equations.        ∇ ∇ ∇
We see that the constructive proof falls into three parts:

1. Computation of W(s).
2. Computation of a minimal realization of W(s) of the form {F, G, L, W_0}.
3. Calculation of P.

Step 1 is carried out in references [14] and [15]. For step 2 we can form any minimal realization of W(s) and then proceed as in the previous subsection to obtain a minimal realization with the correct F and G. Actually, all that is required is the matrix T_w of Eq. (5.5.19). This is straightforward to compute directly from F, G, F_w, and G_w, because (5.5.19) implies

    T_w[G_w  F_wG_w  ...  F_w^{n-1}G_w] = [G  FG  ...  F^{n-1}G]

as direct calculation will show. The matrix [G_w  F_wG_w  ...  F_w^{n-1}G_w] possesses a right inverse because [F_w, G_w] is completely controllable. Thus T_w follows. Step 3 requires the solving of (5.5.20) [note that (5.5.21) and (5.5.22) will be automatically satisfied]. Solution procedures for (5.5.20) are discussed in, e.g., [18], and require the solving of linear simultaneous equations.
Example 5.5.3. Associated with the positive real matrix

    Z(s) = [2    2(s - 3)/(s + 1)]
           [2    2               ]

we found the spectral factor

    W(s) = [2    2(s - 1)/(s + 1)]

Now a minimal realization for Z(s) is readily found to be

    F = [-1]    G = [0    1]    H = [-8    0]    J = [2    2]
                                                     [2    2]

A minimal realization for W(s) is

    F_w = [-1]    G_w = [0    -4]    H_w = [1]    W_0 = [2    2]

We are assured that there exists a realization with the same F and G matrices as the realization for Z(s). Therefore, we seek T_w such that

    T_wG_w = G

whence T_w = -1/4. It follows that L = (T_w')^{-1}H_w = -4, so that W(s) has a minimal realization {-1, [0  1], -4, [2  2]}. The matrix P is given from (5.5.20) as P = 8 and, as may easily be checked, satisfies PG = H - LW_0. Also, W_0'W_0 = J + J' follows simply.
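Steps 2 and 3 of the constructive proof are a few lines of linear algebra. The sketch below (ours; Python with numpy/scipy assumed) computes T_w from the controllability matrices, forms L, and solves the Lyapunov equation (5.5.20) for Example 5.5.3:

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Realizations from Example 5.5.3 (H stored as the n x m matrix with
    # Z(s) = J + H'(sI - F)^{-1} G).
    F = np.array([[-1.0]]);  G = np.array([[0.0, 1.0]])
    H = np.array([[-8.0, 0.0]])
    J = np.array([[2.0, 2.0], [2.0, 2.0]])
    Fw = np.array([[-1.0]]); Gw = np.array([[0.0, -4.0]])
    Hw = np.array([[1.0]]);  W0 = np.array([[2.0, 2.0]])

    # Step 2: T_w from the controllability matrices (right inverse via pinv).
    def ctrb(A, B):
        n = A.shape[0]
        return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

    Tw = ctrb(F, G) @ np.linalg.pinv(ctrb(Fw, Gw))     # -> [[-0.25]]
    L = np.linalg.inv(Tw.T) @ Hw                       # -> [[-4.0]]

    # Step 3: solve PF + F'P = -LL'  (scipy solves A X + X A' = Q).
    P = solve_continuous_lyapunov(F.T, -L @ L.T)       # -> [[8.0]]

    print(P @ G - (H - L @ W0))                        # -> 0:  (5.5.22)
    print(W0.T @ W0 - (J + J.T))                       # -> 0:  (5.5.21)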
Positive Real Lemma Proof - Unrestricted Pole Positions

Previously in this section we have presented a constructive proof for the positive real lemma, based on spectral factorization, for the case when no entry of a prescribed positive real Z(s) possesses a purely imaginary pole. Our goal in this section is to remove the restriction. A broad outline of our approach will be as follows.

1. We shall establish that if the positive real lemma is proved for a particular state-space coordinate basis, it is true for any coordinate basis.
2. We shall take a special coordinate basis that separates the problem of proving the positive real lemma into the problem of proving it for two classes of positive real matrices, one class being that for which we have already proved it in this section, the other being the class of lossless positive real matrices.
3. We shall prove the positive real lemma for lossless positive real matrices.

Taking 1, 2, and 3 together, we achieve a full proof.

1. Effect of a Different Coordinate Basis. Let {F_1, G_1, H_1, J} and {F_2, G_2, H_2, J} be two minimal realizations of a positive real matrix Z(s). Suppose that we know the existence of a positive definite P_1 and matrices L_1 and W_0 for which P_1F_1 + F_1'P_1 = -L_1L_1', P_1G_1 = H_1 - L_1W_0, J + J' = W_0'W_0. There exists T such that TF_2T^{-1} = F_1, TG_2 = G_1, and (T^{-1})'H_2 = H_1. Straightforward calculation will show that if P_2 = T'P_1T and L_2 = T'L_1, then P_2F_2 + F_2'P_2 = -L_2L_2', P_2G_2 = H_2 - L_2W_0, and again J + J' = W_0'W_0.

Therefore, if the positive real lemma can be proved in one particular coordinate basis, i.e., if P, L, and W_0 satisfying the positive real lemma equations can be found for one particular minimal realization of Z(s), it is trivial to observe that the lemma is true in any coordinate basis, or that P, L, and W_0 satisfying the positive real lemma equations can be found for any minimal realization of Z(s).
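The calculation is worth doing once numerically (a sketch of ours; numpy assumed). We start from a lossless realization with F_1 skew, G_1 = H_1 = I, and J = 0, for which P_1 = I, L_1 = 0, W_0 = 0 solve the positive real lemma equations, and push the solution through an arbitrary change of basis:

    import numpy as np

    rng = np.random.default_rng(1)

    # Lossless example: F1 skew, G1 = H1 = I, J = 0, so P1 = I, L1 = 0, W0 = 0.
    F1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    G1 = H1 = np.eye(2)
    P1 = np.eye(2)

    # Second minimal realization: T F2 T^{-1} = F1, T G2 = G1, (T^{-1})' H2 = H1.
    T = rng.standard_normal((2, 2)) + 2.0 * np.eye(2)
    Ti = np.linalg.inv(T)
    F2, G2, H2 = Ti @ F1 @ T, Ti @ G1, T.T @ H1

    # Transported solution P2 = T'P1T, L2 = T'L1 (= 0 here):
    P2 = T.T @ P1 @ T
    print(np.allclose(P2 @ F2 + F2.T @ P2, 0))   # True: P2F2 + F2'P2 = 0
    print(np.allclose(P2 @ G2, H2))              # True: P2G2 = H2
    print(np.all(np.linalg.eigvalsh(P2) > 0))    # True: P2 stays positive definite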
2. Separation of the Problem. We shall use the decomposition property for real rational Z(s), noted earlier in this chapter. Suppose that Z(s) is positive real, with Z(∞) < ∞. Then we may write

    Z(s) = Z_0(s) + Σ_i (A_is + B_i)(s² + ω_i²)^{-1}        (5.5.23)

with, actually,

    δ[Z(s)] = δ[Z_0(s)] + Σ_i δ[(A_is + B_i)(s² + ω_i²)^{-1}]        (5.5.24)

The matrix Z_0(s) is positive real, with no element possessing a pure imaginary pole. Each A_i is nonnegative definite symmetric and each B_i skew symmetric, with (A_is + B_i)(s² + ω_i²)^{-1} lossless and, of course, positive real. Also, A_i and B_i are related to the residue matrix at ω_i. Equation (5.5.24) results from the fact that the elements of the different summands in (5.5.23) have no common poles.

Suppose that we find a minimal realization {F_0, G_0, H_0, J_0} for Z_0(s) and minimal realizations {F_i, G_i, H_i} for each Z_i(s) = (A_is + B_i)(s² + ω_i²)^{-1}. Suppose also that we find solutions P_0, L_0, and W_0 of the positive real lemma equations in the case of Z_0(s), and solutions P_i (the L_i and W_{0i} being, as we shall see, always zero) in the case of Z_i(s). Then it is not hard to check that a realization for Z(s), minimal by (5.5.24), is provided by

    F = diag(F_0, F_1, ..., F_k)    G = [G_0'  G_1'  ...  G_k']'    H = [H_0'  H_1'  ...  H_k']'    J = J_0        (5.5.25)

and a solution of the positive real lemma equations is provided by W_0, L = [L_0'  0  ...  0]', and

    P = diag(P_0, P_1, ..., P_k)        (5.5.26)

Because the P_i are positive definite symmetric, so is P.

Consequently, choice of the special coordinate basis leading to (5.5.25) reduces the problem of proving the positive real lemma to a solved problem [that involving Z_0(s) alone] and the so far unsolved problem of proving the positive real lemma for a lossless Z_i(s).
3. Proof of the Positive Real Lemma for Lossless Z_i(s). Let us drop the subscript i and study

    Z(s) = (As + B)(s² + ω_0²)^{-1}        (5.5.27)

Rewrite (5.5.27) as

    Z(s) = K_0(s - jω_0)^{-1} + K_0*(s + jω_0)^{-1}

where K_0 is nonnegative definite Hermitian. Then if k = rank K_0, Z(s) has degree 2k, and there exist k complex vectors x_i such that

    K_0 = Σ_{i=1}^{k} x_ix_i*'

By using the same decomposition technique as discussed in 2 above, we can replace the problem of finding a solution of the positive real lemma equations for the above Z(s) by one involving the degree 2 positive real matrix

    Z_1(s) = xx*'(s - jω_0)^{-1} + x*x'(s + jω_0)^{-1}

Setting y_1 = (x + x*)/√2, y_2 = (x - x*)/(j√2), it is easy to verify that

    Z_1(s) = (sy_1y_1' + sy_2y_2' + ω_0y_1y_2' - ω_0y_2y_1')(s² + ω_0²)^{-1}

Therefore, Z_1(s) has a minimal realization

    F = [ 0    ω_0]    G = H = [y_1']
        [-ω_0  0  ]            [y_2']

For this special realization, the construction of a solution of the positive real lemma equations is easy. We take simply

    P = I        L = 0        W_0 = 0
Verification that the positive real lemma equations are satisfied is then immediate.
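A direct numerical check of this lossless construction (a sketch of ours; numpy assumed), for an arbitrary complex vector x and an arbitrary ω_0:

    import numpy as np

    omega0 = 2.0
    x = np.array([1.0 + 2.0j, -0.5 + 1.0j])      # arbitrary complex vector

    y1 = (x + x.conj()) / np.sqrt(2.0)            # real by construction
    y2 = (x - x.conj()) / (1j * np.sqrt(2.0))     # real by construction
    G = np.real(np.vstack([y1, y2]))              # rows y1', y2'
    F = np.array([[0.0, omega0], [-omega0, 0.0]])

    # Z1(s) = x x*'/(s - j w0) + x* x'/(s + j w0) should equal G'(sI - F)^{-1}G.
    s = 0.3 + 0.7j                                # arbitrary test point
    Z1 = (np.outer(x, x.conj()) / (s - 1j * omega0)
          + np.outer(x.conj(), x) / (s + 1j * omega0))
    Zr = G.T @ np.linalg.inv(s * np.eye(2) - F) @ G
    print(np.allclose(Z1, Zr))                    # True

    # With P = I, L = 0, W0 = 0:  PF + F'P = F + F' = 0, PG = G = H.
    print(np.allclose(F + F.T, 0))                # True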
When we build up realizations of the degree 2 Z_1(s) and their associated P to obtain a realization for the Z(s) of (5.5.27), it follows still that P is positive definite, and that the L and W_0 of the positive real lemma equations for the Z(s) of (5.5.27) are both zero. A further building up leads to zero L and W_0 for the lossless matrix

    Z(s) = J + Σ_i (A_is + B_i)(s² + ω_i²)^{-1}        (5.5.28)

where the A_i and B_i satisfy the usual constraints, and J is skew. Since (5.5.28) constitutes the most general form of lossless positive real matrix for which Z(∞) < ∞, it follows that we have given a necessity proof for the lossless positive real lemma that does not involve use of the main positive real lemma in the proof.        ∇ ∇ ∇
Problem 5.5.1. Let Z(s) be positive real with Z(∞) < ∞, and let V(s) be such that

    Z(s) + Z'(-s) = V(s)V'(-s)

Prove, by applying Youla's result to Z'(s), that there exists a V(s) with all elements analytic in Re[s] ≥ 0, with a left inverse existing in Re[s] ≥ 0, with rank[Z(s) + Z'(-s)] columns, and unique up to right multiplication by an arbitrary real constant orthogonal matrix. Then use the theorems of the section to conclude that δ[V(s)] = δ[Z(s)] and that V has a minimal realization with the same F and H matrices as Z(s).
Problem 5.5.2. Let Z(s) = 1 + 1/(s + 1). Find a minimal realization and a solution of the positive real lemma equations using spectral factorization. Can you find a second set of solutions to the positive real lemma equations by using a different spectral factor?

Problem 5.5.3. Let P be a solution of the positive real lemma equations. Show that if elements of Z(s) have poles strictly in Re[s] < 0, x'Px is a Lyapunov function establishing asymptotic stability, while if Z(s) is lossless, x'Px is a Lyapunov function establishing stability but not asymptotic stability.

Problem 5.5.4. Let {F_w, G_w, H_w, W_0} be a minimal realization of W(s). Prove that the matrices F_1, G_1, H_1, and J_1 of Eq. (5.5.14) constitute a realization of W'(-s)W(s).
5.6  THE POSITIVE REAL LEMMA: OTHER PROOFS AND APPLICATIONS

In this section we shall briefly note the existence of other proofs of the positive real lemma, and then draw attention to some applications.
Other Proofs of the Positive Real Lemma

The proof of Yakubovic [1] was actually the first proof ever to be given of the lemma. Applying only to a scalar function, it proceeds by induction on the degree of the positive real function Z(s), and a key step in the argument requires the determination of the value of ω minimizing Re[Z(jω)]. The case of a scalar Z(s) of degree n is reduced to the case of a scalar Z(s) of degree n - 1 by a procedure reminiscent of the Brune synthesis procedure (see, e.g., [19]). To the extent that a multiport Brune synthesis procedure is available (see, e.g., [6]), one would imagine that the Yakubovic proof could be generalized to positive real matrices Z(s).

Another proof, again applying only to scalar positive real Z(s), is due to Faurre [12]. It is based on results from the classical moment problem and the separating hyperplane theorem, guaranteeing the existence of a hyperplane separating two disjoint closed convex sets. How one might generalize this proof to the matrix Z(s) case is not clear; this is unfortunate, since, as Section 5.5 shows, the scalar case is comparatively easily dealt with by other techniques, while the matrix case is difficult.

The Yakubovic proof implicitly contains a constructive procedure for obtaining solutions of the positive real lemma equations, although it would be very awkward to apply. The Faurre proof is essentially an existence proof. Popov's proof, see [5], is in rough terms a variant on the proof of Section 5.5.
Applications of the Positive Real Lemma

Generalized Positive Real Matrices. There arise in some systems theory problems, including the study of instability [20], real rational functions or square matrices of real rational functions Z(s) with Z(∞) < ∞ for which

    Z(jω) + Z'(-jω) ≥ 0        (5.6.1)

for all real ω with jω not a pole of any element of Z(s). Poles of elements of Z(·) are unrestricted. Such matrices have been termed generalized positive real matrices, and a "generalized positive real lemma" is available (see [21]). The main result is as follows:

Theorem 5.6.1. Let Z(·) be an m × m matrix of rational functions of a complex variable s such that Z(∞) < ∞. Let {F, G, H, J} be a minimal realization of Z(s). Then a necessary and sufficient condition for (5.6.1) to hold for all real ω with jω not a pole of any element of Z(·) is that there exist real matrices P = P', L, and W_0, with P nonsingular, such that

    PF + F'P = -LL'
    PG = H - LW_0        (5.6.2)
    J + J' = W_0'W_0
Problem 5.2.5 required proof that (5.6.2) implies (5.6.1). Proof of the converse takes a fair amount of work (see [21]). Problem 5.6.1 outlines a somewhat speedier proof than that of [21].

The use of Theorem 5.6.1 in instability problems revolves around construction of a Lyapunov function including the term x'Px; this Lyapunov function establishes instability, as discussed in detail in [20].
Inverse Problem of Linear Optimal Control. This problem, treated in [9, 22, 23], is stated as follows. Suppose that there is associated a linear feedback law u = -K'x with a system ẋ = Fx + Gu. When can one say that this law arises through minimization of a performance index of the type

    V = ∫_0^∞ (u'u + x'Qx) dt        (5.6.3)

where Q is some nonnegative definite symmetric matrix? Leaving aside the inverse problem for the moment, we recall a property of the solution of the basic optimization problem. This is that if [F, G] is completely controllable, the control law K exists, and ẋ = (F - GK')x is asymptotically stable. Also,

    [I + G'(-sI - F')^{-1}K][I + K'(sI - F)^{-1}G] ≥ I        on s = jω        (5.6.4)

The inequality is known as the return difference inequality, since I + K'(sI - F)^{-1}G is the return difference matrix associated with the closed-loop system. The solution of the inverse problem can now be stated. The control law K results from a quadratic loss problem if the inequality part of (5.6.4) holds, if [F, G] is completely controllable, and if ẋ = (F - GK')x is asymptotically stable.

Since the inequality part of (5.6.4) is essentially a variant on the positive real condition, it is not surprising that this result can be proved quite efficiently with the aid of the positive real lemma. Broadly speaking, one uses the lemma to exhibit the existence of a matrix L for which

    [I + G'(-sI - F')^{-1}K][I + K'(sI - F)^{-1}G] = I + G'(-sI - F')^{-1}LL'(sI - F)^{-1}G        (5.6.5)

(Problem 5.6.2 requests a proof of this fact.) Then one shows that with Q = LL' in (5.6.3), the associated control law is precisely K. It is at this point that the asymptotic stability of ẋ = (F - GK')x is used.
Circle Criterion [24, 25]. Consider the single-input, single-output system

    ẋ = Fx + gu        y = h'x        (5.6.6)

and suppose that u is obtained by the feedback law

    u = -k(t)y        (5.6.7)

The circle criterion is as follows [25]:

Theorem 5.6.2. Let [F, g, h] be a minimal realization of W(s) = h'(sI - F)^{-1}g, and suppose that F has p eigenvalues with positive real parts and no eigenvalues with zero real part. Suppose that k(·) is piecewise continuous and that α + ε ≤ k(t) ≤ β - ε for some small positive ε. Then the closed-loop system is asymptotically stable provided that

1. The Nyquist plot of W(s) does not intersect the circle with diameter determined by (-α^{-1}, 0) and (-β^{-1}, 0) and encircles it exactly p times in the counterclockwise direction, or, equivalently,
2. The function

    Z(s) = [1 + βW(s)]/[1 + αW(s)]        (5.6.8)

is positive real, with Re Z(jω) > 0 for all finite real ω and possessing all poles in Re[s] < 0.

Part (1) of the theorem statement in effect explains the origin of the term circle criterion. The condition that the Nyquist plot of W(s) stay outside a certain circle is of course a nonnegativity condition of the form |W(jω) - c| ≥ ρ or [W(-jω) - c][W(jω) - c] - ρ² ≥ 0, and part (2) of the theorem serves to convert this to a positive real condition. Connection between (1) and (2) is requested in the problems, as is a proof of the theorem based on part (2). The idea of the proof is to use the positive real lemma to generate a Lyapunov function that will establish stability. This was done in [25, 26]. It should be noted that the circle criterion can be applied to infinite-dimensional systems, as well as finite-dimensional systems (see [24]). The easy Lyapunov approach to proving stability then fails.
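For a concrete check of condition (2) (a sketch of ours; numpy assumed), take W(s) = 1/(s + 1) with illustrative sector limits α = 0, β = 1. Then Z(s) = 1 + W(s) = (s + 2)/(s + 1), whose poles lie in Re[s] < 0, and a frequency sweep confirms Re Z(jω) > 0:

    import numpy as np

    # W(s) = 1/(s+1); sector bounds alpha = 0, beta = 1 (illustrative values).
    alpha, beta = 0.0, 1.0
    w = np.logspace(-3, 3, 1000)
    W = 1.0 / (1j * w + 1.0)
    Z = (1.0 + beta * W) / (1.0 + alpha * W)

    print(np.min(Z.real))    # -> about 1.0; strictly positive for all w,
                             # so condition (2) of Theorem 5.6.2 holds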
Popov Criterion [24,27,28]. The Popov criterion is a first cousin to the
circle criterion. Like the circle criterion, it applies to infinite-dimensional as
well as finite-dimensional systems, with finite-dimensional systems being
readily tackled via Lyapunov theory. Instead of (5.6.6). we will, however,
have
i=Fx+Gu
y=H'x
(5.6.9)
We thereby permit multiple inputs and outputs. We shall assume that the
dimensions of u and y are the same, and that
POSITIVE REAL MATRICES
262
CHAP. 5
with
E ~ & < k ,-E
(5.6.1 1)
Y*
for some set of positive constants k,, for some small positive 6, and for all
nonzero y,. The reader should note that we are now considering nonlinear,
but not time-varying feedback. In the circle criterion, we considered timevarying feedback (which, in the particular context, included nonlinear feedback).
The basic theorem, to which extensions are possible, is as follows [28]:

Theorem 5.6.3. Suppose that there exist diagonal matrices A = diag{a_1, ..., a_m} and B = diag{b_1, ..., b_m}, where m is the dimension of u and y, a_i ≥ 0, b_i ≥ 0, a_i + b_i > 0, -a_i/b_i is not a pole of any of the ith row elements of H'(sI - F)^{-1}G, and

Z(s) = AK^{-1} + (A + sB)H'(sI - F)^{-1}G          (5.6.12)

is positive real, where K = diag{k_1, k_2, ..., k_m}. Then the closed-loop system defined by (5.6.9) and (5.6.10) is stable.
The idea behind the proof of this theorem is straightforward. One obtains a minimal realization for Z(s), and assumes that solutions (P, L, W_0) of the positive real lemma equations are available. A Lyapunov function

V(x) = x'Px + 2 Σ_i b_i ∫_0^{y_i} f_i(σ) dσ          (5.6.13)

is adopted, from which stability can be deduced. Details are requested in Problem 5.6.5.
Spectral Factorization by Algebra [29]. We have already explained that intimately bound up with the positive real lemma is the existence of a matrix W(s) satisfying Z'(-s) + Z(s) = W'(-s)W(s). Here, Z(s) is of course positive real. In contexts other than network theory, a similar decomposition is important, e.g., in Wiener filtering [15, 16]. One is given an m × m matrix Φ(s) with Φ'(-s) = Φ(s) and Φ(jω) nonnegative definite for all real ω. It is assumed that no element of Φ(s) possesses poles on Re[s] = 0, and that Φ(∞) < ∞. One is required to find a matrix W(s) satisfying

Φ(s) = W'(-s)W(s)

Very frequently, W(s) and W^{-1}(s) are not permitted to have elements with poles in Re[s] ≥ 0.
When Φ(s) is rational, one may regard the problem of determining W(s)
as one of solving the positive real lemma equations, a topic discussed in much detail subsequently. To see this, observe that Φ(s) may be written as

Φ(s) = Z'(-s) + Z(s)

where elements of Z(s) are analytic in Re[s] ≥ 0. The writing of Φ(s) in this way may be achieved by doing a partial fraction expansion of each entry of Φ(s) and grouping those summands together that are analytic in Re[s] ≥ 0 to yield an entry of Z(s). The remaining summands give the corresponding entry of Z'(-s). Once Z(s) is known, a minimal realization {F, G, H, J} of Z(s) may be found. [Note that Φ(∞) < ∞ implies Z(∞) < ∞.] The problem of finding W(s) with the properties described is then, as we know, a problem of finding a solution of the positive real lemma equations. Shortly, we shall study algebraic procedures for this task.
Problem 5.6.1 (Necessity proof for Theorem 5.6.1): Let K be a matrix such that F_K = F - GK' possesses all its eigenvalues in Re[s] < 0; existence of K is guaranteed by complete controllability of [F, G].
(a) Show that if X(s) = I - K'(sI - F_K)^{-1}G, then Z(s)X(s) = J + (H - KJ')'(sI - F_K)^{-1}G.
(b) Show that X'(-s)[Z(s) + Z'(-s)]X(s) = Y'(-s) + Y(s), where Y(s) = J + H_K'(sI - F_K)^{-1}G with H_K defined by

H_K = H - K(J + J') - P_K G
P_K F_K + F_K'P_K = -(KH' + HK' - KJK' - KJ'K')

(c) Observe that Y(s) is positive real. Let P_Y, L_Y, and W_Y be solutions of the positive real lemma equations associated with Y(s). (Invoke the result of Problem 5.2.2 if [F_K, H_K] is not completely observable.) Show that P = P_K + P_Y, L = L_Y + KW_Y, and W_Y satisfy (5.6.2).
(d) Suppose that P is singular. Change the coordinate basis to force P = I ∔ (-I) ∔ 0 (a direct sum of identity, negative identity, and zero blocks). Then examine (5.6.2) to show that certain submatrices of F, L, and H are zero, and conclude that [F, H] is unobservable. Conclude that P must be nonsingular.
Problem 5.6.2 (Inverse problem of optimal control):
(a) Start with (5.6.4) and, setting F_K = F - GK', show that

I - [I - G'(-sI - F_K')^{-1}K][I - K'(sI - F_K)^{-1}G] ≥ 0

(b) Take P_K as the solution of P_K F_K + F_K'P_K = -KK', and H_K = K - P_K G. Show that H_K'(sI - F_K)^{-1}G is positive real.
(c) Apply the positive real lemma to deduce the existence of a spectral factor of H_K'(sI - F_K)^{-1}G + G'(-sI - F_K')^{-1}H_K of the form L'(sI - F_K)^{-1}G, and then trace the manipulations of (a) and (b) backward to deduce (5.6.5). (Invoke the result of Problem 5.2.2 if [F_K, H_K] is not completely observable.)
(d) Suppose that the optimal control associated with (5.6.3) when Q = LL' is u = -K_0'x; assume that the equality in (5.6.4) is satisfied with K replaced by K_0 and that F - GK_0' has all eigenvalues with negative real parts. It follows, with the aid of (5.6.5), that

[I + K_0'(sI - F)^{-1}G][I + K'(sI - F)^{-1}G]^{-1} = [I + G'(-sI - F')^{-1}K_0]^{-1}[I + G'(-sI - F')^{-1}K]

Show that this equality leads to

K_0'(sI - F)^{-1}G = K'(sI - F)^{-1}G

Conclude that K = K_0.
Problem 5.6.3: Show the equivalence between parts (1) and (2) of Theorem 5.6.2. (You may have to recall the standard Nyquist criterion, which will be found in any elementary control systems textbook.)
Problem 5.6.4 (Circle criterion): With Z(s) as defined in Theorem 5.6.2, show that Z(s) has a minimal realization {F - βgh', g, -(β/α)(β - α)h, β/α}. Write down the positive real lemma equations, and show that x'Px is a Lyapunov function for the closed-loop system defined by (5.6.6) and (5.6.7). (Reference [26] extends this idea to provide a result incorporating a degree of stability in the closed-loop system.)
Problem 5.6.5: Show that the matrix Z(s) in Eq. (5.6.12) has a minimal realization {F, G, HA + F'HB, AK^{-1} + BH'G}. Verify that V(x) as defined in (5.6.13) is a Lyapunov function establishing stability of the closed-loop system defined by (5.6.9) and (5.6.10).
REFERENCES

[1] V. A. Yakubovich, "The Solution of Certain Matrix Inequalities in Automatic Control Theory," Doklady Akademii Nauk SSSR, Vol. 143, 1962, pp. 1304-1307.
[2] R. E. Kalman, "Lyapunov Functions for the Problem of Lur'e in Automatic Control," Proceedings of the National Academy of Sciences, Vol. 49, No. 2, Feb. 1963, pp. 201-205.
[3] R. E. Kalman, "On a New Characterization of Linear Passive Systems," Proceedings of the First Allerton Conference on Circuit and System Theory, University of Illinois, Nov. 1963, pp. 456-470.
[4] B. D. O. Anderson, "A System Theory Criterion for Positive Real Matrices," SIAM Journal of Control, Vol. 5, No. 2, May 1967, pp. 171-182.
[5] V. M. Popov, "Hyperstability and Optimality of Automatic Systems with Several Control Functions," Rev. Roumaine Sci. Tech.-Electrotechn. et Energ., Vol. 9, No. 4, 1964, pp. 629-690.
[6] R. W. Newcomb, Linear Multiport Synthesis, McGraw-Hill, New York, 1966.
[7] D. M. Layton, "Equivalent Representations for Linear Systems, with Application to N-Port Network Synthesis," Ph.D. Dissertation, University of California, Berkeley, August 1966.
[8] B. D. O. Anderson, "Impedance Matrices and Their State-Space Descriptions," IEEE Transactions on Circuit Theory, Vol. CT-17, No. 3, Aug. 1970, p. 423.
[9] B. D. O. Anderson and J. B. Moore, Linear Optimal Control, Prentice-Hall, Englewood Cliffs, N.J., 1971.
[10] A. P. Sage, Optimum Systems Control, Prentice-Hall, Englewood Cliffs, N.J., 1968.
[11] B. D. O. Anderson and L. M. Silverman, Multivariable Covariance Factorization, to appear.
[12] P. Faurre, "Representation of Stochastic Processes," Ph.D. Dissertation, Stanford University, Feb. 1967.
[13] N. Wiener and L. Masani, "The Prediction Theory of Multivariate Stochastic Processes," Acta Mathematica, Vol. 98, Pts. 1 and 2, June 1958.
[14] D. C. Youla, "On the Factorization of Rational Matrices," IRE Transactions on Information Theory, Vol. IT-7, No. 3, July 1961, pp. 172-189.
[15] M. C. Davis, "Factoring the Spectral Matrix," IEEE Transactions on Automatic Control, Vol. AC-8, No. 4, Oct. 1963, pp. 296-305.
[16] E. Wong and J. B. Thomas, "On the Multidimensional Prediction and Filtering Problem and the Factorization of Spectral Matrices," Journal of the Franklin Institute, Vol. 272, No. 2, Aug. 1961, pp. 87-99.
[17] A. C. Riddle and B. D. O. Anderson, "Spectral Factorization, Computational Aspects," IEEE Transactions on Automatic Control, Vol. AC-11, No. 4, Oct. 1966, pp. 764-765.
[18] F. R. Gantmacher, The Theory of Matrices, Chelsea, New York, 1959.
[19] E. A. Guillemin, Synthesis of Passive Networks, Wiley, New York, 1957.
[20] R. W. Brockett and H. B. Lee, "Frequency Domain Instability Criteria for Time-Varying and Nonlinear Systems," Proceedings of the IEEE, Vol. 55, No. 5, May 1967, pp. 604-619.
[21] B. D. O. Anderson and J. B. Moore, "Algebraic Structure of Generalized Positive Real Matrices," SIAM Journal on Control, Vol. 6, No. 4, 1968, pp. 615-624.
[22] R. E. Kalman, "When Is a Linear Control System Optimal?", Journal of Basic Engineering, Transactions of the American Society of Mechanical Engineers, Series D, Vol. 86, March 1964, pp. 1-10.
[23] B. D. O. Anderson, The Inverse Problem of Optimal Control, Rept. No. SEL-66-038 (TR No. 6560-3), Stanford Electronics Laboratories, Stanford, Calif., May 1966.
[24] G. Zames, "On the Input-Output Stability of Time-Varying Nonlinear Feedback Systems, Parts I and II," IEEE Transactions on Automatic Control, Vol. AC-11, No. 2, April 1966, pp. 228-238, and No. 3, July 1966, pp. 465-476.
[25] R. W. Brockett, Finite Dimensional Linear Systems, Wiley, New York, 1970.
[26] B. D. O. Anderson and J. B. Moore, "Lyapunov Function Generation for a Class of Time-Varying Systems," IEEE Transactions on Automatic Control, Vol. AC-13, No. 2, April 1968, p. 205.
[27] V. M. Popov, "Absolute Stability of Nonlinear Systems of Automatic Control," Automation and Remote Control, Vol. 22, No. 8, March 1962, pp. 857-875.
[28] J. B. Moore and B. D. O. Anderson, "A Generalization of the Popov Criterion," Journal of the Franklin Institute, Vol. 285, No. 6, June 1968, pp. 488-492.
[29] B. D. O. Anderson, "An Algebraic Solution to the Spectral Factorization Problem," IEEE Transactions on Automatic Control, Vol. AC-12, No. 4, Aug. 1967, pp. 410-414.
[30] B. D. O. Anderson and J. B. Moore, "Dual Form of a Positive Real Lemma," Proceedings of the IEEE, Vol. 55, No. 10, Oct. 1967, pp. 1749-1750.
Computation of Solutions of the Positive Real Lemma Equations*

*This chapter may be omitted at a first reading.

6.1 INTRODUCTION

If Z(s) is an m × m positive real matrix of rational functions with minimal realization {F, G, H, J}, the positive real lemma guarantees the existence of a positive definite symmetric matrix P, and real matrices L and W₀, such that

PF + F'P = -LL'
PG = H - LW₀
J + J' = W₀'W₀          (6.1.1)

The logical question then arises: How may we compute all solutions {P, L, W₀} of (6.1.1)?
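Before turning to computational procedures, it is convenient to be able to check a candidate solution triple numerically. The following Python fragment is a minimal sketch of our own (it is not part of the text); the numerical values are taken from the scalar example Z(s) = 1/2 + 1/(s + 1), which is worked repeatedly later in the chapter. The fragment simply evaluates the residuals of the three equations (6.1.1):

import numpy as np

def pr_lemma_residuals(F, G, H, J, P, L, W0):
    # Residuals of the three positive real lemma equations (6.1.1)
    r1 = P @ F + F.T @ P + L @ L.T
    r2 = P @ G - (H - L @ W0)
    r3 = (J + J.T) - W0.T @ W0
    return r1, r2, r3

# Scalar example Z(s) = 1/2 + 1/(s + 1), with the solution triple
# derived in Sections 6.2-6.4: P = 2 - sqrt(3), L = sqrt(3) - 1, W0 = 1
F = np.array([[-1.0]]); G = np.array([[1.0]]); H = np.array([[1.0]])
J = np.array([[0.5]])
P = np.array([[2 - np.sqrt(3)]])
L = np.array([[np.sqrt(3) - 1]]); W0 = np.array([[1.0]])
print([float(np.max(np.abs(r))) for r in pr_lemma_residuals(F, G, H, J, P, L, W0)])

All three residuals print as zero to machine precision.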
We shall be concerned in this chapter with answering this question. We do not presume a knowledge on the part of the reader of the necessity proofs of the positive real lemma covered previously; we shall, however, quote results developed in the course of these proofs that assist in the computation process.
In developing techniques for the solution of (6.1.1), we shall need to derive theoretical results that we have not met earlier. The reader who is interested
solely in the computation of P, L, and W₀ satisfying (6.1.1), as distinct from the theoretical background to the computation, will be able to skip the proofs of the results.
A preliminary approach to solving (6.1.1) might be based on the following line of reasoning. We recall that if P, L, and W₀ satisfy (6.1.1), then the transfer-function matrix W(s) defined by

W(s) = W₀ + L'(sI - F)^{-1}G          (6.1.2)

satisfies the equation

Z'(-s) + Z(s) = W'(-s)W(s)          (6.1.3)

Therefore, we might be tempted to seek all solutions W(s) of (6.1.3) that are representable in the form (6.1.2), and somehow generate a matrix P such that P, together with the L and W₀ following from (6.1.2), might satisfy (6.1.1). There are two potential difficulties with this approach. First, it appears just as difficult to generate all solutions of (6.1.3) that have the form of (6.1.2) as it is to solve (6.1.1) directly. Second, even knowing L and W₀, it is sometimes very difficult to generate a matrix P satisfying (6.1.1). [Actually, if every element of Z(s) has poles lying in Re[s] < 0, there is no problem; the only difficulty arises when elements of Z(s) have pure imaginary poles.] On the grounds then that the generation of all W(s) satisfying (6.1.3) that are expressible in the form of (6.1.2) is impractical, we seek other ways of solving (6.1.1).
The procedure we shall adopt is based on a division of the problem of solving (6.1.1) into two subproblems:

1. Find one solution triple of (6.1.1).
2. Find a technique for obtaining all other solutions of (6.1.1) given this one solution. This subproblem, 2, proves to be much easier to solve than finding directly all solutions of (6.1.1).

The remaining sections of this chapter deal separately with subproblems 1 and 2.
We shall note two distinct procedures for solving subproblem 1. The first procedure requires that J + J' be nonsingular (a restriction discussed below), and deduces one solution of (6.1.1) by solving a matrix Riccati differential equation or by solving an algebraic quadratic matrix equation. In the next section we state how the solutions of (6.1.1) are defined in terms of the solutions of a Riccati equation or quadratic matrix equation, in Section 6.3 we indicate how to solve the Riccati equation, and in Section 6.4 how to solve the algebraic quadratic matrix equation. The material of Section 6.2 will be presented without proof, the proof being contained earlier in a necessity proof of the positive real lemma.
The second (and less favored) procedure for solving subproblem 1 uses a matrix spectral factorization. In Section 6.5 we first find a solution of (6.1.1) for the case when the eigenvalues of F are restricted to lying in the half-plane Re[s] < 0, and then for the case when all eigenvalues of F lie on Re[s] = 0; finally, we tie these two cases together to consider the general case, permitting eigenvalues of F both in Re[s] < 0 and on Re[s] = 0. Much of this material is based on the necessity proof for the positive real lemma via spectral factorization, which was discussed earlier. Knowledge of this earlier material is however not essential for an understanding of Section 6.5.
Subproblem 2 is discussed in Section 6.6. We show that this subproblem is equivalent to the problem of finding all solutions of a quadratic matrix inequality; this inequality has sufficient structure to enable a solution. There is however a restriction that must be imposed in solving subproblem 2, identical with the restriction imposed in providing the first solution to subproblem 1: the matrix J + J' must be nonsingular.
We have already commented on this restriction in one of the necessity proofs of the positive real lemma. Let us however repeat one of the two remarks made there. This is that, given the problem of synthesizing a positive real Z(s) for which J + J' is singular, there exists (as we shall later see) a sequence of elementary manipulations on Z(s), corresponding to minor synthesis steps, such as the shunt extraction of capacitors, which reduces the problem of synthesizing Z(s) to one of synthesizing a second positive real matrix, Z₁(s) say, for which J₁ + J₁' is nonsingular. These elementary manipulations require no difficult calculations, and certainly not the solving of (6.1.1). Since our goal is to use the positive real lemma as a synthesis tool, in fact as a replacement for the difficult parts of classical network synthesis, we are not cheating if we choose to restrict its use to a subclass of positive real matrices, the synthesis of which is essentially equivalent to the synthesis of an arbitrary positive real matrix.
Summing up, one solution to subproblem 1 (the determination of one solution triple for (6.1.1)) is contained in Sections 6.2-6.4. Solution of a matrix Riccati equation or an algebraic quadratic matrix equation is required; frequency-domain manipulations are not. A second solution to subproblem 1, based on frequency-domain spectral factorization, is contained in Section 6.5. This section also considers separately the important special case of F possessing only pure imaginary eigenvalues, which turns out to correspond with Z(s) being lossless. Section 6.6 offers a solution to subproblem 2. Finally, although the material of Sections 6.2-6.4 and Section 6.6 applies to a restricted class of positive real matrices, the restriction is inessential.
The closest references to the material of Sections 6.2-6.4 are [1] for the material dealing with the Riccati equation (though this reference considers a related nonstationary problem), and the report [2]. Reference [3] considers
the solution of (6.1.1) via an algebraic quadratic matrix equation. The material in Section 6.5 is based on [4], which takes as its starting point a theorem (stated in Section 6.5) from [5]. The material in Section 6.6 dealing with the solution of subproblem 2 is drawn from [6].
Problem 6.1.1: Suppose that Z(s) + Z'(-s) = W'(-s)W(s), Z(s) being positive real with minimal realization {F, G, H, J} and W(s) possessing a realization {F, G, L, W₀}. Suppose that F has all eigenvalues with negative real part. Show that W₀'W₀ = J + J'. Define P by

PF + F'P = -LL'

and note that P is nonnegative definite symmetric by the lemma of Lyapunov. Apply this definition of P, rewritten as P(sI - F) + (-sI - F')P = -LL', to deduce that W'(-s)W(s) = W₀'W₀ + (PG + LW₀)'(sI - F)^{-1}G + G'(-sI - F')^{-1}(PG + LW₀). Conclude that

PG = H - LW₀

Show that if L'e^{Ft}x₀ = 0 for some nonzero x₀ and all t, the defining equation for P implies that Pe^{Ft}x₀ = 0 for all t, and then that H'e^{Ft}x₀ = 0 for all t. Conclude that P is positive definite. Explain the difficulty in applying this procedure for the generation of P if any element of Z(s) has a purely imaginary pole.
6.2 SOLUTION COMPUTATION VIA RICCATI EQUATIONS AND QUADRATIC MATRIX EQUATIONS

In this section we study the construction of a particular solution triple P, L, and W₀ to the equations

PF + F'P = -LL'
PG = H - LW₀
J + J' = W₀'W₀          (6.2.1)

under the restriction that J + J' is nonsingular. The implications of this restriction were discussed in the last section.
The following theorem was proved earlier in Section 5.4. We do not assume the reader has necessarily read this material; however, if he has not read it, he must of course take this theorem on faith.
Theorem. Let Z(s) be a positive real matrix of rational functions of s, with Z(∞) < ∞. Suppose that {F, G, H, J} is a minimal realization of Z(s), with J + J' = R, a nonsingular matrix. Then there exists a negative definite matrix Π̄ satisfying the equation

Π̄(F - GR^{-1}H') + (F' - HR^{-1}G')Π̄ - Π̄GR^{-1}G'Π̄ - HR^{-1}H' = 0          (6.2.2)

Moreover,

Π̄ = lim_{t→-∞} Π(t, t₁) = lim_{t₁→∞} Π(t, t₁)

where Π(·, t₁) is the solution of the Riccati equation

-dΠ/dt = Π(F - GR^{-1}H') + (F' - HR^{-1}G')Π - ΠGR^{-1}G'Π - HR^{-1}H'          (6.2.3)

with boundary condition Π(t₁, t₁) = 0.
The significance of this result in solving (6.2.1) is as follows. We define

P = -Π̄     L = (H + Π̄G)R^{-1/2}V'     W₀ = VR^{1/2}          (6.2.4)

where V is an arbitrary real orthogonal matrix, i.e., V'V = VV' = I, and R^{1/2} denotes the symmetric positive definite square root of R. Then we claim that (6.2.1) is satisfied. To see this, rearrange (6.2.2) as

Π̄F + F'Π̄ = (H + Π̄G)R^{-1}(H + Π̄G)'

and use (6.2.4) to yield the first of Eqs. (6.2.1). The second of Eqs. (6.2.1) follows from (6.2.4) immediately, as does the third.
Thus one way to achieve a particular solution of (6.2.1) is to solve (6.2.3) to determine Π̄, and then define P, L, and W₀ by (6.2.4). Technical procedures for solving (6.2.3) are discussed in the following sections.
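As an illustrative sketch in Python (an illustration of our own, assuming a solution Π̄ of (6.2.2) is already in hand, obtained by any of the methods of Sections 6.3 and 6.4), the construction (6.2.4) may be mechanized as follows; the function name and the choice V = I are arbitrary:

import numpy as np
from scipy.linalg import sqrtm

def triple_from_pibar(F, G, H, J, Pibar):
    # Mechanization of (6.2.4) with the simplest choice V = I
    R = J + J.T
    Rh = np.real(sqrtm(R))                   # symmetric square root R^{1/2}
    P = -Pibar
    L = (H + Pibar @ G) @ np.linalg.inv(Rh)  # (H + Pibar G) R^{-1/2}
    W0 = Rh
    return P, L, W0

# Scalar example Z(s) = 1/2 + 1/(s + 1), for which Pibar = -2 + sqrt(3)
F = np.array([[-1.0]]); G = H = np.array([[1.0]]); J = np.array([[0.5]])
P, L, W0 = triple_from_pibar(F, G, H, J, np.array([[np.sqrt(3) - 2]]))
print(P, L, W0)                              # 2 - sqrt(3), sqrt(3) - 1, 1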
We recall that the matrices L and W₀ define a transfer-function matrix W(s) via

W(s) = W₀ + L'(sI - F)^{-1}G = V[R^{1/2} + R^{-1/2}(H + Π̄G)'(sI - F)^{-1}G]          (6.2.5)

[Since V is an arbitrary real orthogonal matrix, (6.2.5) actually defines a family of W(s).] This transfer-function matrix W(s) satisfies

Z'(-s) + Z(s) = W'(-s)W(s)          (6.2.6)

Although infinitely many W(s) satisfy (6.2.6), the family defined by (6.2.5) has a distinguishing property. In conjunction with the proof of the theorem just quoted, we showed that the inverse of the W(s) above with V = I was defined
throughout Re[s] > 0. Multiplication by the constant orthogonal V does not affect the conclusion. In other words, with W(s) as in (6.2.5),

det W(s) ≠ 0     Re[s] > 0          (6.2.7)

Youla, in reference [5], proves that spectral factors W(s) of (6.2.6) with the property (6.2.7), and with poles of all elements restricted to Re[s] ≤ 0, are uniquely defined to within left multiplication by an arbitrary real constant orthogonal matrix. Therefore, in view of our insertion of the matrix V in (6.2.5), we can assert that the Riccati equation approach generates all spectral factors of (6.2.6) that have the property noted by Youla.
Now we wish to note a minor variation of the above procedure. If the reader reviews the earlier argument, he will note that the essential fact we used in proving that (6.2.4) generates a solution to (6.2.1) was that Π̄ satisfied (6.2.2). The fact that Π̄ was computable from (6.2.3) was strictly incidental to the proof that (6.2.4) worked. Therefore, if we can solve the algebraic equation (6.2.2) directly, without the computational aid of the differential equation (6.2.3), we still obtain a solution of (6.2.1). Direct solution of (6.2.2) is possible and is discussed in Section 6.4.
There is one factor, though, that we must not overlook. In general, the algebraic equation (6.2.2) will possess more than one negative definite solution. Just as a scalar quadratic equation may have more than one solution, so may a matrix quadratic equation. The exact number of solutions will generally be greater than two, but will be finite.
Of course, only one solution of the algebraic equation (6.2.2) is derivable by solving the Riccati differential equation (6.2.3). But the fact that a solution of the algebraic equation (6.2.2) is not derivable from the Riccati differential equation (6.2.3) does not debar it from defining a solution to the positive real lemma equations (6.2.1) via the defining equations (6.2.4).
Any solution of the algebraic equation (6.2.2) will generate a spectral factor family (6.2.5), different members of the family resulting from different choices of the orthogonal V. However, since our earlier proof of the invertibility of W(s) in Re[s] > 0 relied on using that particular solution Π̄ of the algebraic equation (6.2.2) that was derived from the Riccati equation (6.2.3), it will not in general be true that a spectral factor W(s) formed as in (6.2.5) by using any solution of the quadratic equation (6.2.2) will be invertible in Re[s] > 0. Only that W(s) formed by using the particular solution of the quadratic equation (6.2.2) derivable as the limit of the solution of the Riccati equation (6.2.3) will have the invertibility property.
In Section 6.4, when we consider procedures for solving the quadratic matrix equation (6.2.2) directly, we shall note how to isolate that particular solution of (6.2.2) obtainable from the Riccati equation, without of course having to solve the Riccati equation itself.
Finally, we note a simple property, sometimes important in applications, of the matrices L and W₀ generated by solving the algebraic or Riccati equations and applying (6.2.4). This property is inherited by the spectral factor W(s) defined by (6.2.5), and is that the number of rows of L', of W₀, and of W(s) is m, where Z(s) is an m × m matrix.
Examination of (6.2.1) or (6.2.6) shows that the number of rows of L', W₀, and W(s) is not intrinsically determined. However, a lower bound is imposed. Thus with J + J' nonsingular in (6.2.1), W₀ must have at least m rows to ensure that the rank of W₀'W₀ is equal to m, the rank of J + J'. Since L' and W(s) must have the same number of rows as W₀, it follows that the procedures for solving (6.2.1) specified hitherto lead to L' and W₀ [and, as a consequence, W(s)] possessing the minimum number of rows.
That there exist L' and W₀ with a nonminimal number of rows is easy to see. Indeed, let V be a matrix of m' > m rows and m columns satisfying V'V = I_m (such V always exist). Then (6.2.4) now defines L and W₀ with a nonminimal number of rows, and (6.2.5) defines W(s) also with a nonminimal number of rows. This is a somewhat trivial example. Later we shall come upon instances in which the matrix P in (6.2.1) is such that PF + F'P has rank greater than m. [Hitherto, this has not been the case, since rank LL', with L as in (6.2.4) and V m' × m with V'V = I_m, is independent of V.] With PF + F'P possessing rank greater than m, it follows that L' must have more than m rows for (6.2.1) to hold.
Problem 6.2.1: Assume available a solution triple for (6.2.1), with J + J' nonsingular and W₀ possessing m rows, where m is the size of Z(s). Show that L and W₀ may be eliminated to get an equation for P that is equivalent to (6.2.2) given the substitution P = -Π̄.

Problem 6.2.2: Assuming that F has all eigenvalues in Re[s] < 0, show that any solution Π̄ of (6.2.2) is negative definite. First rearrange (6.2.2) as -(Π̄F + F'Π̄) = -(Π̄G + H)R^{-1}(Π̄G + H)'. Use the lemma of Lyapunov to show that Π̄ is nonpositive definite. Assume the existence of a nonzero x₀ such that (Π̄G + H)'e^{Ft}x₀ = 0 for all t, show that Π̄e^{Ft}x₀ = 0 for all t, and deduce a contradiction. Conclude that Π̄ is negative definite.

Problem 6.2.3: For the positive real impedance Z(s) = 1/2 + 1/(s + 1), find a minimal realization {F, G, H, J} and set up the equation for Π̄. This equation, being scalar, is easily solved. Find two spectral factors by solving this equation, and verify that only one is nonzero throughout Re[s] > 0.

6.3 SOLVING THE RICCATI EQUATION
In this section we wish to comment on techniques for solving the Riccati differential equation

-dΠ/dt = Π(F - GR^{-1}H') + (F' - HR^{-1}G')Π - ΠGR^{-1}G'Π - HR^{-1}H'          (6.3.1)

where F, G, H, and R are known, Π(t, t₁) is to be found for t ≤ t₁, and the boundary condition is Π(t₁, t₁) = 0. In reality we are seeking lim_{t→-∞} Π(t, t₁) = lim_{t₁→∞} Π(t, t₁); the latter limit is obviously much easier to compute than the former.
One direct approach to solving (6.3.1) is simply to program the equation directly on a digital or analog computer. It appears that in practice the equation is computationally stable, and a theoretical analysis also predicts computational stability. Direct solution is thus quite feasible.
An alternative approach to the solution of (6.3.1) requires the solution of linear differential equations, instead of the nonlinear equation (6.3.1).

Riccati Equation Solution via Linear Equations

The computation of Π(·, t₁) is based on the following result, established in, for example, [7, 8].

Constructive Procedure for Obtaining Π(t, t₁). Consider the equations

Ẋ = (F - GR^{-1}H')X - GR^{-1}G'Y
Ẏ = HR^{-1}H'X - (F' - HR^{-1}G')Y          (6.3.2)

where X(t) and Y(t) are n × n matrices, n being the size of F, the initial condition is X(t₁) = I, Y(t₁) = 0, and the matrix M is defined in the obvious way [as the coefficient matrix when (6.3.2) is written as a single equation in [X' Y']']. Then the solution of (6.3.1) exists on [t, t₁] if and only if X^{-1} exists on [t, t₁], and

Π(t, t₁) = Y(t)X^{-1}(t)          (6.3.3)

We shall not derive this result here, but refer the reader to [7, 8]. Problem 6.3.1 asks for a part derivation.
Note that when F, G, H, and R = J + J' are derived from a positive real Z(s), existence of Π(t, t₁) as the solution of (6.3.1) is guaranteed for all t ≤ t₁; this means that X^{-1}(t) exists for all t ≤ t₁, and accordingly the formula (6.3.3) is valid for all t ≤ t₁.
An alternative expression for Π(t, t₁) is available, which expresses Π(t, t₁) in terms of exp M(t - τ). Call this 2n × 2n matrix Φ(t - τ) and partition it as

Φ(t - τ) = [[Φ₁₁(t - τ), Φ₁₂(t - τ)],[Φ₂₁(t - τ), Φ₂₂(t - τ)]]          (6.3.4)

where each Φ_ij(t - τ) is n × n. Then the definitions of X(t) and Y(t) imply that

X(t) = Φ₁₁(t - t₁)     Y(t) = Φ₂₁(t - t₁)

and so

Π(t, t₁) = Φ₂₁(t - t₁)Φ₁₁^{-1}(t - t₁)          (6.3.5)

Because of the constancy of F, G, H, and R, there is an analytic formula available for Φ(t - t₁). Certainly this formula involves matrix exponentiation, which may be difficult. In many instances, though, the exponentiation may be straightforward or perhaps amenable to one of the many numerical techniques now being developed for matrix exponentiation. It would appear that (6.3.2) is computationally unstable when solved on a digital computer. Therefore, it only seems attractive when analytic formulas for the Φ_ij(t - t₁) are used, and even then difficulties may be encountered.
Example 6.3.1. Consider the positive real Z(s) = 1/2 + 1/(s + 1). A minimal realization is provided by F = -1, G = H = 1, and J = 1/2, and the equations (6.3.2) become accordingly

Ẋ = -2X - Y     Ẏ = X + 2Y

Now Φ is given by exponentiating M(t - t₁), with M = [[-2, -1],[1, 2]]. One way to compute the exponential is to diagonalize the matrix by a similarity transformation. Thus

[[-2, -1],[1, 2]] = [[1, 1],[√3 - 2, -(√3 + 2)]] [[-√3, 0],[0, √3]] [[1, 1],[√3 - 2, -(√3 + 2)]]^{-1}

Consequently,

Φ(t - t₁) = [[1, 1],[√3 - 2, -(√3 + 2)]] [[e^{-√3(t-t₁)}, 0],[0, e^{√3(t-t₁)}]] [[1, 1],[√3 - 2, -(√3 + 2)]]^{-1}

From (6.3.5),

Π(t, t₁) = Φ₂₁(t - t₁)Φ₁₁^{-1}(t - t₁)

and then, on letting t → -∞,

Π̄ = √3 - 2 = -2 + √3
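The computation in this example is easily mechanized. The following Python sketch (an illustration of our own, not a prescribed algorithm of the text) forms M, computes Φ(t - t₁) by matrix exponentiation, and evaluates (6.3.5); for t - t₁ sufficiently negative the value approaches Π̄ = -2 + √3:

import numpy as np
from scipy.linalg import expm

# Data of Example 6.3.1
F = np.array([[-1.0]]); G = H = np.array([[1.0]]); R = np.array([[1.0]])
Ri = np.linalg.inv(R); n = F.shape[0]
M = np.block([[F - G @ Ri @ H.T, -G @ Ri @ G.T],
              [H @ Ri @ H.T,     -F.T + H @ Ri @ G.T]])   # here [[-2, -1], [1, 2]]

def Pi(t, t1):
    Phi = expm(M * (t - t1))            # transition matrix of (6.3.2)
    X, Y = Phi[:n, :n], Phi[n:, :n]     # X(t) = Phi_11, Y(t) = Phi_21
    return Y @ np.linalg.inv(X)         # Eq. (6.3.5)

print(Pi(-10.0, 0.0))                   # approaches -2 + sqrt(3) = -0.26795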
In Section 6.4 we shall see that the operations involved in diagonalizing the matrix

M = [[F - GR^{-1}H', -GR^{-1}G'],[HR^{-1}H', -F' + HR^{-1}G']]

may be used to compute Π̄ = lim_{t→-∞} Π(t, t₁) directly, avoiding the step of computing Φ(t - t₁) and Π(t, t₁) by the formula (6.3.5).
Problem 6.3.1: Consider (6.3.1) and (6.3.2). Show that if X^{-1}(t) exists for all t ≤ t₁, then Π(t, t₁) satisfies (6.3.3).

Problem 6.3.2: For the example discussed in the text, the Riccati equation becomes

-dΠ/dt = -4Π - Π² - 1

Solve this equation directly, and check the limiting value.
6.4 SOLVING THE QUADRATIC MATRIX EQUATION

In this section we study three techniques for solving the equation

Π(F - GR^{-1}H') + (F' - HR^{-1}G')Π - ΠGR^{-1}G'Π - HR^{-1}H' = 0          (6.4.1)

The first technique involves the calculation of the eigenvalues and eigenvectors of a 2n × 2n matrix, n being the size of F. It allows determination of all solutions of (6.4.1), and also allows us to isolate that particular solution of (6.4.1) that is the limit of the solution of the associated Riccati equation. The basic ideas appeared in [9, 10], with their application to the problem in hand in [3]. The second procedure is closely related to the first, but represents a potential saving computationally, since eigenvalues alone, rather than eigenvalues and eigenvectors, of a certain matrix must be calculated. The third technique is quite different. Only one solution of (6.4.1) is calculated, viz., that which is the limit of the solution of the associated Riccati equation, and the calculation is via a recursive difference equation. This technique appeared in [11]. A fourth technique appears in the problems and involves replacing (6.4.1) by a sequence of linear matrix equations.
Method Based on Eigenvalue and Eigenvector Computation

From the coefficient matrices in (6.4.1) we construct the matrix

M = [[F - GR^{-1}H', -GR^{-1}G'],[HR^{-1}H', -F' + HR^{-1}G']]          (6.4.2)

This can be shown to have the property that if λ is an eigenvalue, -λ is also an eigenvalue, and if λ is pure imaginary, λ has even multiplicity (Problem 6.4.1 requests this verification).
To avoid complications, we shall assume that the matrix M is diagonalizable. Accordingly, there exists a matrix T such that

T^{-1}MT = [[-Λ, 0],[0, Λ]]          (6.4.3)

where Λ is the direct sum of 1 × 1 matrices [λᵢ] with λᵢ real, and 2 × 2 matrices [[σᵢ, -ωᵢ],[ωᵢ, σᵢ]] with σᵢ, ωᵢ real. Let T be partitioned into four n × n submatrices as

T = [[T₁₁, T₁₂],[T₂₁, T₂₂]]          (6.4.4)

Then if T₁₁ is nonsingular, a solution of (6.4.1) is provided by

Π̄ = T₂₁T₁₁^{-1}          (6.4.5)
Let us verify this result. Multiplying both sides of (6.4.3) on the left by T, we obtain

(F - GR^{-1}H')T₁₁ - GR^{-1}G'T₂₁ = -T₁₁Λ
HR^{-1}H'T₁₁ - (F' - HR^{-1}G')T₂₁ = -T₂₁Λ

From these equations we have

T₂₁T₁₁^{-1}(F - GR^{-1}H') - T₂₁T₁₁^{-1}GR^{-1}G'T₂₁T₁₁^{-1} = -T₂₁ΛT₁₁^{-1}
HR^{-1}H' - (F' - HR^{-1}G')T₂₁T₁₁^{-1} = -T₂₁ΛT₁₁^{-1}          (6.4.6)

The right-hand sides of both equations are the same. On equating the left sides and using (6.4.5), Eq. (6.4.1) is recovered.
Notice that we have not restricted the matrix Λ in (6.4.3) to having all nonnegative or all nonpositive diagonal entries. This means that there are many different Λ that can arise in (6.4.3). If +λ (and thus -λ) is an eigenvalue of M, then we can assign +λ to either +Λ or -Λ; so there can be up to 2^n different Λ, with this number being attained only when M has all its eigenvalues real, with no two the same. Of course, different Λ yield different T and thus different Π̄, provided T₁₁^{-1} exists.
One might ask whether there are solutions of (6.4.1) not obtainable in the manner specified above. The answer is no. Reference [9] shows for a related problem that any Π̄ satisfying (6.4.1) must have the form of (6.4.5).
The above technique can be made to yield that particular Π̄ which is the limit of the associated Riccati equation solution. Choose Λ to have nonnegative diagonal entries. We claim that (6.4.5) then defines the required Π̄. To see this, observe from the first of Eqs. (6.4.6) that

F - GR^{-1}H' - GR^{-1}G'T₂₁T₁₁^{-1} = -T₁₁ΛT₁₁^{-1}

or

F - GR^{-1}(H' + G'Π̄) = -T₁₁ΛT₁₁^{-1}          (6.4.7)

Since Λ has nonnegative diagonal entries, all eigenvalues of F - GR^{-1}(H' + G'Π̄) must have nonpositive real parts.
Now the spectral factor family defined by Π̄ is, from an earlier section,

W(s) = V[R^{1/2} + R^{-1/2}(H + Π̄G)'(sI - F)^{-1}G]          (6.4.8)

where VV' = V'V = I. Problem 6.4.2 requests proof that if F - GR^{-1}(H' + G'Π̄) has all eigenvalues with nonpositive real parts, W^{-1}(s) exists in Re[s] > 0. (We have noted this result also in an earlier necessity proof of the positive real lemma.) As we know, the family of W(s) of the form of (6.4.8) with the additional property that W^{-1}(s) exists in Re[s] > 0 is unique, and is also determinable via a Π̄ that is the limit of a Riccati equation solution. Clearly, different Π̄ could not yield the same family of W(s). Hence the Π̄ defined by the special choice of Λ is that which is the limit of the solution of the associated Riccati equation.
Example 6.4.1. Consider Z(s) = 1/2 + 1/(s + 1), for which F = -1, G = H = R = 1. The matrix M is

M = [[-2, -1],[1, 2]]

with eigenvalues ±√3. Direct solution of the scalar quadratic equation (6.4.1), here -Π² - 4Π - 1 = 0, yields the two solutions Π̄ = -2 ± √3. Alternatively, to obtain that Π̄ which is the limit of the Riccati equation solution, we have

[[-2, -1],[1, 2]] = [[1, 1],[√3 - 2, -(√3 + 2)]] [[-√3, 0],[0, √3]] [[1, 1],[√3 - 2, -(√3 + 2)]]^{-1}

(Thus Λ = [√3].) But now

Π̄ = T₂₁T₁₁^{-1} = √3 - 2 = -2 + √3

This agrees with the calculation obtained in the previous section.
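In floating-point arithmetic the same construction reads as follows. This Python sketch (ours, not the text's) assumes, as in the text, that M is diagonalizable with no pure imaginary eigenvalues, so that exactly n eigenvalues have negative real parts:

import numpy as np

def pibar_eig(F, G, H, R):
    # Form M of (6.4.2); the eigenvectors for the eigenvalues with
    # negative real part give [T11; T21], and Pibar = T21 T11^{-1} as in (6.4.5)
    Ri = np.linalg.inv(R)
    n = F.shape[0]
    M = np.block([[F - G @ Ri @ H.T, -G @ Ri @ G.T],
                  [H @ Ri @ H.T,     -F.T + H @ Ri @ G.T]])
    w, v = np.linalg.eig(M)
    T = v[:, w.real < 0]      # exactly n columns under the stated assumption
    return np.real(T[n:, :] @ np.linalg.inv(T[:n, :]))

F = np.array([[-1.0]]); G = H = R = np.array([[1.0]])
print(pibar_eig(F, G, H, R))  # [[-0.2679...]], i.e. -2 + sqrt(3)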
Method Based on Eigenvalue Computation Only

As before, we form the matrix M. But now we require only its eigenvalues. If p(s) is the characteristic polynomial of M, the already mentioned properties of the eigenvalues of M guarantee the existence of a monic polynomial r(s) such that

p(s) = (-1)^n r(s)r(-s)          (6.4.9)

If desired, r(s) can be taken to have no zeros in Re[s] > 0.
Now with r(s) = s^n + a₁s^{n-1} + ⋯ + a_n, let r(M) = M^n + a₁M^{n-1} + ⋯ + a_nI. We claim that a solution Π̄ of the quadratic equation (6.4.1) is defined by

r(M)[[I],[Π̄]] = 0          (6.4.10)

Moreover, that solution which is also the limit of the solution of the associated Riccati equation is achieved by taking r(s) to have no zeros in Re[s] > 0.
We shall now justify the procedure. The argument is straightforward; for convenience, we shall restrict ourselves to the special case when r(s) has no zeros in Re[s] > 0.
Let T be as in (6.4.3), with diagonal entries of Λ possessing nonnegative real parts. Then

r(M)[[T₁₁],[T₂₁]] = [[T₁₁ r(-Λ)],[T₂₁ r(-Λ)]] = 0

The last step follows because the zeros of r(s) are the nonpositive real part eigenvalues of M, which are also eigenvalues of -Λ. The Cayley-Hamilton theorem guarantees that r(-Λ) will equal zero. Now

r(M)[[T₁₁],[T₂₁]]T₁₁^{-1} = 0

whence

r(M)[[I],[T₂₁T₁₁^{-1}]] = 0

The first method discussed established that T₂₁T₁₁^{-1} = Π̄, with Π̄ the limit of the Riccati equation solution.
Equation (6.4.10) is straightforward to solve, being a linear matrix equation. For large n, the computational difficulties associated with finding eigenvalues and eigenvectors increase enormously, and almost certainly direct solution of the Riccati equation or use of the next method to be noted are the only feasible approaches to obtaining Π̄.
Example 6.4.2. With the same positive real function as in Example 6.4.1, viz., Z(s) = 1/2 + 1/(s + 1), with F = -1, G = H = R = 1, we have

M = [[-2, -1],[1, 2]]

and p(s) = s² - 3. Then r(s) = s + √3 and

r(M) = M + √3 I = [[√3 - 2, -1],[1, √3 + 2]]

Consequently, (6.4.10) becomes

[[√3 - 2, -1],[1, √3 + 2]][[1],[Π̄]] = 0

yielding, as before, Π̄ = -2 + √3.
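A sketch of the eigenvalue-only method in Python follows (an illustration of our own; no claim is made that this is the numerically preferred implementation). Here np.poly and a Horner evaluation of r(M) stand in for the hand computation above:

import numpy as np

def pibar_charpoly(F, G, H, R):
    # Eigenvalue-only method: choose r(s) from the eigenvalues of M with
    # nonpositive real part, form r(M), and solve r(M)[I; Pibar] = 0 (6.4.10)
    Ri = np.linalg.inv(R)
    n = F.shape[0]
    M = np.block([[F - G @ Ri @ H.T, -G @ Ri @ G.T],
                  [H @ Ri @ H.T,     -F.T + H @ Ri @ G.T]])
    lam = np.linalg.eigvals(M)
    r = np.real(np.poly(lam[lam.real <= 0]))   # monic r(s), stable zeros
    rM = np.zeros_like(M)
    for c in r:                                # Horner evaluation of r(M)
        rM = rM @ M + c * np.eye(2 * n)
    # Top block row of r(M)[I; Pibar] = 0 solved for Pibar
    return np.linalg.lstsq(rM[:n, n:], -rM[:n, :n], rcond=None)[0]

F = np.array([[-1.0]]); G = H = R = np.array([[1.0]])
print(pibar_charpoly(F, G, H, R))              # -2 + sqrt(3)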
Method Based on Recursive Difference Equation

The proof of the method we are about to state relies for its justification on concepts from optimal-control theory that would take us too long to present. Therefore, we refer the interested reader to [11] for background, and content ourselves here with a statement of the procedure.
Define the matrices A, B, C, and U by

A = (I + F)(I - F)^{-1}     B = √2(I - F)^{-1}G     C = √2(I - F')^{-1}H
U = R + C'(A + I)^{-1}B + B'(A' + I)^{-1}C          (6.4.11)

and form the recursive difference equation

Π(n + 1) = A'Π(n)A - [A'Π(n)B + C][U + B'Π(n)B]^{-1}[B'Π(n)A + C']          (6.4.12)

with initial condition Π(0) = 0. The significance of (6.4.12) is that

Π̄ = lim_{n→∞} Π(n)          (6.4.13)

where Π̄ is that solution of (6.4.1) which is also the limit of the associated Riccati equation solution. This method moreover extends to singular R.
As an example, consider again Z(s) = 1/2 + 1/(s + 1), with F = -1, G = H = R = 1. It follows that A = 0, B = C = 1/√2, U = 2, and (6.4.12) becomes

Π(n + 1) = -[4 + Π(n)]^{-1}

The iterates are Π(0) = 0, Π(1) = -.25, Π(2) = -.26667, Π(3) = -.26785, Π(4) = -.26794. As we have seen, Π̄ = -2 + √3 = -.26795 to five figures. Thus four or five iterations suffice in this case.
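The iteration is equally simple to program. The following Python sketch (ours) implements the forms of (6.4.11) and (6.4.12) given above, with the general constant a introduced below; a = 1 reproduces the iterates just listed:

import numpy as np

def pibar_recursive(F, G, H, R, a=1.0, iters=50):
    # Recursive difference equation (6.4.11)-(6.4.12); aI - F must be nonsingular
    n = F.shape[0]; I = np.eye(n)
    Ainv = np.linalg.inv(a * I - F)
    A = (a * I + F) @ Ainv
    B = np.sqrt(2 * a) * Ainv @ G
    C = np.sqrt(2 * a) * Ainv.T @ H
    U = R + C.T @ np.linalg.inv(A + I) @ B + B.T @ np.linalg.inv(A.T + I) @ C
    Pi = np.zeros((n, n))
    for _ in range(iters):
        S = np.linalg.inv(U + B.T @ Pi @ B)
        Pi = A.T @ Pi @ A - (A.T @ Pi @ B + C) @ S @ (B.T @ Pi @ A + C.T)
    return Pi

F = np.array([[-1.0]]); G = H = R = np.array([[1.0]])
print(pibar_recursive(F, G, H, R))   # converges to -2 + sqrt(3)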
Essentially, Eqs. (6.4.11) and (6.4.12) define an optimal-control problem and its solution in discrete time. The limiting solution of the problem is set up to be the same as Π̄.
The reader will notice that if F has an eigenvalue at +1, the matrix A in (6.4.11) will not exist. The way around this difficulty is to modify (6.4.11), defining A, B, C, and U by

A = (aI + F)(aI - F)^{-1}     B = √(2a)(aI - F)^{-1}G     C = √(2a)(aI - F')^{-1}H
U = R + C'(A + I)^{-1}B + B'(A' + I)^{-1}C

Here a is a real constant that may be selected arbitrarily, subject to the restriction that aI - F must be nonsingular. With these new definitions, (6.4.12) and (6.4.13) remain unchanged.
Experimental simulations suggest that a satisfactory value of a is n^{-1} tr F, where F is n × n. Variation in a causes variation in the rate of convergence of the sequence of iterates, and variation in the amount of roundoff error.
Problem 6.4.1: Show that if λ is an eigenvalue of M in (6.4.2), so is -λ. (From an eigenvector u satisfying Mu = λu, construct a row vector v' such that v'M = -λv'.) Also show that if λ is an eigenvalue that is pure imaginary, it has even multiplicity. Do this by first proving

det(sI - M) = det(sI - F) det(sI + F') det[R^{-1}(Z(s) + Z'(-s))]

and using the nonnegativity of Z(s) + Z'(-s) for s = jω.
Problem 6.4.2: Suppose that W(s) = R^{1/2} + R^{-1/2}(Π̄G + H)'(sI - F)^{-1}G. Prove that if F - GR^{-1}(H' + G'Π̄) has no eigenvalues with positive real parts, then W^{-1}(s) exists throughout Re[s] > 0.

Problem 6.4.3: Choose a degree 2 positive real Z(s) and construct a minimal realization for it. Apply the three methods of this section to solve (6.4.1). Compare the computational efficiencies of each method.
Problem 6.4.4: Use the eigenvalue-eigenvector computation method to construct a spectral factor W(s) for which W^{-1}(s) exists everywhere in Re[s] < 0.

Problem 6.4.5: With a positive real matrix Z(s) of minimal realization {F, G, H, J} with R = J + J' nonsingular, we can associate a quadratic equation for the matrix Π̄. Show that the inverse of each such Π̄ satisfies the quadratic equation associated with the realization {F', H, G, J'} of Z'(s). If F' - HR^{-1}(G' + H'Π̄^{-1}) has eigenvalues with negative real parts [which would be the case if Π̄^{-1} were determined as the limiting solution of the Riccati equation associated with Z'(s)], show that F - GR^{-1}(H' + G'Π̄) has eigenvalues with positive real parts.
Problem 6.4.6: Show that an attempted solution of the quadratic equation (6.4.1) with a Newton-Raphson procedure leads to the iterative equation

Π_{n+1}[F - GR^{-1}(H' + G'Π_n)] + [F' - (H + Π_nG)R^{-1}G']Π_{n+1} = HR^{-1}H' - Π_nGR^{-1}G'Π_n

(If you are unfamiliar with Newton-Raphson procedures, pass immediately to the remainder of the problem.) The algorithm is initialized by taking Π₀ = 0. Define

F_n = F - GR^{-1}(H' + G'Π_n)

As the basis of an inductive argument to establish convergence, show that Re λᵢ[F₀] < 0 and assume that Re λᵢ[F_n] < 0. Evaluate (Π_{n+1} - Π_n)F_n + F_n'(Π_{n+1} - Π_n) and show that Π_{n+1} - Π_n ≤ 0. Show also that Π̄ - Π_{n+1} ≤ 0 by evaluating (Π̄ - Π_{n+1})F_n + F_n'(Π̄ - Π_{n+1}), where Π̄ is that solution of (6.4.1) derivable by solving the Riccati equation (6.3.1). Use the inequality Π̄ ≤ 0 and an expression for (Π̄ - Π_{n+1})F_{n+1} + F_{n+1}'(Π̄ - Π_{n+1}) to show that Re λᵢ[F_{n+1}] < 0. Conclude that Π_n converges monotonically to Π̄.
6.5 SOLUTION COMPUTATION VIA SPECTRAL FACTORIZATION
In this section we study the derivation, via spectral factorization, of a particular solution triple P, L, and W₀ to the positive real lemma equations

PF + F'P = -LL'
PG = H - LW₀
J + J' = W₀'W₀          (6.5.1)

We consider first the case when F has all negative real part eigenvalues, and then the case when F has pure imaginary eigenvalues. Finally, we link both cases together. As will be seen, the case when F has pure imaginary eigenvalues corresponds to the matrix Z(s) = H'(sI - F)^{-1}G being lossless (the presence or absence of J is irrelevant), and the computation of solutions to (6.5.1) is especially straightforward. This case is therefore important in its own right.

Case of F Possessing Negative Real Part Eigenvalues

In considering this case we shall restate a number of results, here without proof, that we have already discussed in presenting a necessity proof for the positive real lemma based on spectral factorization.
First note that the eigenvalue restriction on F implies that no element of Z(s) can have a pole on Re[s] = 0 (or, of course, in Re[s] > 0). A procedure specified in [5] then yields a matrix W(s) such that

Z'(-s) + Z(s) = W'(-s)W(s)          (6.5.2)

with W(s) possessing the following additional properties:

1. W(s) is r × m, where Z(s) is m × m and Z'(-s) + Z(s) has rank r almost everywhere.
2. W(s) has no element with a pole in Re[s] ≥ 0.
3. W(s) has constant rank r in Re[s] > 0; i.e., there exists a right inverse for W(s) defined throughout Re[s] > 0.
4. W(s) is unique to within left multiplication by an arbitrary real constant orthogonal r × r matrix.

In view of property 4 we can think of the result of [5] as specifying a family of W(s) satisfying (6.5.2) and properties 1, 2, and 3, members of the family differing by left multiplication by an arbitrary real constant orthogonal r × r matrix. In the case when Z(s) is scalar, the orthogonal matrix degenerates to being a scalar +1 or -1.
To solve (6.5.1), we shall proceed in two steps: first, find W(s) as described above; second, from W(s) define P, L, and W₀.
For the execution of the first step, we refer the reader to [5] in the case of matrix Z(s), and indicate briefly here the procedure for scalar Z(s). This procedure was discussed more fully in Chapter 5 in the necessity proof of the positive real lemma by spectral factorization.
We represent Z(s) as the ratio of two polynomials, with the denominator polynomial monic; thus

Z(s) = q(s)/p(s)          (6.5.3)

Then we form q(s)p(-s) + p(s)q(-s) and factor it in the following fashion:

q(s)p(-s) + p(s)q(-s) = r(s)r(-s)          (6.5.4)

where r(s) is a real polynomial with no zero in Re[s] > 0 (the earlier discussion explains why this factorization is possible). Then we set

W(s) = r(s)/p(s)          (6.5.5)

Problem 6.5.1 asks for verification that this W(s) satisfies all the requisite conditions.
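For rational scalar Z(s), the factorization (6.5.4) reduces to polynomial arithmetic. A Python sketch follows (an illustration of our own, assuming no zeros of q(s)p(-s) + p(s)q(-s) on Re[s] = 0, so that zeros occur in pairs (z, -z); the helper names are ours):

import numpy as np

def neg_arg(c):
    # Coefficients of c(-s), given coefficients of c(s) (highest power first)
    c = np.asarray(c, dtype=float)
    n = len(c) - 1
    return np.array([ci * (-1.0) ** (n - i) for i, ci in enumerate(c)])

def scalar_spectral_factor(q, p):
    # Form q(s)p(-s) + p(s)q(-s) and factor it as r(s)r(-s), Eq. (6.5.4)
    phi = np.polyadd(np.polymul(q, neg_arg(p)), np.polymul(p, neg_arg(q)))
    phi = np.trim_zeros(phi, 'f')
    half = np.roots(phi)
    half = half[half.real < 0]            # left-half-plane zero of each pair
    d = len(half)
    k = np.sqrt(phi[0] / (-1.0) ** d)     # leading coefficient of phi is (-1)^d k^2
    return np.real(k * np.poly(half))     # coefficients of r(s)

# Example 6.5.1 below: Z(s) = (s + 2)/(s^2 + 4s + 3)
print(scalar_spectral_factor([1.0, 2.0], [1.0, 4.0, 3.0]))   # [2, 2*sqrt(3)]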
The second step required to obtain a solution of (6.5.1) is to pass from a W(s) satisfying (6.5.2) and the associated additional constraints to matrices P, L, and W₀ satisfying (6.5.1).
We now quote a further result from the earlier necessity proof of the positive real lemma by spectral factorization:

If W(s) satisfies (6.5.2) and the associated additional conditions, then W(s) possesses a minimal realization of the form {F, G, L, W₀} for some matrices L and W₀. Further, if the first of Eqs. (6.5.1) is used to define P, the second is automatically satisfied. The third is also satisfied.

Accordingly, to solve (6.5.1) we must

1. Construct a minimal realization of W(s) with the same F and G matrices as Z(s); this defines L and W₀ by W(s) = W₀ + L'(sI - F)^{-1}G.
2. Solve the first of Eqs. (6.5.1) to give P. Then P, L, and W₀ jointly satisfy (6.5.1).
Let us outline each of these steps in turn. Both turn out to be simple.
From W(s) we can construct a minimal realization by any known method. In general, this realization will not have the same F and G matrix as Z(s). Denote it therefore by {F_w, G_w, L_w, W₀}. We are assured that there exists a minimal realization of the form {F, G, L, W₀}, and therefore there must exist a nonsingular T such that

TF_wT^{-1} = F     TG_w = G     (T^{-1})'L_w = L          (6.5.6)

If we can find T, then construction of L is immediate. So our immediate problem is to find T, the data at our disposal being knowledge of F_w, G_w, L_w, and W₀ from the minimal realization of W(s), and F and G from the minimal realization of Z(s).
But now Eqs. (6.5.6) imply that

T[G_w  F_wG_w  ⋯  F_w^{n-1}G_w] = [G  FG  ⋯  F^{n-1}G]          (6.5.7)

where n is the dimension of F. Equation (6.5.7) is solvable for T, because the matrix [G_w  F_wG_w  ⋯  F_w^{n-1}G_w] has rank n through the minimality, and thus complete controllability, of {F_w, G_w, L_w, W₀}. Writing (6.5.7) as TV_w = V, where V_w and V are obviously defined, we have

T = VV_w'(V_wV_w')^{-1}          (6.5.8)

the inverse existing because V_w has rank n, equal to the number of its rows.
In summary, therefore, to obtain a minimal realization for W(s) of the form {F, G, L, W₀}, first form any minimal realization {F_w, G_w, L_w, W₀}.
Then with V = [G  FG  ⋯  F^{n-1}G] and V_w similarly defined, define T by (6.5.8). Then the first two equations of (6.5.6) are satisfied, and the third yields L.
Having obtained a minimal realization for W(s) of the form {F, G, L, W₀}, all that remains is to solve the first of Eqs. (6.5.1). This, being a linear matrix equation for P, is readily solved by procedures outlined in, e.g., [12].
The most difficult part computationally of the whole process of finding a solution triple P, L, W₀ to (6.5.1) is undoubtedly the determination of W(s) in frequency-domain form, especially when Z(s) is a matrix. The manipulations involved in passing from W(s) to P, L, and W₀ are by comparison straightforward, involving operations no more complicated than matrix inversion or solving linear matrix equations.
It is interesting to compare the different solutions of (6.5.1) that follow from different members of the family of W(s) satisfying (6.5.2) and properties 1, 2, and 3. As we know, such W(s) differ by left multiplication by a real constant orthogonal r × r matrix. Let V be the matrix relating two particular W(s), say W₁(s) and W₂(s). With W₁(s) and W₂(s) possessing minimal realizations {F, G, L₁, W₀₁} and {F, G, L₂, W₀₂}, it follows that L₂' = VL₁' and W₀₂ = VW₀₁. Now observe that L₁L₁' = L₂L₂' and L₁W₀₁ = L₂W₀₂, because V is orthogonal. It follows that the matrix P in (6.5.1) is the same for W₁(s) and W₂(s), and thus the matrix P is independent of the particular member of the family of W(s) satisfying (6.5.2) and the associated properties 1, 2, and 3. Put another way, P is associated purely with the family.
Example 6.5.1. Consider Z(s) = (s + 2)(s² + 4s + 3)^{-1}, which can be verified to be positive real. This function possesses a minimal realization

F = [[0, 1],[-3, -4]]     G = [[0],[1]]     H = [[2],[1]]     J = 0

Now observe that

Z(s) + Z(-s) = (12 - 4s²)/[(s² + 4s + 3)(s² - 4s + 3)]

Evidently, W(s) = (2s + 2√3)(s² + 4s + 3)^{-1}, and a minimal realization for W(s), with the same F and G matrices as the minimal realization for Z(s), is given by

F = [[0, 1],[-3, -4]]     G = [[0],[1]]     L = [[2√3],[2]]     W₀ = 0

The equation for P is then

P[[0, 1],[-3, -4]] + [[0, -3],[1, -4]]P = -[[12, 4√3],[4√3, 4]]

That is,

P = [[11 - 4√3, 2],[2, 1]]

Of course, L' = [2√3  2] and W₀ = 0.
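The passage from W(s) to {P, L, W₀} involves only the controllability-matrix computation (6.5.8) and a linear (Lyapunov) equation. A Python sketch using the data of Example 6.5.1 follows; the companion-form realizations are an assumption of ours consistent with the L' quoted above (so that T happens to be the identity):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def ctrb(F, G):
    # Controllability matrix [G FG ... F^{n-1}G]
    n = F.shape[0]
    cols = [G]
    for _ in range(n - 1):
        cols.append(F @ cols[-1])
    return np.hstack(cols)

F = np.array([[0.0, 1.0], [-3.0, -4.0]]); G = np.array([[0.0], [1.0]])
H = np.array([[2.0], [1.0]])
Fw, Gw, Lw = F, G, np.array([[2 * np.sqrt(3)], [2.0]])   # realization of W(s)
W0 = np.array([[0.0]])
Vw = ctrb(Fw, Gw)
T = ctrb(F, G) @ Vw.T @ np.linalg.inv(Vw @ Vw.T)         # Eq. (6.5.8)
L = np.linalg.inv(T).T @ Lw                              # third of (6.5.6)
P = solve_continuous_lyapunov(F.T, -(L @ L.T))           # F'P + PF = -LL'
print(P)                                                 # [[11 - 4*sqrt(3), 2], [2, 1]]
print(P @ G - (H - L @ W0))                              # ~ 0, second of (6.5.1)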
Case of F Possessing Pure Imaginary Eigenvalues - Lossless Z(s)

We wish to tackle the solution of (6.5.1) with F possessing pure imaginary eigenvalues only. We shall show that this is essentially the same as the problem of solving (6.5.1) when Z(s) is lossless. We shall then note that the solution is very easy to obtain and is the only one possible.
Because F has pure imaginary eigenvalues, Z(s) has pure imaginary poles. Then, as outlined in an earlier chapter, we can write

Z(s) = J + Σᵢ (Aᵢs + Bᵢ)(s² + ωᵢ²)^{-1} + s^{-1}C          (6.5.9)

where the ωᵢ are real, and the matrices J, C, Aᵢ, and Bᵢ satisfy constraints that we have listed earlier. The matrix

Z₁(s) = Σᵢ (Aᵢs + Bᵢ)(s² + ωᵢ²)^{-1} + s^{-1}C          (6.5.10)

is lossless positive real and has a minimal realization {F, G, H}.
Now because F has pure imaginary eigenvalues and P is positive definite, the only L for which the equation PF + F'P = -LL' can be valid is L = 0, by an extension of the lemma of Lyapunov described in an earlier chapter. Consequently, our search for solutions of (6.5.1) becomes one for solutions of

PF + F'P = 0
PG = H
J + J' = W₀'W₀          (6.5.11)

Now observe that we have two decoupled problems: one requires the determination of W₀ satisfying the last of Eqs. (6.5.11); this is trivial and will
not be further discussed. The other requires the determination of a positive definite P such that

PF + F'P = 0     PG = H          (6.5.12)

These equations, which exclude J, are the positive real lemma equations associated with the lossless positive real matrix Z₁(s), since Z₁(s) has minimal realization {F, G, H}.
Thus we have shown that essentially the problem of solving (6.5.1) with F possessing pure imaginary eigenvalues is one of solving (6.5.12), the positive real lemma equations associated with a lossless Z(s). We now solve (6.5.12), and in so doing, establish that P is unique.
The solution of (6.5.12) is very straightforward. One way to achieve it is as follows. (Problem 6.5.2 asks for a second technique, which leads to the same value of P.) From (6.5.12) we have

PFG = -F'PG = -F'H
PF²G = -F'PFG = (F')²H
PF³G = -F'PF²G = -(F')³H
etc.

So

P[G  FG  F²G  ⋯  F^{n-1}G] = [H  -F'H  (F')²H  ⋯  (-1)^{n-1}(F')^{n-1}H]          (6.5.13)

Set

V₁ = [G  FG  ⋯  F^{n-1}G]     V₂ = [H  -F'H  ⋯  (-1)^{n-1}(F')^{n-1}H]          (6.5.14)

Then V₁ has rank n if F is n × n, because of complete controllability. Consequently (V₁V₁')^{-1} exists and

P = V₂V₁'(V₁V₁')^{-1}          (6.5.15)

The lossless positive real lemma guarantees existence of a positive definite P satisfying (6.5.12). Also, (6.5.15) is a direct consequence of (6.5.12). Therefore, (6.5.15) defines a solution of (6.5.12) that must be positive definite symmetric, and because (6.5.15) defines a unique P, the solution of (6.5.12) must be unique.
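Formula (6.5.15) is immediate to program. The following Python sketch (ours) applies it to the lossless function Z₁(s) = s/(s² + 1), a small example of our own choosing; the computed P is the identity, as direct substitution in (6.5.12) confirms:

import numpy as np

def lossless_P(F, G, H):
    # Implements (6.5.13)-(6.5.15): P V1 = V2 with
    # V1 = [G FG ... F^{n-1}G], V2 = [H -F'H ... (-1)^{n-1}(F')^{n-1}H]
    n = F.shape[0]
    V1, V2 = [G], [H]
    for _ in range(n - 1):
        V1.append(F @ V1[-1])
        V2.append(-F.T @ V2[-1])
    V1, V2 = np.hstack(V1), np.hstack(V2)
    return V2 @ V1.T @ np.linalg.inv(V1 @ V1.T)

# Z1(s) = s/(s^2 + 1), realization chosen by us
F = np.array([[0.0, 1.0], [-1.0, 0.0]])
G = np.array([[0.0], [1.0]]); H = np.array([[0.0], [1.0]])
print(lossless_P(F, G, H))    # identity matrix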
Case of F Possessing Nonpositive Real Part Eigenvalues

We now consider the solution of (6.5.1) when F may have negative or zero real part eigenvalues. Thus the entries of Z(s) may have poles in Re[s] < 0 or on Re[s] = 0.
The basic idea is to apply an earlier stated decomposition property to break up the problem of solving (6.5.1) into two separate problems, both requiring the solutions of equations like (6.5.1) but for cases that we know how to handle. Then we combine the two separate solutions together. Accordingly, we write Z(s) as

Z(s) = Z₀(s) + Σᵢ (Aᵢs + Bᵢ)(s² + ωᵢ²)^{-1} + s^{-1}C = Z₀(s) + Z₁(s)          (6.5.16)

Here the ωᵢ are real, and the matrices C, Aᵢ, and Bᵢ satisfy constraints that were listed earlier and do not concern us here, but which guarantee that Z₁(s) = Σᵢ (Aᵢs + Bᵢ)(s² + ωᵢ²)^{-1} + s^{-1}C is lossless positive real. The matrix Z₀(s) is also positive real, and each element has all poles restricted to Re[s] < 0.
Let {F₀, G₀, H₀, J₀} be a minimal realization for Z₀(s) and {F₁, G₁, H₁} a minimal realization for Z₁(s). Evidently, a realization for Z(s) is provided by

F̃ = diag{F₀, F₁}     G̃ = [[G₀],[G₁]]     H̃ = [[H₀],[H₁]]     J = J₀          (6.5.17)

This realization is moreover minimal, since no element of Z₀(s) has a pole in common with any element of Z₁(s). In general, an arbitrary minimal realization {F, G, H, J} for Z(s) is such that J = J₀, although it will not normally be true that F = F̃, G = G̃, etc. But there exists a T such that

F = TF̃T^{-1}     G = TG̃     H = (T^{-1})'H̃

Further, T is readily computable from the controllability matrices

V = [G  FG  ⋯  F^{n-1}G]     and     Ṽ = [G̃  F̃G̃  ⋯  F̃^{n-1}G̃]
which both have full rank. In fact, TṼ = V, and so

T = VṼ'(ṼṼ')^{-1}          (6.5.18)
And now the determination of P, L, and W₀ satisfying (6.5.1) for an arbitrary minimal realization {F, G, H, J} is straightforward. Obtain P₀, L₀, and W₀ satisfying

P₀F₀ + F₀'P₀ = -L₀L₀'
P₀G₀ = H₀ - L₀W₀
J + J' = W₀'W₀          (6.5.19)

and P₁ satisfying

P₁F₁ + F₁'P₁ = 0     P₁G₁ = H₁          (6.5.20)

Of course, P₀ and P₁ are obtained by the procedures already discussed in this section. Then it is straightforward to verify that solutions of (6.5.1) are provided by W₀ and

P = (T^{-1})'[diag{P₀, P₁}]T^{-1}     L = (T^{-1})'[[L₀],[0]]          (6.5.21)

Of course, the transfer-function matrix W(s) = W₀ + L'(sI - F)^{-1}G is a spectral factor of Z(s) + Z'(-s). In fact, slightly more is true. Use of (6.5.17) and (6.5.21) shows that

W(s) = W₀ + L₀'(sI - F₀)^{-1}G₀          (6.5.22)

Thus W(s) is a spectral factor of both Z'(-s) + Z(s) and Z₀'(-s) + Z₀(s). This is hardly surprising, since the fact that Z₁(s) is lossless guarantees that

Z'(-s) + Z(s) = Z₀'(-s) + Z₀(s)          (6.5.23)

The realization {F, G, L, W₀} of W(s) is clearly nonminimal, because {F₀, G₀, L₀, W₀} is a realization of W(s) and F₀ has smaller dimension than F. Since [F, G] is completely controllable, [F, L] must not be completely observable (this is easy to check directly). We still require this nonminimal realization of W(s), however, in order to generate a P, L, and W₀ satisfying the positive real lemma equations. The matrices P₀, L₀, and W₀ associated with a minimal realization of W(s) simply are not solutions of the positive real lemma equations. This is somewhat surprising; usually, minimal realizations are the only ones of interest.
The spectral factor W(s) = W₀ + L'(sI - F)^{-1}G of course can be calculated to have the three properties 1, 2, and 3 listed in conjunction with Eq. (6.5.2); it is unique to within multiplication on the left by an arbitrary real constant orthogonal matrix.
We recall that the computation of solutions of (6.5.1) via a Riccati equation solution allowed us to define a family of spectral factors satisfying (6.5.2) and properties 1, 2, and 3. It follows therefore that the procedure just presented and the earlier procedure lead to the same family of solutions of (6.5.1). As we have already noted, the matrix P is the same for all members of the family, while L' and W₀ differ through left multiplication by an orthogonal matrix. Therefore, the technique of this section and that of Section 6.2 lead to the same matrix P satisfying (6.5.1).
Example 6.5.2. We shall consider the positive real function

Z(s) = 1/2 + 1/(s + 1) + 1/s

which has a minimal realization

F = diag{-1, 0}     G = [[1],[1]]     H = [[1],[1]]     J = 1/2

Because Z(s) has a pole with zero real part, we shall consider separately the positive real functions

Z₀(s) = 1/2 + 1/(s + 1)     Z₁(s) = 1/s

For the former, we have as a minimal realization {-1, 1, 1, 1/2}. Now

Z₀(s) + Z₀(-s) = (3 - s²)/[(1 + s)(1 - s)]

We take as the spectral factor W(s) = (√3 + s)/(1 + s), whose inverse exists throughout Re[s] > 0. Now

W(s) = 1 + (√3 - 1)/(s + 1)

and so W(s) has a realization {-1, 1, √3 - 1, 1}. Thus L₀ = √3 - 1, W₀ = 1, and a solution P₀ of P₀F₀ + F₀'P₀ = -L₀L₀' is readily checked to be 2 - √3.
For Z₁(s), we have a minimal realization {0, 1, 1}, and the solution of P₁F₁ + F₁'P₁ = 0, P₁G₁ = H₁ is easily checked to be P₁ = 1.
Combining the individual results for Z₀(s) and Z₁(s), it follows that matrices P, L, and W₀ which work for the particular minimal realization of Z(s) quoted above are

P = diag{2 - √3, 1}     L = [[√3 - 1],[0]]     W₀ = 1
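As a check, the assembled triple may be substituted back into (6.5.1). The following Python fragment (ours) does this for the particular realization quoted above; all printed residuals are zero:

import numpy as np

# Example 6.5.2 assembled: Z(s) = 1/2 + 1/(s + 1) + 1/s, F = diag(-1, 0)
F = np.diag([-1.0, 0.0]); G = np.array([[1.0], [1.0]]); H = np.array([[1.0], [1.0]])
J = np.array([[0.5]])
P = np.diag([2 - np.sqrt(3), 1.0])        # P0 and P1 from the two subproblems
L = np.array([[np.sqrt(3) - 1], [0.0]])   # L0 stacked with a zero block, Eq. (6.5.21)
W0 = np.array([[1.0]])
print(P @ F + F.T @ P + L @ L.T)          # ~ 0
print(P @ G - (H - L @ W0))               # ~ 0
print(J + J.T - W0.T @ W0)                # ~ 0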
Problem 6.5.1: Verify that Eqs. (6.5.3) through (6.5.5) and the associated remarks define a spectral factor W(s) satisfying (6.5.2) and the additional properties listed following this equation.

Problem 6.5.2: Suppose that P satisfies the lossless positive real lemma equations PF + F'P = 0 and PG = H. From these two equations form a single equation of the form PF̂ + F̂'P = -ĤĤ', where [F̂, Ĥ] is completely observable. The positive definite nature of P implies that F̂ has all eigenvalues in the left half-plane, which means that the linear equation for P is readily solvable to give a unique solution.
6.6 FINDING ALL SOLUTIONS OF THE POSITIVE REAL LEMMA EQUATIONS

In this section we tackle the problem of finding all, rather than one, solution of the positive real lemma equations

PF + F'P = -LL'
PG = H - LW₀
J + J' = W₀'W₀          (6.6.1)

We shall solve this problem under the assumption that we know one solution triple P, L, W₀ of (6.6.1).
A general outline of our approach is as follows:
1. We shall show that the problem of solving (6.6.1) is equivalent to the problem of solving a quadratic matrix inequality (see Theorem 6.6.1 for the main result).
2. We shall show that, knowing one solution of (6.6.1), the quadratic matrix inequality may be replaced by another quadratic matrix inequality of simpler structure (see Theorem 6.6.2 for the main result).
3. We shall show how to find all solutions of this second quadratic matrix inequality and relate them to solutions of (6.6.1).

Throughout, we shall assume that J + J' is nonsingular.
This section contains derivations that establish the validity of the computational procedure, as distinct from forming an integral part of the computational procedure. They may therefore be omitted on a first reading of this chapter. The derivations falling into this category are the proofs of Theorem 6.6.1 and the corollary to Theorem 6.6.2.
The technique discussed here for solving (6.6.1) first appeared in [6].
A Quadratic Matrix Inequality

The following theorem provides a quadratic matrix inequality equivalent to Eqs. (6.6.1).

Theorem 6.6.1. With {F, G, H, J} a minimal realization of a positive real Z(s) and with J + J' nonsingular, solution of (6.6.1) is equivalent to the solution of

PF + F'P ≤ -(PG - H)(J + J')^{-1}(PG - H)'          (6.6.2)

in the sense that if {P, L, W₀} satisfies (6.6.1), then P satisfies (6.6.2), and if a positive definite symmetric P satisfies (6.6.2), then there exist associated real matrices L and W₀ (not unique) such that P, L, and W₀ satisfy (6.6.1). Moreover, the set of all associated L and W₀ is determinable as follows. Let N be a real matrix with minimum number of columns such that

PF + F'P + (PG - H)(J + J')^{-1}(PG - H)' = -NN'          (6.6.3)

Then the set of L and W₀ is defined by

L = [-(PG - H)(J + J')^{-1/2} ⋮ N]V
W₀' = [(J + J')^{1/2} ⋮ 0]V          (6.6.4)

where V ranges over the set of real matrices for which VV' = I, and the zero block in the definition of W₀' is chosen so that L and W₀' have the same number of columns.
Proof.* First we shall show that Eqs. (6.6.1) imply (6.6.2). As a preliminary, observe that W₀'W₀ is nonsingular, and so M = W₀(W₀'W₀)^{-1}W₀' ≤ I. For if x is an eigenvector of M, λx = Mx for some λ. Then λW₀'x = W₀'Mx = W₀'x, since W₀'M = W₀' by inspection. Hence either λ = 1 or W₀'x = 0, which implies Mx = 0 and thus λ = 0. Thus the eigenvalues of M are all either 0 or 1. The eigenvalues of I - M are therefore all either 1 or 0. Since I - M is symmetric, it follows that I - M ≥ 0 or M ≤ I. Next it follows that LML' ≤ LL', or -LL' ≤ -LML'. Then

PF + F'P = -LL' ≤ -LML' = -LW₀(W₀'W₀)^{-1}W₀'L' = -(PG - H)(J + J')^{-1}(PG - H)'

which is Eq. (6.6.2).
Next we show that (6.6.2) implies (6.6.1). From (6.6.2) it follows that

PF + F'P + (PG - H)(J + J')^{-1}(PG - H)' ≤ 0

and so there exists a real constant matrix N such that

PF + F'P + (PG - H)(J + J')^{-1}(PG - H)' = -NN'          (6.6.3)

We assume that N has the minimum number of columns, a number equal to the rank, q say, of the left-hand side of (6.6.3). This determines N uniquely to within right multiplication by an arbitrary real constant orthogonal matrix V.
Then in order that (6.6.1) hold, it is clearly sufficient, and as we prove below also necessary, that L and W₀' be given by (6.6.4), where V is any real constant matrix for which VV' = I, and the zero block in W₀' is chosen so that L and W₀' have the same number of columns. Note that V is not necessarily square.
To see that Eqs. (6.6.4) are necessary, observe from (6.6.1) that any L and W₀ satisfying (6.6.1) must satisfy

[[-(PF + F'P), PG - H],[(PG - H)', J + J']] = [[-L],[W₀']][-L' ⋮ W₀]          (6.6.5)

*This proof may be omitted in a first reading of this chapter.
FINDING ALL SOLUTIONS
SEC. 6.6
.
.
:
295
Let L , and W i , denote the values of L and Wi achieved i n
(6.6.4) by taking V = I. Now (6.6.5) implies that any L and
W; must have a number of columns equal to or greater than the
rank, call it p, of the left-hand side of (6.6.5); if L, and WA, have
p columns, then (6.6.4) will define all solutions of (6.6.1) for the
prescribed P as V ranges over the set of matrices for which VV'
= I,, with V possessing a number of columns greater than or
equal top.
Let us now check that L, and W,', do actually havepcolumns.
The following equality is easily verified:
+ P'P)
J')-1 -(PF
][(PG-If)'
-[PF
PG - H
J+J'
+ F'P + (PG - H)(J + J')-'(PG
I
- H)7 0
I
0
+
+
Therefore, the rank of the left side of (6.6.5) is rank NN'
m,
where m is the size of the square matrix J ; i.e., p = q m.
Now recall that N has a number of columns equal to the rank of
NN', viz., q. Then from (6.6.4) it is immediate that L, and W i ,
have a number of columns equal to q m = p, as required.
+
VVV
We wishto maketwo
concerning the above theorem; the first is
sufficiently important to be given status as a corollary. . . .
.
.
corollary; I f {P, L, W,] is a triple satisfying (6.6:1),.~is m x m ,
and W; hasm rows and columns, then P satisfies the inequality
(6.6.2) withequality..Conversely,ifpsatisfies (6.6.2) with equality,
among the family of L 'and W, satisfying together with P Eq.
(6.6.1), there exist L' and W, with m rows.
Proof is by immediate verification, Notice from the third of 9 s . . (6.6.1)
that it is never possible for W, to have'fewer than,m rows with J J' nonsingular.
The second point is this: essentially, Theorem 6.6.1 explains how to eliminate the matrices L and W , from (6.6.1) to obtain a single equation,
actually an inequality, for the matrix P. [Of course, it is logical to try to
replace the three equations involving three unknowns by a single equation,
although we have yet to show that this inequality is any simpler to soIve
+
296
COMPUTATION OF SOLUTIONS
CHAP. 6
than the original equations.] The inequality (6.6.2) has the general form
PAP+PB+B'P+C<O
and is apparently very difficult to solve. However, were the matrix C equal
to zero, one would expect by analogy with the scalar situation that solution
would be an easier task. In the next subsection we discuss such a simplification.
A Simpler Quadratic Matrix Inequality
Instead of attempting to solve (6.6.2) directly, we shall instead try
to k d all values of Q = P P, where P is a known solution of (6.6.2)
actually satisfying (6.6.2) with equality, and P is any other solution of the
inequality. The defining relation for Q is the subject of the next theorem
-
Theorem 6.6.2. Let a matrix P satisfy (6.6.2)with equality, and
let P be any other symmetric matrix satisfying the inequality
(6.6.2). Define the matrix Q by Q = P - P. Then Q satisfies
QE+ F Q I -QG(J+ J ~ G Q
(6.6.6)
E = F + G(J+ J ' ) - ~ ( ~ G - H ) '
(6.6.7)
where
Conversely, if Q satisfies (6.6.6) and P satisfies (6.6.2) with
equality, P = Q P satisfies (6.6.2). Finally, Q satisfies (6.6.6)
with equality if and only if P satisfies (6.6.2) with equality.
+
The proof of this theorem is straightforwad, proceeding by simple
manipulation. The details are requested in Problem 6.6.1.
Evidently, if we can find all solutions of the inequality (6.6.6), and if we
know one solution of (6.6.2) with equality, then we can completely solve the
positive real lemma equations (6.6.1). (Recall tfiat from Theorem 6.6.1 the
generation of all the possible Land W, associated with a given P is straightforward, knowing P.)
The previous sections of this chapter have discussed techniques for the
computation of a matrix P satisfying (6.6.2) with equality. In the next sub. .
section, we discuss the solution of (6.6.6).
Let us note now an interesting property, provab1e.using (6.6.6).
Corollary. Under the condition of Theorem 6.6.2, suppose that
P is taken a s that particular solution of (6.6.2) which satisfies
the equality and is the negative of the limit of the solution of
the associated Riccati equation. Then the eigenvalues of I? all
297
FINDING ALL SOLUTIONS
SEC. 6.6
have nonpositive real parts, and Q 2 0. Moreover, if
rf;
A:,
-
E=
where Fl has d l eigenvalues with *;m real parts and
1', all eigenvalues with negative real parts, and if Q is partitioned
, then
conformably as Q =
Q,, and Q , , are zero.
Proof.' Because P is the negative of the limit of the Riccati
equation solution, it follows, as we have noted earlier, that the
spectral factor family
where V'V = YY'= I is such that W-'(s) exists throughout
Re [s] > 0.
This means that E has all eigenvalues in Re [s] 5 0, because?
- +
-
+
det (ST [F G(J J')-'(PC - H)']]
= det (sf
F) det [ I - (sI - fl-'G(J $ J3-'(m - H)']
= det (sl F)det [I ( J J')-I(PG - H)'(sI - F)-'GI
- +
+
+
= det (s1- F)det (J J')-L/Z
det [(J J')l/Z- (J
X (PG - H)'(sI
- F)-'GI
= hdet (sI - F) det ( J
J')-l" det W(s)
+ J3-1/2
+
Thus eigenvalues of are either eigenvalues of F or zeros of
det W(s), although not necessarily conversely. Since both the
eigenvalues of F and the zeros of det W(s) lie in Re[s] 0, the
first part of the corollary is proved.
If the eigenvalues of E are in Re[s] < 0 rather than just
Re [s] 0, Eq. (6.6.6) implies immediately by the lemma of
Lyapunov that Q 2 0, since QP 4- F'Q < 0. The wnclusion
extends to the case when eigenvalues of E are in Re[s] I 0 as
follows.
There is no loss of generality in assuming that
<
<
with
pJpossessing pure imaginary poles, Fzpossessing poles in
*The proof may be omitted at a first reading of this chapter.
?The following equalities make use of the results det [I AB] = det [I f BAI and
det CD = det DC.
+
298
COMPUTATION OF SOLUTIONS
CHAP. 6
Re [s] < 0.[For there is clearly no loss of generality in replacing
an arbitrary k' by TFT-', provided we simultaneouslyreplace G
by TG and Q by (T-')'QT-~ for 'some nonsingular T-if after
these replacements (6.6.6) is valid, then before the replacements
it is valid, and conversely. Certainly there exists a T such that
Tf'T-Iis the direct sum of matrices with the eigenvalue restrictions of F, and Fz.] By the same argument as used above to justify
our special choice of I', it follows that there is no loss of generality
in considering I', skew.
Equation (6.6.6) can be written as
(6.6.8)
where
@:]G(J+
=
J')-l-'.
and
S=
is nonnegative definite symmetric. We shall show that Q,, and
Q,, are zero, and that Q,, 2 0. This will complete the proof of
all the remaining claims of the corollary.
Now consider the 1-1 block of this equation. The trace of the
1-1 block of the right side of the equation is nonpositive, since
the right side itself is nonpositive definite. Also
These equalities follow from the skewness of k',,and the fact that
tr (AB) = tr (BA) for arbitrary A, B for which AB and BA exist.
This means that S,,= 0 and Q,,G, Q,,G, = 0, for otherwise the trace of the 1-1 block on the right side would be negative.
With S,, = 0,it follows that S,, = 0 to ensure that S is nonnegative.
Now consider the 2-1 block, recalling that Q,,C, f Q,,G,
and S,, are zero. We have
+
Since El and F: have no eigenvalues in common, a standard
property of linear matrix equations implies that Q',, = 0. Since
SEC. 6.6
FINDING ALL SOLUTIONS
299
Q,,G, f QI,G, = 0, it follows that Q,,G, = 0. Now we can
argue that Q,, = 0. For the complete controllabjlity of [F, G]
implies complete controllability of [F- GK', GI for any K, and
thus of IF, q.In turn, this implies the complete controllability of
IF,, GI] and IF,, G,]. Now we have
Q,,Fl+F;Q,,=O
and
Q,,G,=O
It follows that
Q,,F,G,
= -F:Q,,G,
=
o
and generally
Complete controllability of l',,G, now implies that Q,, = 0.
Consequently, (6.6.8) becomes simply
for some nonnegative definite S,,,and with Fz possessing eigenvalues in Re [s] < 0. As we have already argued, Q,, 2 0.
Therefore, Q =
2 0, as required.
VV V
Solution of the Quadratic Matrix InequalityNonsingular Solutions
Now we shall concentrate on solving the inequality (6.6.6) for
Q. Our basic technique will be to somehow replace (6.6.6) by a linear equation, and in one particular instance this is very straightforward. Because it
may help to illustrate the general approach to be discussed, we shall present
this special case now.
We shall find all nonsingular solutions of (6.6.6). (Note: From the corollary
to Theorem 6.6.2 it follows that if possesses a pure imaginary eigenvatue,
there exist no nonsingular solutions. Therefore, we assume that all eigenvalues of l'possess negative real part.)
All nonsingular solutions of (6.6.6) are generated from all nonsingular
solutions of
where S ranges over the set of all nonnegative definite matrices. Because
we are assuming nonsingularity of Q, it follows that the set of nonsingular
300
COMPUTATION OF SOLUTIONS
CHAP. 6
solutions of (6.6.6) may be generated by solving
BQ-1 + Q - I F = -G(J+ J,)-IG' - Q - ~ s Q - ~
(6.6.9)
where S ranges over the set of all nonnegative definite matrices. But if S
ranges over the set of all nonnegative definite matrices, so does Q - ' S Q - ! .
Therefore, all nonsingular solutions of (6.6.6) are found by solving'
BQ-1
+ Q - I F ,= -G(J + J')-lGr - 3
(6.6.1 0)
where 9 ranges over the set of all nonnegative definite matrices. This equation
is linear in Q - ' , and procedures exist for its solution.
With the constraint on F that its eigenvalues all have negative real parts,
it is known from the lemma of Lyapunov that
This matrix is always nonsingular for the following reason. Observe that
If the matrix on the right side of the inequality is nonsingular, so is that on
the left. But the matrix on the right is nonsingular if (and only if) [1", is
completely controllable. Since [F, GI is completely controllable, and since
R = F - GK' for a certain K, [R, ,C;l must be completely controllable.
In summary, Eq. (6.6.11) defines, as S^ ranges over the set of allnonnegative
definite symmetric matrices, all no~singularsolutions to (6.6.6); for such to
exist, B musr have all negative real part eigenvalues.
a
Solution of the Quadratic Matrix InequalitySingular Solutions
To find singular solutions of (6.6.6), it is necessary to delve
deeper into the structure of E and to determine its eigenvalues. (Note: This
was not necessary to compute nonsingular solutions.)
For convenience we suppose that all eigenvalues of F with negative real
parts are distinct, and that F is diagonalizable. Then there exists a matrix T
such that
FINDING ALL SOLUTlONS
SEC. 6.6
301
where EZ has purely imaginary eigenvalues and is skew. In this equation
-f- denotes direct sum, and A,, . ,A,, p,+,,. . ,p, are real positive numbers.
With & = T'QT and 6 = T-'G(J
J')-'", the inequality (6.6.6) is
equivalent to
..
.
+
G E + PG -&6@&
(6.6.12)
As we have already noted, [F, GI is completely controllable. It follows easily
from this fact that [P, is also completely controllable.
We have already shown in the corollary to Theorem 6.6.2 that certain
parts of & must be zero. In particular, if we write
so that p, contains negative real part eigenvalues only, and f2zero real part
eigenvalues only, and if we partition & as
so that &,, has the same dimension as pi?,,
then &,, and
Therefore, we can restrict attention to the equation
&,
are zero.
where PI has negative real part eigenvalues, 6, is defined in an obvious
fashion, and [f,,
6,]is completely controllable because [f,61is completely
controllable.
Let US drop the subscripts on (6.6.13), to write again
where we now understand that 3 has all negative real part eigenvalues, and
of course [E, 61is completely controllable.
The inequality (6.6.12) can be rewritten as
where S is an arbitrary nonnegative definite symmetric matrix.
Now observe that if x is a vector such that &x = 0, then Sx = 0 and
@x = 0 for all i. For multiplication of (6.6.14) on th$ left by x' and on the
right by x establishes that x'Sx = 0, and thus Sx = 0.Multiplication on the
right by x then establishes that &fx = 0, and dfi'x
.. = 0 follows by iterating
the argument.
302
COMPUTATION OF SOLUTIONS
CHAP. 6
The fact that &x = 0 implies that &glx = 0, for all i, implies that the
vector x must have some zero entries if @ is not identically zero. For if x
has no zero entry, the set of vectors x, fix, P3x, . . . ,spans the whole space
of vectors of dimension equal to the size of 0 [13]; then we would have to
have & = 0.
NOWlet x,, . . .,x, be vecJors spanning the null space of 0. By reordering
rows and columns of 0, j,G, and S if necessary, we may suppose that the
first r > 0 entries of x,, . .,x, are zero, while at least one of x,, . . . ,x,
has an (r 4- s)th entry nonzero for each s greater than zero.
Let q be the dimension of &. It may be checked that the set x,, fix,,
,
x,, l%
.,
. ,,. . .,x,, Fxm,. .,spans the space of vectors whose first r
entries are zero and whose last (q - r) entries can be arbitrary. This space
is identical with the nullspace of &. Therefore, the last (g - r) columns of
& and S are zero. The symmetry of @ and S then implies that the last (q - r)
rows are zero, so that (6.6.14) becomes
.
.
.. .
.
A
dip,
+ E:Q, -a,(~e.),a,- S,
=
(6.6.15)
where the subscript 1 denotes deletion of the last (q - r) rows and columns
of the ma* with which it is associated. The matrix 0, must be nonsingular,
or else we would have a contradiction of the fact that x,, . . . ,x, span the
nullspace of
Notice that F, will be a direct sum of blocks of the type [-dl and
G,.-
for 2, real and positive; i.e., the blocks making up F,are a
[subset of3
the blocks making up fi. It is not hard to check that it is impossible
p
for a block of the form
[I -11
in the group making up fi to be "split,"
so that part of the block is included in i?, and part not.
The solution of (6.6.15) with 0 , known to be nonsingular is straightforward. Following the earlier theory for constructing all nonsingular solutions to the original inequality, we note that solution of (6.6.15) for all
nonnegative definite S, is equivalent to solution for all nonnegative definite
s^, of
f2&l
+ &;I$'
,- -
- gl
(6.6.16)
That this equation always has a nonsingular solution @il
is readily checked.
Thus dl can be formed.
The corresponding solution of (6.6.12) is
FINDING ALL SOLUTIONS
SEC. 6.6
303
The above analysis implicitly describes how to compute all singular solutions of (6.6.12), without determining vectors x such that &x = 0. One
simply drops blocks from the direct sum making up I?, and at the same time
constructs a submatrix of GG' by dropping the same rows and columns as were
dropped from f. Denote by p, the matrix
with certainAocks dropped
(equivalently, certain rows and columns dropped). Denote by (GG'), the matrix
&& with corresponding rows and columns dropped. Then Egs. (6.6.15) and
(6.6.16) yield a wholef m i l y of solutions to (6.6.12), or equivolenrly the original
(6.6.6). All solutions are obtained by dropping different blocks of pandproceeding in the above manner.
Properties of the Solutions
In this subsection we wish to note briefly properties of the solutions P of (6.6.1). We shall describe these properties in terms of solutions
to (6.6.6).
As we have already noted in the corollary to Theorem 6.6.2, there is a
minimum solution to (6.6.1), which is the matrix (termed P i n this section)
associated with that family of spectral factors whose inverse exists in
Re [s] > 0.
It is also possible to write down the maximum solution to (6.6.1). For
simplicity, suppose that E has eigenvalues restricted to Re[s] < 0. Then
there exist nonsingular solutions to (6.6.61, one of them being defined by
the corresponding
If Q denotes this particular solution and P = P f
solution of (6.6.I), then P is the maximum solution of (6.6.1); i.e., if P is
any other solution, P - P 2 0. This is proved in [6]. The spectral factor
family generated by I' 1s interesting; it has the property that the inverse of
a member of the family exists throughout Re [s] < 0. (Problem 6.6.2 asks
for a derivation of this fact. See also [6].)
As noted in the last subsection, solutions of (6.6.6) fall into families, each
family derivable by deleting certain rows and columns in the matrix we have
termed 2. There is a maximum member for each family computed as follows.
The general equation determining a family is of the form
where El is Ewith some of its rows and columns deleted a n d s , is nonnegative
definite. The smallest 4;' satisfying (6.6.16) is easily checked to be that
304
COMPUTATION O F SOLUTIONS
CHAP. 6
obtained by setting 3,= 0, since in general
Therefore, the 4, derived from 3,= 0 will be the largest, or maximum,
member of the associated family.
The associated Q can be checked t o satisfy (6.6.6) with equality, so the
maximum members of the families are precisely the solutions of (6.6.6) with
the equality constraint.
Example We wish to illustrate the preceding theory with a very simple example.
We take Z(s) = 4 l / ( s 1). for which [-I, I , 1, 41 is a minimal
Then
realization. As we have calculated earlier, P = 2 -
+
6.6.1
+
a.
and the inequality for Q becomes
There is only one family of solutions. The maximum member of that
family is obtained from a nonsingular Q satisfying
That is,
Q=2,/3
Any Q in the interval [O, 21r57 satisfies the inequality.
P = 2 f l ,and (6.6.1) yields L = - 1
With Q =
The associated spectral factor is
v, +
- 0.
and is evidently maximum phase.
If, for the sake of argument, we take Q = JT,
then P becomes 2.
It follows that
PF
+ F'P + (PG- H)(J + J?-'(PC - H)'
=
-3
so that, following (6.6.3) and the associated remarks, N = ,/T. The
associated family of spectral factors is defined [see (6.6.4)] by
SEC. 6.6
FINDING ALL SOLUTIONS
305
where Vis an arbitrary matrix satisfying VV' = I. It follows that
Observe that, as required,
This example emphasizes the fact that spectral factors W(s) readily arise
that have a nonminimal number of rows.
Problem Prove Theorem 6.6.2.
6.6.1
Problem Suppose that P is that particular solution of (6.6.2) which satisfies the
equality and is the negative of the limit of a Riccati equation solution.
Suppose also that the eigenvalues of E all have negative real parts. Let
0 be thenonsingularsolution of QP P'Q = -QG(J J')-'G'Q and
letP = P f 0. Show that if W(s)isamember ofthespectralfactorfamily
defined by P, then det W(s) is nonzero in Re [s] < 0. [Thus W(s) is a
"maximum phase" spectral factor.]
6.6.2
+
+
Problem Find a general formula for all degree 1 spectral factors associated with
1 f l / ( s 1) by extending the analysis of Example 6.6.1 to cope with the
case in which Q is arbitrary, within the interval [O, 2 f l l .
Problem Study the generation of solutions of the positive real lemma equations
6.6.4
with Z(s) = (s2 3.5s f 2)(s2 3s f2)-'.
Problem Show that the set of matrices P satisfying the positive real lemma equa6.6.5
tions for a prescribed positive real Z(s) is convex, i.e. if PI and P2satisfy
the equations, so does U P , (1 - a)P2 for all a in the range [O, 11.
Problem Show that the maximum solution of
6.6.3
+
+
+
+
6.6.6
PF+FIP t (PG-HXJ+J'-'(PG
-H)'=O
306
.
CHAP.. 6
COMPUTATION OF SOLUTIONS
is the inverse of the minimum solution of
PF + FP + ( P H - G)(J+ J')-'(?H
- c)' = o
What is the significance of this fact?
REFERENCES
[ 1 I J. B. MWREand B. D. 0. ANDERSON,
"Extensions of Quadratic Minimization
Theory, Parts I and 11," International Journai of Control, Vol. 7, No. 5, May
1968, pp. 465480.
[ 2 ] B. D. 0. ANDERSON,
''Quadratic Minimuation, Positive Real Matrices and
Sptxtral Factorization," Tech. Rept. EE-6812, Department of ElecfricalEngineering, University of Newcastle, Australia, June 1968.
[ 3 ] B. D. 0. ANDERSON,
"An Algebraic Solution to the Spectral Factorization
Problem," Transactionsof the IEEE on Automatic Control, Vol. AC-12, No. 4,
Aug. 1967, pp. 410-414.
141 B. D. 0. ANDERSON,
"A System Theory Criterion for Positive Real Matrices,"
SIAMJownol of Confrol,V0l. 5, NO. 2, May 1967, pp. 171-182.
[ 5 ] D. C. YOULA,"On the Factorization of Rational Matrices," IRE Transactions
on Znformafion Theory, Vol. IT-7, No. 3, July 1961, pp. 172-189.
[ 6 ] B. D. 0.ANDERSON,
"The Inverse Problem of Stationary Covariance Generation," J o u r ~ of
l Statistical Physics, Vol. 1, No. 1, 1969, pp. 133-147.
[ 7 ] 3. J. LWIN, "On the Matrix Riccati Equation," Proceedings of the American
Mathematical Society, Vol. 10, 1959, pp. 519-524.
[S ] W. T. REID,"A Matrix Differentla1Equation of the Riccati Type," American
Journal of Mafhematics, Vol. 68, 1946, pp. 237-246.
[ 9 J J. E. POTTER,"Matrix Q~adraticSolutions,"Journalof the Societyfor Industrial
and Applied Mathematics Vol. 14, No. 3, May 1966, pp. 496-501.
1101 A. G. I. MACFARLANE,
"An Eigenvector Solution of the Optimal Linear
Regulator," Journal of Electronics and Control, Vol. 14, No. 6, June 1963, pp.
643-654.
[Ill K.L. HITZand B. D. 0. ANDERSON,
"The Relation between Continuous and
Discrete Time Quadratic Minimization Problems," Proceedings of the I969
Hawaii Infernotional Conference on Systems Science, pp. 847450.
1121 F. R. OANTMCHER,
The Theory of Matrices, Chelsea, New York, 1969.
1131 R. E KALMAN,
"Mathematical Description of Linear Dynamical Systems,"
Jonrnal of the Societyfor Industrial and Applied MafhemaficsControl, Series A,
Vol. 1, No. 2, 1963, pp. 152-192.
Bounded Real Matrices
and the Bounded Real Lemma;
the State-space Description
of Reciprocity
We have two main aims in this chapter. First, we shall discuss
bounded real matrices, and present an algebraic criterion for a rational
matrix to be bounded real. Second, we shall present conditions on a minimal
state-space realization of a transfer-function matrix for that transfer-function
matrix to be symmetric. In later chapters these results will be put to use in
synthesizing networks, i.e., in passing from a set of prescribed port characteristics of a network to a network possessing these port characteristics.
As we know, an m x m matrix of real rational functions S(s) of the
complex variable s is bounded real if and only if
1. All elements of S(s) are analytic in Re [s] 2 0.
2. I - S'*(s)S(s) is nonnegative definite Hermitian in Re [s] > 0, or
equivalently, I - S'*[jw)S(jm) is nonnegative definite Hermitian for aU
real o.
Further, S(s) is lossless bounded real if it is bounded real and
3. I - S(-s)S(s) = 0 for all s.
These conditions are analytic in nature; the bounded real Iemma seeks to
replace them by an equivalent set of algebraic conditions on the matrices
of a state-space realization of S(s). In Section 7.2 we state with proofs both
the bounded real lemma and the lossless bounded real lemma, which are of
308
BOUNDED REAL MATRICES
CHAP. 7
course counterparts of the positive real lemma and lossless positive real
lemma, being translationsinto statespace terms of the bounded real property.
Just as there arises out of the positive real lemma a set of equations that we
are interested in solving, so arising out of the bounded real lemma there is
a set of equations, which again we are interested in solving. Section 7.3 is
devoted to discussing the solution of these equations. Numerous parallels
with the earlier results for positive real matrices will be observed; for example,
the equations are explicitly and easily solvable when derived from a lossless
bounded real matrix.
In Section 7.4 we turn to the question of characterizing the symmetry of
a transfer-function matrix in terms of the matrices of a state-space realization
of that matrix. As we know, if a network is composed of passive resistors,
inductors, capacitors, and transformers, but no gyrators, it is reciprocal,
and immittance and scattering matrices associated with the network are
symmetric. Now in our later examination of synthesis problems we shall be
interested in reciprocal and nonreciprocal synthesis of networks from port
descriptions. Since reciprocal syntheses can only result from transfer-function-matrix descriptions with special properties, in general, the property of
symmetry, and since our synthesis procedures will be based on state-space
descriptions, we need a translation into state-space terms of the property
of symmetry of a transfer-function matrix. This is the rationale for the
material in Section 7.4.
Problem Let S(s) s) a scalar 10ssIess bounded real function of the form &s)/q(s)
for relatively pn'mepolynomialsp(s) andq(s). Show that S(s) is an all-pass
function, i.e., has magnitude unity for all s =jw, w real, and has zeros
that are dections in the right half-plane Re [s] > 0 of its poles.
7.1.1
Statement and Proof of the Bounded Real Lemma
We now wish to reduce the bounded real conditions for a real
rational S(s) to conditions on the matrices of a minimal state-space realization of S(s). The lemma we are about to present was stated without proof
in [I], while a proof appears in 121. The proof we shall give differs somewhat
from that of [2], but shares in common with [2] the technique of appealing
to the positive real lemma at an appropriate part of the necessity proof.
Bounded Real Lemma. Let S(s) be an m x m matrix of real
rational functions of a complex variable s, with S(M) < W , and
let (F,G, H,J ) be a minimal realization of S(s). Then S(s) is
bounded real if and only if there exist real matrices P, L, and Wo
with P positive definite symmetric such that
309
THE BOUNDED REAL LEMMA
SEC. 7.2
Note that the restriction S(m) < m is even more inessential here than the
corresponding restriction in the positive real lemma: If S(s) is to be bounded
real, no element may have a pole in Re [d 2 0,which, inter alia, implies that
no element may have a pole at s = m ; i.e., S(m) < 00.
Proof of Sufficiency. The proof is analogous to that of the
positive real lemma. First we must check analyticity of all elements of S(s) in Re [sl> 0.This follows from the first of Eqs.
(7.2.1), for the positive definiteness of P and the complete obsewability of [F, H] guarantee by the lemma of Lyapunov that
Re A,[F] < 0 for all i. This is the same as requiring analyticity of
all elements of S(s) in Re [s] 2 0.
Next we shall check the nonnegative definite Hermitian nature
of I - S8(jw)S(jo)= I - S'(-jw)S(jw). We have
x ( j w l - F)-'G
+
F) (-jwl - F')P]
using the first of Eqs. (7.2.1)
= G(-jol - F')-'[P(jwl-
- F')-IPG + G P ( j o 1 - F)-'G
= -G(-jwI - F')-'(HJ + LW,) - (HJ+ LW,)'
x ( j o I - F)-'G
using the second of eqs. (7.2.1)
= G'(-joI
Now rewrite this equality, using the third of Eqs. (7.2.1),as
+
I - J'J - WLW, - G ( - j o I - F')-'(HJ LW,)
- (HJ LW,)'(joI - F)-'G
- G'(-jcd- F')"(HH'
LL')(jol- F)-'G = 0
+
+
By setting
W(s) = W,
it follows that
as required.
VV V
+ L'(sI-
F)-'G
(7.2.2)
310
CHAP. 7
BOUNDED REAL MATRICES
Before turning to a proof of necessity, we wish to make one important
point. The above proof of sufficiencyincludes within it the definition of
a transfer-function matrix W(s), via (7.2.2), formed with the same F and G
as S(s), but with the remaining matrices in its realization, L and W,, defined
from the bounded real lemma equations. Moreover, W(s) satisfies (7.2.3),
and, in fact, with the most minor of adjustments to the argument above, can
be shown to satisfy Eq. (72.4) below. Thus W(s) is a spectral factor of
I - S'(-s)S(s). Let us sum up these remarks, in view of their importance.
Existence of Spectral Factor. Let S(s) be a bounded real
matrix of real rational functions, and let [F, G, H, J ] be a minimal
realization of S(s). Assume the validity of the bounded real lemma
(despite necessity having not yet been proved). In terms of the
matrices L and Wo mentioned therein, the transfer-function
matrix W(s) = W o L1(sI - F)-'G is a spectral factor of
I - S'(-s)S(s) in the sense that
+
The fact that there are many W(s) satisfying (7.2.4) suggests that we should
expect many triples P, L, and W, to satisfy the bounded real lemma equations.
Later in the chapter we shall discuss means for their determination.
Proof of Necessity. Our strategy will be to convert the bounded
real constraint to a positive real constraint. Then we shall apply
the positive real lemma, and this will lead to the existence of
P, L, and W, satisfying (7.2.1).
First, define P, as the unique positive definite symmetric solution of
(7.2.5)
P,F F'P, = -HH'
+
The stated properties of P , follow from the complete observability of [F, HI and the analyticity restriction on the bounded
real S(s), which implies that Re AdF] < 0.
Equation (7.2.5) allows a rewriting of I - S(-s)S(s):
I - S(-s)S(s)
= (I - J'J)
- (HJ)'(sI
- F)-'G
-Gf(--SI - F')-'HJ
- F ' - I HH'(s1- F)-IG
= (I - J'J) - (HJ + P,G)'(sI - F)-'G
- G'(-sI - F1)-'(NJ + P, G)
(7.2.6)
- G'(-sI
where we have used an argument in obtaining the last equality,
THE BOUNDED REAL LEMMA
SEC. 7.2
311
which should be by now familiar, involving the rewriting of HH'
as P,(sI - F ) (-sI - F3P,. Now define
+
Then Z(s) has all elements analytic in Re [s] 2 0,and evidently
Z(s) Z'(-s) = I - S'(-s)S(s), which is nonnegative dehite
Hermitian for all s =jw, o real, by the bounded real constraint.
It follows that Z(s) is positive real. Moreover, IF,G,
-(HJ P,G), (I - .7'J)/2) is a completely controllable realization of Z(s). This realization of Z(s) is not necessarily completely
observable, and so the positive real lemma is not immediately
applicable; a modification is applicable, however. This modification was given in Problem 5.2.2, and also appears as a lemma
in [3]; it states that there exist real matrices P,, L, and W. with
P, nonnegative dejnite symmetric satisfying
+
+
P,F
+ F'P, = -LL1
+
PzG = -(HJ
P,G)- LW,
I - T J = w;w,
(7.2.8)
+
Now set P = P, P,; P is positive dehite symmetric because
P, is positive definite symmetric and P, is nonnegative definite
symmetric. Combining the first of Eqs. (7.2.8) with (7.2.5) we
obtain
PF+ F'P= - H Z - L C
while
-PG = H J + LW,
I - J'J= WLW,
are immediate from (7.2.8). These are precisely the equations of
the bounded real lemma, and accordingly the proof is complete.
vvv
Other proofs not relying on application of the positive real lemma are of
course possible. In fact, we can generate a proof corresponding to each
necessity proof of the positive real lemma. The problems at the end of this
section seek to outline development of two such proofs, one based on a
network synthesis result and energy-balance arguments, the other based on a
quadratic variational problem.
312
BOUNDED REAL MATRICES
CHAP. 7
+
Example Let S(s) = s/(s
1). It is easy to verify that S(s) has a minimal realization
7.2.1
( - ] , I , -1, 1). Further, one may check that Eqs. (7.2.1) are satisfied
with P = 1, L = 1, and W, = 0. Thus S(s) is bounded real and
is such that I
- S(-s)S(s)= W(-s)W(s).
The Lossless Bounded Real Lemma
The lossless bounded real lemma is a specialization of the
bounded real lemma to lossless bounded real matrices. Recall that a lossless
bounded real matrix is a bounded real matrix S(s) satisfying the additional
restriction
The lemma is as follows.
Lossless Bounded Real Lemma. Let S(s) be an m X m
matrix of real rational functions of a complex variable s, with
S(w) < w , and let {F, G, H, J)be a minimal realization of S(s).
Then S(s) is lossless bounded real if and only if there exists a
(real) positive definite symmetric matrix P such that
PF
+ F'P = -HH'
Proof. Suppose first that S(s) is lossless bounded real. By the
bounded rea1 lemma and the fact that S(s) is simply bounded real,
there exist matrices P,L, and W,satisfying the regular bounded
real lemma equations and such that
where
W(s) = W,
+ L'(sI - F)-'G
(7.2.2)
Applying the lossless constraint, it follows that W'(-s)W(s) = 0,
and setting s = jo we see that W(ja) = 0 for all o.Hence W(s)
= 0 for all s, implying W, = 0, and, because [F,G] is completely
controllable, L = 0.When W, = 0 and L = 0 are inserted in the
regular bounded real lemma equations, (7.2.10) result.
Conversely, suppose that (7.2.10) hold. These equations are
SEC. 7.2
THE BOUNDED REAL LEMMA
313
a special case of the bounded real lemma equations, corresponding to W,= 0 and L = 0, and imply that S(s) is bounded real
and, by (7.2.2) and (7.2.4), that I - S'(-s)S(s) = 0 ; i.e., S(s) is
lossless bounded real.
VVV
Example Consider S(s) = (s - l)/(s+- I), which has a minimal realization
7.2.2
(-1.1, -2, I). We see that (7.2.10) are satisfied by taking P = [2], a
positive definite matrix. Therefore, S(s) is lossless bounded real.
Problem Using the bounded real lemma, prove that S(s) is hounded real if S(s)
is bounded real, and louless bounded real if S(s) is lossless bounded real.
7.2.1
Problem This problem requests a necessity proof of the bounded real lemma via
an advanced network theory result. This result is as follows. Suppose
that S(s)is bounded real with minimal statespace realization[F, G, H,J).
Then there exists a network of passive components synthesizing S(s),
where, in the statespace equations i= Fx f Gu, y = H'x f Ju, one
i)),v and i being port voltage and current, and
can identify u = $(v
y = t(u - i). Check that the instantaneous power flow into the network
is given by u'u - y'y. Then proceed as in the corresponding necessity
proof of the positive real lemma to genemte a necessity proof of the
bounded real lemma.
7.2.2
+
Problem This problem requests a necessity proof of the bounded real lemma via
a quadratic variational problem. The timedomain statement of the fact
that S(s) with minimal realization (F,G, H, J ] is hounded real is
7.2.3
for all I, and u ( . ) for which the integrals exist. Put another way, if x(0)
= 0 and i Fx f Gu,
i
.
for all t , and u(.) for which the integrais exist.
To set up the variational problem, i t is necessary to assume that
I - I'J = R is positive definite, as distinct from merely nonnegative
definite. The variational problem becomes one of minimizing
+
subject to i= Fx Gu, x(0) = x..
(a) Prove that the optimal performance index is bounded above and
below if S(s) is bounded real, and that the optimal performance index
is x!,lT(O, t,)x,, where II(., ti) satisfies a certain Riccati equation.
314
Problem
7.2.4
BOUNDED REAL MATRICES
CHAP. 7
(b) Show that lim,,,, n(t,t , ) exists, is independent of 1, and satislies
a quadratic matrix equation. Generate a solution of the bounded real
lemma equations.
Let S(s) =
2). Show from the bounded real definition that
S(s) is bounded real, and find a solution of the bounded real lemma
equations.
+
7.3
SOLUTION OF THE BOUNDED REAL
LEMMA EQUATIONS*
In this section our aim is to indicate how triples P, L,and W,
may be found that satisfy the bounded real lemma equations
We shall consider fust the special case of a lossless bounded real matrix,
corresponding t o L and W,both zero. This case turns out subsequently to
& important and is immediately dealt with. Setting L and W, to zero in
(7.3.1), we get
PF+ F'P= -HH'
-PG = HJ
Because Re AXF)
< 0 and [F, HI is completely observable, the first equation
is solvable to yield a unique ~bluti011P. The second equation has to automatically be satisfied, while the third does not involve P. Thus solution of
the lossless bounded real lemma equations is equivalent to solution of
We now turn to the more general case. We shall state most of the results
as theorems without proofs, the proofs following closely on corresponding
results associated with the positive real lemma equations.
A Special Solution via Spectral Factorization
The spectral factorization result of Youla [4], which we have
referred to previously, will establish the following result.
*Thissection may be omitted at a first reading.
SEC. 7.3
SOLUTION OF THE BOUNDED REAL EQUATIONS
315
Theorem 7.3.1. Consider the solution of (7.3.1) when {F, G,H,
J) is a minimal realization of an m x m bounded real matrix S(s).
Then there exists a matrix W(s) such that
I - s-(-s)S(s) = w y - s ) W(s)
(7.3.3)
with W(s)computable by procedures specified in [4].Moreover,
W(s) satisfies the following properties:
1. W(s)is r x m,where r is the normal rank of I - S(-s)S(s).
2. All entries of W(s) are analytic in Re [s] 2 0.
3. W(s)has rank r throughout Re [s] > 0; equivalently, W(s)
possesses a right inverse whose entries are analytic in Re [s]
> 0.
4. W(s)is unique to within left multiplication by an arbitrary
r x r real orthogonal matrix.
5. There exist matrices W, and L such that
W(s)= W ,
+ L'(sI - F)-'G
(7.3.4)
with L and W, readily computable from W(s),F, and G.
The content of this theorem is the existence and computability via a technique discussed in I41 of a certain transfer-function matrix W(s). Among
various properties possessed by W(s) is that of the existence of the decomposition (7.3.4).
The way all this relates to the solution of (7.3.1) is simple.
+
-
Theorem 7.3.2. Let W(s)= W , L'(sI F)-'G be as descrjbed in Theorem 7.3.1. Let P be the solution of the first of
Eqs. (7.3.1).Then the second and third of Eqs. (7.3.1) also hold;
i.e., P, L, and W , satisfy (7.3.1). Further, P is positive definite
symmetric and is independent of the particular W(s)of the family
described in Theorem 7.3.1.
Therefore, the computations involved in obtaining one solution of (7.3.1)
break into three parts: the determination of W(s), the determination of L
and W,, and the determination of P. While P is uniquely defined by this
procedure, L and W , are not; in fact, L and W,may be replaced by LV'
and VW. for any orthogonal V.
In the case when W(s) is scalar, the computations are not necessarily
difficult; the "worst" operation is polynomial factorization. In the case of
matrix W(s),however, the computational burden is much greater. For this
reason we note another procedure, again analogous to a procedure valid for
solving the positive real lemma equations.
31 6
CHAP. 7
BOUNDED REAL MATRICES
A Special Solution via a Riccati Equation
A preliminary specialization must be made before this method is
applied. We require
(7.3.5)
R=I-J'J
to be nonsingular. As with the analogous positive real lemma equations, we
can justify imposition of this restriction in two ways:
1. With much greater complexity in the theory, we can drop the restriction. This is essentially done in [5].
2. We can show that the task of synthesizing a prescribed S(s)-admittedly
a notion for which we have not yet given a precise definition, but certainly the main application of the bounded real lemma-for the case
when R is singnlar may be reduced by simple transformations, described
in detail subsequently in our discussion of the synthesis problem, to the
case when R is nonsingular.
In a problem at the end of the last section, the reader was requested to
prove Theorem 7.3.3 below. The background to this theorem is roughly as
follows. A quadratic variational problem may be defined using the matrices
F, G, H, and J of a minimal state-space realization of S(s). This problem is
solvable precisely when S(s) is bounded real. The solution is obtained by
solving a Riccati equation, and the solution of the equation exists precisely
when S(s) is hounded real.
Theorem 7.3.3. Let {F, C, H, J } be a minimal realization of a
bounded real S(s), with R = I - J'J nonsingular. Then II(t, t , )
exists for all t < t , , where
-E
dt
=n(F
+ GR-~J'H?+ (F 3- G R - ~ J ' H ~ ' ~
- I ~ G R - ~ G ' I-I HH' - HJR-'J'H'
(7.3.6)
and II(t,, t,) = 0. Further, there exists a constant negative
definite symmetric matrix i=f given by
R = am
n ( t , t , ) =!mII(t,
*,-
(7.3.7)
t,)
a satisfies a limiting version of (7.3.6), viz.,
R(F+ G R - I J ' +
~ (F + CR-~J'HO'Z~
-i=fc~-~~'ii
and
- HH' - HJR-'J'H'
The significance of
i=f
=0
(7.3.8)
is that it defines a solution of (7.3.1) as follows:
SEC. 7.3
SOLUTION OF THE BOUNDED REAL EQUATIONS
ii
Theorem 7.3.4. Let
317
be as above. Then P = -R, L =
(itG - HJ)R-I/2, and W, = R'" are a solution triple of(7.3.1).
The proof of this theorem is requested in the problems. It follows simply
from (7.3.8). Note that, again, L and W, may be replaced by LV' and VW,
for any orthogonal V.
With the interpretation of Theorem 7.3.4, Theorem 7.3.3 really provides
two ways of solving (7.3.1); the first way requires the solving of (7.3.6)
backward in time till a steady-state solution is reached. The second way
requires direct solution of (7.3.8).
We have already discussed techniques for the solution of (7.3.6) and (7.3.8)
in connection with solving the positive real lemma equations. The ideas are
unchanged. Recall too that the quadratic matrix equation
,Y(F
+ GR-'J'H') + (F + GR-'SH')'X - XGR-IG'X
- HH'
- HJR-'SH'
=0
(7.3.9)
in general has a number of solut~ons,only one of which is the limit of the
associated Riccati equation (7.3.6). Nevertheless, any solution of (7.3.9) will
define solutions of (7.3.1), since the application of Theorem 7.3.4 is not
dependent on using that solution of (7.3.9) which is also derivable from the
Riccati equation.
The definitions of P, L,and W, provided by Theorem 7.3.4 always generate
a spectral factor W(s) = W,
L'(s1- F)-'G that is of dimension m x m;
this means W(s) has the smallest possible number of rows.
Connection of Theorems 7.3.2 and 7.3.3 is possible, as for the positive
real lemma case. The connection is as follows:
+
Theorem 7.3.5. Let P be determined by the procedure outlined
in Theorem 7.3.2 and
7.3.3. Then P = -ii.
Ti by the procedure outlined in Theorem
In other words, the spectral factor generated using the formulas of Theorem
7.3.4, when I
3 is determined as the limiting solution of the R i m t i equation
(7.3.6), is the spectral factor defined in [4] with the properties listed in
Theorem 7.3.1 (to within Left multiplication by an arbitrary m X m real
orthogonal matrix).
+
2), which can be verified to be bounded real.
Example Consider S(s)
Let us first proceed via Theorem 7.3.1 to compute solutions to the
bounded real lemma equations. We have
7.3.1
318
BOUNDED REAL MATRICES
+
CHAP. 7
+
We see that W(s) = ( s a
s 9 - 1 satisiiesallthe conditions listed
in the statement of Theorem 7.3.1. Further, a minimal realization for S(s)
is ( - 2 , I , f l ,01, and for W(s) one is readily found to be ( - 2 , 1 ,
-2
1). The equation for P is simply
+a,
As Theorem 7.3.2 predicts, we can verify simply that the second and
third of Eqs. (7.3.1) hold.
i
is
Let us now follow Theorem 7.3.3. The R i c ~ t equation
from which
-(z - JQ1
a c t , r , ) = 1 - [(2 - 2 / 2 ) / ( 2
eZfiLzl-01
+-JT)lezfi~fl-t)
Clearly n ( t , r,) exists for all t 5 t , , and
il =,--lim n(t,t , ) = -(2 - 0)
This is the correct value, as predicted by Theorem 7.3.5. We can also
solve (7.3.1) by using any solution of the quadratic equation (7.3.8).
which here becomes
a),
+a,
One solution is -(2 as we have found. The other is
-(2 + 0.
This leads to P = 2
L = -2 - f l ,while still
W. = 1. The spectral factor is
So far, we have indicated how to obtain special solution P of (7.3.1) and
associated L and W,; also, we have shown how to find a limited number of
other P and associated L and W, by computing solutions of a quadratic
matrix equation differing from the limiting solution of a Riccati equation.
Now we wish to find all solutions of (7.3.1).
SEC. 7.3
SOLUTION OF THE BOUNDED REAL EQUATIONS
319
General Solution of the Bounded Real Lemma
Equations
We continue with a policy of stating results in theorem form
without proofs, on account of the great similarity with the analogous calculations based on the positive real lemma. As before, we convert in two steps
the problem of solving (7.3.1) to the problem of solving a more or less
manageable quadratic matrix inequality.
We continue the restrictionthat R = I - J'Jbe nonsingular. Then we have
Theorem 7.3.6. Solutionof (7.3.1) is equivalent, in a sense made
precise below, to the solution of the quadratic matrix inequality
PF $- F'P
-HH' - (PG
+ HJ)R-'(PG + HJ)'
(7.3.10)
which is the same as
+ +
P(F+ GR-'J'H')
(F GR-IJ'H')'P
$- PGR-'G1P
HH' 4- HJR-'J'H'
+
<0
(7.3.11)
If P,L,and W, satisfy (7.3.1), then P satisfies (7.3.10). If P
satisfies (7.3.10), then P together with some matrices L and W,
satisfies (7.3.1). The set of all such L and W, is given by
where V ranges over the set of real matrices for which YV' = I,
the zero block in the definition of W; contains the same number
of columns as N, and N is a real matrix with minimum number
of columns such that
PF + F'P
= -HH'
- (PG + HJ)R-'(PC + HJ)'
- NN'
(7.3.13)
A partial proof of this theorem is requested in the problems. Essentially,
it says that P satisfies (7.3.1) if and only if P satisfies a quadratic matrix
inequality, and L and W,can be computed by certain formulas ifpis known.
Observe that the matrices P satisfying (7.3.10) with equality are the negatives of the solutions of (7.3.9). In particular, P = -Zt satisfies(7.3.10) with
equality, where ii is the limiting solution as t approaches minus iniinity of
the Riccati equation (7.3.6).
The next step is to replace (7.3.10) by a more manageable inequality.
Recalling that P satisfies (7.3.10) with equality, it is not difficult to prove the
following:
320
BOUNDED REAL MATRICES
CHAP. 7
Theorem 7.3.7. Let Q = P - F, where P is defined as above.
Then P satisfies (7.3.10)if and only if Q satisfies
QE+ PQ < - Q G K r c ' Q
where
P = F + GR'IPH'
(7.3.14)
+ GR-'G'p
(7.3.15)
Solution procedures for (7.3.14) have already been discussed. As one
might imagine, the choice of as the negative of the limiting solution of the
Riccati equation (7.3.6)leads to possessing nonpositive real part eigenvalues, and thus to Q being nonnegative definite.
At this point, we conclude our discussion. The main point that we hope
the reader has grasped is the essential similarity between the problem of
solving the positive real lemma equations and the bounded real lemma
equations.
Problem
Prove Theorem 7.3.4.
7.3.1
Problem
7.3.2
Problem
7.3.3
Prove that if P satisfies Eqs. (7.3.1),then P satisfiesthe inequality (7.3.10).
Verify that if P satisfies (7.3.10) and L and Woare defined by (7.3.12).
then (7.3.1)holds.
Let S(s) = # - &/(s
I). Find a minimal realization [F, G, H, J ) fot
S(s).Find a W(s)with all the properties listed in Thwrem 7.3.1 and construct a realization {F, G, L, W,];
determine a P such that (7.3.1)holds.
Also set up a quadratic equation determining two P safisfying (7.3.1);
find these P and the associated W(s). Finally, find all P satisfying (7.3.1).
+
7.4
REClPROClTV IN STATE-SPACE TERMS
As we have noted in the introduction to this chapter, there are
advantages from the point of view of solving synthesis problems in stating
the constraints applying to the matrices in a state-space realization of a
symmetric transfer function. Without further ado, we state the first main
result. For the original proof, see [q.
Theorem 7.4.1. Let W(s) be an m x m matrix of real rational
functions of s, with W ( w )< m. Let (F,G, H,J) be a minimal
realization of W(s). Then W(s) = W'(s) if and only if
and there exists a nonsingular symmetric matrix P such that
SEC. 7.4
REClPROClN IN STATE-SPACE TERMS
321
Proof. First suppose that W(s) = W'(s). Equation (7.4.1) is
immediate, by settings = m. Symmetry of W(s) implies then that
so that {F,G,H ] and{Ff, -H, -G] are two minimal realizations
of W(s) - J. It follows that there exists a nonsingular matrix P ,
not necessarily symmetric, such that
The first two of these equations imply Eqs. (7.4.2).
It also follows that P is unique. This is a general property of
a coordinate-basis transformation relating prescribed minimal
realizations of the same transfer-function matrix, but may be seen
in this particular care by the following argument. Suppose that
PI and P, both satisfy (7.4.2). Set P, = P, - P,, so that P3F
= F'P, and P,G =r 0. Then P,FG = F'P,G = 0, and more
generally P3FfG= 0, or P3[G FG . .. Fm-'GJ= 0, where n is the
dimension of F. Complete controllability implies that P3 = 0 or
PI = P,, establishing uniqueness.
Having proved the uniqueness of P,to establish symmetry is
straightforward. Transposing the first and last equations of
(7.4.3) yields
.
F'P' = P'F
PC=-H
Now compare Eqs. (7.4.4) with (7.4.2). We see that if P satisfies
(7.4.2), so does P'. By the uniqueness of mafrices P satisfying
(7.4.2), P = P' as required.
Now we prove the converse. Assume (7.4.1) and (7.4.2). Then
The third equality follows by using (7.4.2); the remainder is
evident, and the proof is complete.
VOV
In the statement of the above theorem we restricted W(s) to being such
that W(m) < ca. If W(s) is real rational but does not satisfy this constraint,
322
BOUNDED REAL MATRICES
.+
+
CHAP. 7
+-
we may write W(s)= . . W,sl
W , s + J H'(sZ- F)-'G. Then
Theorem 7.4.1 applies with the addition of W , = W:, W2 = W;,- - . ,to
Eq. (7.4.1).
Calculation of P
Just as we have foreshadowed the need to use solutions of the
positive real lemma equation in applications, so we make the point now that
it will be necessary to use the matrix P in later synthesis calcuiations. For this
reason, we note here how P may be calculated.
For notational convenience, we define
as the controllability and observability matrices associated with W(s).It will
be recalled that F i s assumed to be n x n, and that V,and V. will have rank
n by virtue of the minimality of (F, G, H, J ] .
From (7.4.2) we have
PC = -H
PFG = FPG = -F'H
PFZG = FPFG = -(F3'H
.
.
and generally
PFG = -(F'YH
I t follows that
PV< = -V,
or
P=(-V,V:)(v,v:)-'
.
'
(7.4.5)
with the inverse existing by virtue of the full rank property of V,.
Example
7.4.1
Consider the scaIar transfer function W(s) = (s f lXs3
for which a minimal realization is
The controllability and obse~abilitymatrices are
+ s2 + s + I)-'
SEC. 7.4
RECIPROCITY IN STATE-SPACE TERMS
323
and
Observe that P,as required, is symmetric. The satisfaction of (7.4.2) is
easily checked.
Developments from Theorem 7.4.1
We wish to indicate certain consequences of Theorem 7.4.1,
including a coordinate-basis change that results in Eqs. (7.4.2) taking a
specially simple form.
Because P is symmetric and nonsingular, there exists a nonsingular matrix
T such that
where Z is a diagonal matrix of the form
Here, if F is n
result:
x
n, we have k, f k, = n. We note the foUowing important
Theorem 7.4.2. The integers k, and k, above are uniquely
defined by W(s) and are independent of the particular minimal
realization of W(s)used to generate P.
Proof. Suppose that {F,,G,, H,, J ) and (F,,G,, H,, J ) are two
minimal realizations of the same W(s)= W'(s), with T a nonsingular matrix such that TF,T-1= F,,etc. Then if P, andP, are
the solutions of (7.4.2) corresponding to the two realizations, it is
easily checked that PI =.TIP,T. It follows that P, and P, have
identical numbers of posltlve and negative real eigenvalues. But
k, and k, are, respectively, the number of positive and the number
of negative eigenvalues of PI or P,;thus k, and k, are uniquely
d e h e d by W(s).
VVV
In the sequel thecomputation of Tfrom P will be required. This is actually'
quite straightforward and can proceed by simple rational operations on the
entries of P. Reference [7] includes two techniques, due to Lagrange and
324
BOUNDED REAL MATRICES
CHAP. 7
Jacobi. The matrix Tis not unique and may be taken to be triangular. If k ,
and k , above are required rather than T, these numbers can be computed
from the signs of the principal leading minors of P without calculation of T.
As shown in [7],if D, is the value of the minor of P obtained by deleting all
but the first i rows and columns, and if D, # 0for all i, then k , is the number
of permanences of sign of the sequence 1, D,, D,, . . . , D,. (If one or more
D, is zero, a more complicated rule applies.) Given k , , k, is immediately
obtained as n - k , .
Now let us note a coordinate-basis change that yields an interesting form
for Eqs. (7.4.2).
With T such that P = T'XT, define a new realization {F,, G I ,H , , J ) of
W(s)by
F, = TFT-1
G , = TG
H , = (T-')'H
(7.4.8)
Then it is a simple matter to check that Eqs. (7.4.2) in the new coordinate
basis become
Dl = F;X
ZG, = - H E
(7.4.9)
so that
and if G ; = [G:, G;,], then Hi = [-G:, Gl,]
Since Tis not unique, F , , , F,,, etc., are also not unique, although k , and
k , are unique.
We sum up this result as follows:
Theorem 7.4.3. Let W(s)be a symmetric matrixof real rational
functions of s with W(m) < oo. Let {F, G, H, J ] be a minimal
realization. Then there exists a nonunique minimal realization
IF,, G , , H,, J } with F , = TFT-1, G , = TG, and H, = (T-')'H,
the matrix T being defined below, such that
XF, = F;X
XG,
= -HI
+
Here the matrix C is of the form I,, (-Ix,), with k , and k ,
determined uniquely by W(s). With P the unique symmetric
solution of PF = F'P, PG = -H, any decomposition of P as
P = T'XT yields T.
Problem Suppose that W(s)is a scalar with minimal realization (F, G, H, J ] ,
with F of dimension n x n. Suppose also that W(s)is expanded in the
7.4.1
form
CHAP. 7
REFERENCES
325
so that w, = H'F'G. By examining VLPV,, where P is a solution of
PF = F'P and PG = -H, conclude that k , and k, are the number of
negative eigenvalues and positive eigenvalues respectively of the matrix
Problem Suppose that W(s) = W'(s), and that (F, G, H,J ) is a realization for
7.4.2
W(s) with the property that XF = F'X and ZG = -H for some
=I
1 Show that a realization (F,, GI,H,, J ] with
F, = TIIT-', etc., will satisfy XF, = FiZ and ZG, = -H, if and only
if T ' Z T = X.
Problem Suppose that W ( s )= (s -i- I)/(s
+ 2)(s + 3).
Compute the integers k ,
and kz of Theorem 7.4.2.
Problem Let
7.4.3
Find P such that PF = F'P,PG = -H, and then Tsuch that T%T= P.
Finally, determine F, = TFT-l, C, = TG, HI = (T-1)'H, and check
that ZF, = FiI: and Z G , = -HI for some appropriate E.
REFERENCES
[ I ] B. D. 0.ANDERSON,
"Algebraic Description of Bounded Real Matrim,"
Electronics Letlers, Vol. 2, No. 12, Dec. 1966, pp. 464-465.
[ Z ] S. VONOPANITLERD,
"F'assive Reciprocal Network Synthesis-The StateSpace Approach," Ph.D. Dissertation, University of Newcastle, Australia,
May 1970.
[ 3 ] B. D. 0.ANDERSON
and J. B. MOORE,
"Algebraic Structure of Generalized
Positive Real Matrices," SIAM Journal on Control, Vol. 6, No. 4, 1968, pp.
615-624.
[ 4 ] D. C. YOULA,
the Factorization of Rational Matrices," IRE Transactions
on Information Theory, Vol. IT-7, No. 3, July 1961, pp. 172-189.
[ 5 ] B. D. 0.ANDERSON
and L. M. SILVERMAN,
"Multivariable Covariance Factorization," to appear.
[ 6 ] D. C. YOULAand P. Trssr, "N-Port Synthesis via Reactance Extraction-Part
I," IEEE International Convention Record, New York, 1966, Pt. 7, pp. 183-205.
[7]F. R. GANTMACHER,
The Theory of Matrices, Chelsea, New York, 1959.
Part V
NETWORK SYNTHESIS
In this part our aim is to apply the algebraic descriptions of the
positive real, bounded real, and reciprocity properties to the
problem of synthesis. We shall present algorithms for passing
from prescribed positive real immittance and bounded real
scattering matrices to networks whose immittance or scattering
matrices are identical with those prescribed. We shall also pay
attention to the problem of reciprocal (gyratorless) synthesis
and to synthesizing transfer functions.
Formulation of State-Space
Synthesis Problems
8.1
AN INTRODUCTION TO SYNTHESIS
PROBLEMS
In the early part of this book we were largely preoccupied with
obtaining analysis results. Now our aim is to reverse some of these results.
The reader will recall, for example, that the impedance matrix Z(s) of a passive network N is positive real, and that G[Z(s)J is less than or equal to the
number of inductors and capacitors in the network N.* A major synthesis
result we shall establish is that, given a prescribed positive real Z(s), one can
find a passive network with impedance Z(s) containing precisely the minimum possible number 6[Z(s)] of reactive elements. Again, the reader will
recall that the impedance matrix of a reciprocal network N is symmetric.
The corresponding synthesis result we shall establish is that, given a prescribed symmetric positive real Z(s), we can lind a passive network with
impedanceZ(s) wntainingprecisely 6[Z(s)] reactive elements and no gyrators.
Statements similar to the above can be made regarding hybrid-matrix
synthesis and scattering-matrix synthesis. We leave the details to later sections.
Generally speaking, we shall attempt to solve synthesis problems by converting them in some way to a problem of matrix algebca, and then solving
*This second point was established in a section that may have been omitted at a first
reading.
the problem of matrix algebra. In this chapter our main aims are to formulate, but not solve, the problems of matrix algebra and to indicate some
preliminary and minor, but also helpful, steps that can be used to simplify
a synthesis problem. Sections 8.2 and 8.3 are concerned with converting
synthesis problems to matrix algebra problems for, respectively, impedance
(actually including hybrid) matrices and scattering matrices. Section 8.4 is
concerned with presenting the preliminary synthesis steps.
Solutions to the matrix algebra problems-which amount to solutions of
the synthesis problems themselves-will be obtained in later chapters. In
obtaining these solutions, we shall make great use of the positive real and
bounded real lemmas and the state-space characterization of reciprocity.
The reader will probably appreciate that impedance (or hybrid) matrix synthesis and scattering-matrix synthesis constitute two sides of the same coin, in the following sense. Associated with an impedance Z(s) there is a scattering matrix

    S(s) = [Z(s) - I][Z(s) + I]^{-1}

A synthesis of Z(s) automatically yields a synthesis of S(s), and conversely. It might then be argued that the separate consideration of impedance- and scattering-matrix synthesis is redundant; however, the insights to be gained from separate considerations are great, and we have no hesitation in offering separate consideration. This is entirely consonant with the ideas expressed in books dealing with network synthesis via classical procedures (see, e.g., [1]).
8.2
THE BASIC IMPEDANCE SYNTHESIS
PROBLEM
In this section we shall examine from a state-space viewpoint broad aspects of the impedance synthesis problem. In the past, many classical multiport synthesis methods made use of the underlying idea of an early classical synthesis called the Darlington synthesis (see [1]), the idea being that of resistance extraction, which we define below. With recent developments in the theory of state-space characterization of a real rational matrix, a new synthesis concept has been discovered, the reactance-extraction technique, again defined below. The technique appears to have originated in a paper by Youla and Tissi [2], which deals with the rational bounded real scattering matrix synthesis problem. Later Anderson and Newcomb extended use of the concept to the rational positive real impedance synthesis problem [3]. Indeed, the technique plays a prominent role in the modern theory of state-space n-port network synthesis, as we shall see subsequently. We shall now
formulate the impedance synthesis problem via (1) the resistance-extraction approach, and (2) the reactance-extraction approach.
We shall assume throughout this section that there is prescribed a rational positive real Z(s) that is to be synthesized, with Z(s) m x m, and with Z(∞) < ∞. Although the property that Z(∞) < ∞ is not always possessed by rational positive real matrices, we have noted in Section 5.1 that a rational positive real Z(s) may be expressed as

    Z(s) = sL + Z_0(s)

where L is nonnegative definite symmetric, and Z_0(s) is rational positive real with Z_0(∞) < ∞. [Of course, the computations involved in breaking up Z(s) this way are straightforward.] It follows then that a synthesis of Z(s) is achieved by a series connection of syntheses of sL and Z_0(s). The synthesis of sL can be achieved in a trivial manner using transformers and inductors; the details are developed in one of the problems. So even if the original Z(s) does not have Z(∞) < ∞, we can always reduce in a simple manner the problem of synthesizing a positive real Z(s) to one of synthesizing another positive real Z_0(s) with the bounded-at-infinity property.
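As an illustrative aside, the decomposition Z(s) = sL + Z_0(s) can be mechanized symbolically. The following Python sketch is ours, not the text's; the function name is our own, and it assumes Z(s) is given entrywise as rational functions of s, each with at most a simple pole at infinity (as positive realness guarantees).

    # Sketch: split a rational Z(s) as Z(s) = s*L + Z0(s), with Z0 finite at infinity.
    import sympy as sp

    s = sp.symbols('s')

    def split_pole_at_infinity(Z):
        """Return (L, Z0) with Z(s) = s*L + Z0(s) and Z0 proper."""
        m, k = Z.shape
        L = sp.zeros(m, k)
        Z0 = sp.zeros(m, k)
        for i in range(m):
            for j in range(k):
                lij = sp.limit(Z[i, j] / s, s, sp.oo)   # coefficient of the s term
                L[i, j] = lij
                Z0[i, j] = sp.simplify(Z[i, j] - s * lij)
        return L, Z0

    Z = sp.Matrix([[s + 1 + 1/(s + 1), 1],
                   [1, 2 + 1/(s + 2)]])
    L, Z0 = split_pole_at_infinity(Z)   # here L = [[1, 0], [0, 0]] and Z0(oo) is finite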
The Resistance-Extraction Approach to Synthesis
The key idea of the resistance-extraction technique lies in viewing the network N synthesizing the prescribed Z(s) as a cascade interconnection of a lossless subnetwork and positive resistors. Since any positive resistor r is equivalent to a transformer of turns ratio √r : 1 terminated in a unit resistor, we can therefore assume all resistance values to be 1 Ω by absorbing in the lossless subnetwork the transformers used for normalization. On collecting these unit resistors, p in number, into a subnetwork N_R, we see that N_R loads a lossless (m + p)-port N_L to yield N, as shown in Fig. 8.2.1.
FIGURE 8.2.1. Resistance Extraction to Obtain a Lossless Coupling Network.

The synthesis problem falls into two parts. The first part requires the derivation of a lossless positive real hybrid matrix X(s) of size (m + p) x (m + p) from the given m x m positive real Z(s) such that termination in unit resistors of the last p ports of a network synthesizing X(s) results in Z(s) being observed at the unterminated ports. The second part requires the synthesis of the lossless positive real X(s). As we shall later see, the lossless character of X(s) makes the problem of synthesizing X(s) an easy one. The difficult problem is to pass from Z(s) to X(s), and we shall now discuss this in a little more detail.
Let us now translate the basic concept of the resistance-extraction technique into a true state-space impedance synthesis problem. Consider an (m + p)-port lossless network N_L, as shown in Fig. 8.2.2.

FIGURE 8.2.2. An (m + p)-port Lossless Network.

Let us identify
an input m vector u with the currents and a corresponding output m vector y with the voltages at the left-hand m ports of N_L. Let a p vector u_1 denote the inputs at the right-hand p ports, where the ith component u_{1i} of u_1 is either a current or a voltage, and let a p vector y_1 denote the corresponding outputs, with the ith component y_{1i} being the quantity dual to u_{1i}, i.e., either a voltage or a current. Assume that we know the hybrid matrix X(s) describing the port behavior of N_L, and that {F_L, G_L, H_L, J_L} is a minimal realization of X(s). Then we can describe the port behavior of N_L by the following time-domain state-space equations:

    ẋ = F_L x + G_L [u' u_1']'
    [y' y_1']' = H_L'x + J_L [u' u_1']'    (8.2.2)

We shall denote the dimension of the realization {F_L, G_L, H_L, J_L} by n. Because N_L is lossless, the quadruple {F_L, G_L, H_L, J_L} satisfies the lossless positive real lemma equations, i.e., the equations

    P F_L + F_L'P = 0,  P G_L = H_L,  J_L + J_L' = 0    (8.2.3)

should hold for a symmetric positive definite n x n matrix P.
Now suppose that termination of the right-hand p ports of N_L in unit resistors leads to the impedance matrix Z(s) being observed at the left-hand ports. The operation of terminating these p ports in unit resistors is equivalent to forcing

    u_1 = -y_1    (8.2.4)

Therefore, the matrices F_L, G_L, H_L, and J_L, besides satisfying (8.2.3), are further restricted to those with the property that, on setting (8.2.4) into (8.2.2) and eliminating u_1 and y_1, (8.2.2) must then reduce simply to

    ẋ = Fx + Gu,  y = H'x + Ju    (8.2.5)

with F, G, H, and J such that J + H'(sI - F)^{-1}G = Z(s).
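The elimination of u_1 and y_1 under (8.2.4) is a routine Schur-complement computation on the partitioned realization. The following Python sketch is our own illustration (all names are ours); it assumes I + J_{L22} is nonsingular, with J_{L22} the bottom-right p x p block of J_L.

    # Sketch: realization {F, G, H, J} seen at the first m ports of N_L after
    # terminating the last p ports in unit resistors, i.e. forcing u1 = -y1.
    import numpy as np

    def terminate_in_unit_resistors(FL, GL, HL, JL, m):
        G1, G2 = GL[:, :m], GL[:, m:]
        H1, H2 = HL[:, :m], HL[:, m:]
        J11, J12 = JL[:m, :m], JL[:m, m:]
        J21, J22 = JL[m:, :m], JL[m:, m:]
        p = JL.shape[0] - m
        X = np.linalg.inv(np.eye(p) + J22)   # from (I + J22) u1 = -(H2' x + J21 u)
        F = FL - G2 @ X @ H2.T
        G = G1 - G2 @ X @ J21
        H = H1 - H2 @ X.T @ J12.T            # so that H' = H1' - J12 X H2'
        J = J11 - J12 @ X @ J21
        return F, G, H, J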
If N_L is reciprocal, as will be the case when a symmetric Z(s) is synthesized by a reciprocal network, there is imposed on X(s) (and thus on the matrices F_L, G_L, H_L, and J_L) a further constraint, which is that X(s) = J_L + H_L'(sI - F_L)^{-1}G_L should satisfy

    Σ X(s) = X'(s) Σ    (8.2.6)

for some diagonal matrix Σ with only +1 and -1 entries, in which the +1 entries correspond to the current-excited ports and the -1 entries to the voltage-excited ports in the definition of X(s).
We may summarize these remarks by saying that if a network N synthesizing Z(s) is viewed as a lossless network N_L terminated in unit resistors, and if the lossless network has a hybrid matrix X(s) with minimal realization {F_L, G_L, H_L, J_L}, then (8.2.3) hold for some positive definite P, and under the substitution of (8.2.4) in (8.2.2), the state-space equations (8.2.5) will be recovered, where {F, G, H, J} is a realization of Z(s). The additional constraint (8.2.6) applies in case N is reciprocal.
Now we reverse the line of reasoning just taken. Assuming for the moment the ready solvability of the lossless synthesis problem, it should be clear that impedance synthesis via the resistance-extraction technique requires the following: Find a minimal state-space realization {F_L, G_L, H_L, J_L} that characterizes N_L via (8.2.2) such that

1. For nonreciprocal networks,
(a) the conditions of the lossless positive real lemma, (8.2.3), are fulfilled, and
(b) under the constraint of (8.2.4), the set of equations (8.2.2) simplifies to (8.2.5).
2. For reciprocal networks,
(a) the conditions of the lossless positive real lemma and the reciprocity property, corresponding to (8.2.3) and (8.2.6), respectively, are met, and
(b) under the constraint of (8.2.4), the set of equations (8.2.2) simplifies to (8.2.5).
Before we proceed to the next subsection, we wish to make two comments. First, the entries of the vector u_1 have not been specified up to this point as being voltages or currents. Actually, we can demand that they be all currents, or all voltages, or some definite combination of currents and voltages, and nonreciprocal synthesis is still possible for any assignment of variables. However, as we shall see later, reciprocal synthesis requires us to label certain entries of u_1 as currents and the others as voltages according to a pattern determined by the particular Z(s) being synthesized.

The second point is that we have for the moment sidestepped the question of synthesizing a lossless hybrid matrix X(s). Now lossless synthesis problems have been easily solved in classical network synthesis, and therefore it is not surprising to anticipate similar ease of synthesis via state-space procedures. However, when a lossless hybrid matrix X(s) is derived in the course of a resistance-extraction synthesis of a positive real Z(s), it turns out that synthesis of X(s) is even easier than if X(s) were simply an arbitrary lossless hybrid matrix that was to be synthesized. We shall defer illustration of this point to a subsequent chapter dealing with actual synthesis methods.
The Reactance-Extraction Problem
In a similar fashion to the resistance-extraction idea, we again suppose that we have a network N synthesizing Z(s), but now isolate and separate out all reactive elements (inductors and capacitors), instead of resistors. Thus an m-port network N realizing a rational positive real Z(s) is viewed as an interconnection of two subnetworks N_1 and N_2, as illustrated in Fig. 8.2.3. The subnetwork N_1 is a nondynamic (m + n) port, while the n-port N_2 consists of, say, n_1 1-H inductors and n_2 1-F capacitors with n_1 + n_2 = n. Note that all inductors and capacitors may always be assumed unity in value. This is because an L-henry inductor may be replaced by a transformer of turns ratio √L : 1 terminated in a 1-H inductor, with a similar replacement possible for capacitors; the normalizing transformers can then be incorporated into N_1.

In formulating the synthesis problem via the reactance-extraction technique, it is more convenient to consider the nonreciprocal and reciprocal cases separately.
FIGURE 8.2.3. Reactance Extraction to Obtain a Nondynamic Coupling Network.
Reactance Extraction-Nonreciprocal Networks

In the nonreciprocal case, gyrators are allowable as circuit elements. We can simplify the situation by assuming that all reactive elements present in the network N synthesizing Z(s) are of the same kind, i.e., either inductors or capacitors. This is always possible since one type of element may be replaced by the other with the aid of gyrators, as illustrated by the equivalences of Fig. 8.2.4. (These equivalences may easily be verified directly from the element definitions.) Assume for the sake of argument that only inductors are used; then the proposed realization arrangement in Fig. 8.2.3 simplifies to that in Fig. 8.2.5.
FIGURE 8.2.4. Element Replacements.

FIGURE 8.2.5. Inductor Extraction.

Although N possesses an impedance matrix Z(s) by assumption, there is however no guarantee that N_1 will possess an impedance matrix, though, as remarked earlier, a scattering matrix or a hybrid matrix must exist for N_1.
For the present purpose in discussing an approach to the synthesis problem, it will be sufficient to assume tentatively that an impedance matrix does exist for N_1; this will be shown later to be true* when we actually tackle the synthesis of an arbitrary rational positive real matrix.
Now because N_1 consists of purely nondynamic elements, its impedance matrix M is real and constant; it must also be positive real since N_1 consists of purely passive elements. Application of the positive real definition yields a necessary and sufficient condition for M to be positive real as

    M + M' ≥ 0    (8.2.7)

Let the (m + n) x (m + n) constant positive real impedance matrix M be partitioned similarly to the ports of N_1 as

    M = [[Z_{11}, Z_{12}], [Z_{21}, Z_{22}]]

where Z_{11} is m x m, Z_{22} is n x n, etc. To relate these submatrices Z_{ij} to the input impedance Z(s) when N_1 is loaded by N_2, consisting of n unit inductors, we note that the impedance of N_2 is simply sI_n. It follows simply that

    Z(s) = Z_{11} - Z_{12}(sI_n + Z_{22})^{-1}Z_{21}

Comparison with the standard equation

    Z(s) = J + H'(sI - F)^{-1}G
*Strictly speaking, the existence of an impedance matrix for N_1 is guaranteed only if the number of reactive elements used in N_1 in realizing Z(s) is a minimum (as shown earlier in Section 5.3). However, for the sake of generality, such a restriction on reactive-element minimality will not be imposed at this stage.
reveals immediately that a realization of Z(s) is given by

    {F, G, H, J} = {-Z_{22}, Z_{21}, -Z_{12}', Z_{11}}    (8.2.10)

We may conclude from (8.2.10) therefore that if we have on hand a nonreciprocal synthesis of Z(s), and if, on isolating all reactive elements (normalized) and with all capacitors replaced by inductors as shown in Fig. 8.2.5, M becomes the constant impedance of the nondynamic network N_1, then this impedance matrix M determines one particular realization of Z(s) through (8.2.10).
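The identification (8.2.10) is easy to confirm symbolically. The sketch below is ours; it builds a small constant M, terminates it in unit inductors, and checks the loaded impedance against J + H'(sI - F)^{-1}G.

    # Sketch: verify that Z11 - Z12 (sI + Z22)^{-1} Z21 has the realization (8.2.10).
    import sympy as sp

    s = sp.symbols('s')
    m, n = 1, 2
    M = sp.Matrix([[3, 1, 2],
                   [1, 4, 1],
                   [2, 1, 5]])
    Z11, Z12 = M[:m, :m], M[:m, m:]
    Z21, Z22 = M[m:, :m], M[m:, m:]

    Zs = Z11 - Z12 * (s*sp.eye(n) + Z22).inv() * Z21       # loaded impedance
    F, G, H, J = -Z22, Z21, -Z12.T, Z11                    # Eq. (8.2.10)
    Zs_check = J + H.T * (s*sp.eye(n) - F).inv() * G
    assert sp.simplify(Zs - Zs_check) == sp.zeros(m, m)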
Conversely, then, if we have any realization {F, G, H, J} of a prescribed Z(s), we might think of trying to synthesize Z(s) by terminating a nondynamic N_1 in unit inductors, with the impedance matrix M of N_1 given by

    M = [[J, -H'], [G, -F]]    (8.2.11)

For an arbitrary state-space realization of Z(s), M may not be positive real. If M is positive real, it is, as we shall see, very easy to synthesize a network N_1 with impedance matrix M. So if a realization of Z(s) can be found such that M is positive real, then it follows that a synthesis of Z(s) is given by N_1 as shown in Fig. 8.2.5, where the impedance matrix of N_1 exists, being that of (8.2.11), and is synthesizable.
In summary, the problem of giving a nonreciprocal synthesis for a rational positive real Z(s) via the reactance-extraction technique is essentially this: among the infinitely many realizations of Z(s), with Z(∞) assumed to have finite entries, find one realization {F, G, H, J} of Z(s) such that M of (8.2.11) is positive real, or equivalently, such that (8.2.7) holds.

Of course, if Z(s) is not positive real, there is certainly no possibility of the existence of such a realization {F, G, H, J} of Z(s).
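Testing whether a given realization meets this requirement is a finite computation. The Python sketch below is ours: it forms M of (8.2.11) and checks (8.2.7) through the eigenvalues of M + M'.

    # Sketch: form M of (8.2.11) and test the positive real condition (8.2.7).
    import numpy as np

    def coupling_impedance(F, G, H, J):
        return np.block([[J, -H.T], [G, -F]])

    def is_positive_real_constant(M, tol=1e-9):
        return np.linalg.eigvalsh(M + M.T).min() >= -tol   # M + M' >= 0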
We shall now consider the problem of giving a reciprocal synthesis for a symmetric rational positive real Z(s) via the reactance-extraction approach.
Reactance Extraction-Reciprocal Networks
When a synthesis of a symmetric rational positive real Z(s) is required to be reciprocal (i.e., no gyrators may be used in the synthesis), the element replacements of Fig. 8.2.4 used for the nonreciprocal networks are not applicable, and we return to consider Fig. 8.2.3 as our proposed scheme for a reciprocal synthesis of Z(s). Of course, N_1 must be reciprocal if the network N synthesizing Z(s) is to be reciprocal.

Suppose that N_1 is described by an (m + n) x (m + n) hybrid matrix M in which the excitations at the first m + n_1 ports of N_1 are currents and those
at the remaining n_2 ports are voltages. Let M be partitioned as

    M = [[M_{11}, M_{12}], [M_{21}, M_{22}]]    (8.2.12)

where M_{11} is m x m and M_{22} is n x n. Because of the lack of energy storage elements, M is constant. Because N_1 is reciprocal, M satisfies

    (I_m ⊕ Σ)M = M'(I_m ⊕ Σ),  Σ = I_{n_1} ⊕ (-I_{n_2})    (8.2.13)

and because N_1 is passive, M satisfies

    M + M' ≥ 0    (8.2.14)

Now noting that the hybrid matrix of N_2 is simply sI_n, with excitation variables of N_2 appropriately defined, it is straightforward to see that the impedance observed at the input m ports of N_1 when N_1 is loaded in N_2 is

    Z(s) = M_{11} - M_{12}(sI_n + M_{22})^{-1}M_{21}    (8.2.15)

Examination of (8.2.15) reveals clearly that one state-space realization of Z(s) is given by

    {F, G, H, J} = {-M_{22}, M_{21}, -M_{12}', M_{11}}    (8.2.16)
Conversely, a possible hybrid matrix M of N_1 is

    M = [[J, -H'], [G, -F]]    (8.2.17)

where {F, G, H, J} is any realization of Z(s). Further, if a realization of Z(s) can be found such that M is positive real (i.e., M + M' ≥ 0) and (I_m ⊕ Σ)M is symmetric, with Σ an n x n diagonal matrix consisting of +1 and -1 diagonal entries only, then it follows, provided M can be synthesized to yield N_1, that a reciprocal synthesis of Z(s) is given as shown in Fig. 8.2.3, in which the hybrid matrix for N_1 is that of (8.2.17). As we shall see, the problem of synthesizing a constant hybrid matrix satisfying the passivity and reciprocity constraints is easy.

In summary, the problem of giving a reciprocal network synthesis for a symmetric positive real impedance matrix Z(s) via the reactance-extraction technique is essentially equivalent to the problem of finding a realization {F, G, H, J}, among the infinitely many realizations of Z(s) with Z(∞) assumed to have finite entries, such that M of (8.2.17) satisfies (8.2.13) and (8.2.14).
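Again the conditions are finitely checkable. In the Python sketch below (ours), the signature matrix orders the reactive ports as in the text: n_1 current-excited (inductor) ports first, then n_2 voltage-excited (capacitor) ports.

    # Sketch: test (8.2.13) and (8.2.14) on the hybrid matrix M of (8.2.17).
    import numpy as np

    def reciprocal_conditions(F, G, H, J, n1, tol=1e-9):
        M = np.block([[J, -H.T], [G, -F]])
        m, n = J.shape[0], F.shape[0]
        sig = np.diag(np.r_[np.ones(m + n1), -np.ones(n - n1)])   # I_m (+) Sigma
        reciprocity = np.allclose(sig @ M, (sig @ M).T, atol=tol) # (8.2.13)
        passivity = np.linalg.eigvalsh(M + M.T).min() >= -tol     # (8.2.14)
        return reciprocity, passivity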
We can conclude from the resistance- and reactance-extraction problems posed in this section that the latter technique, that of reactance extraction, is the more natural approach to solving the synthesis problem from the state-space point of view. This is so because the constant impedance or hybrid matrix of the nondynamic network N_1 (see Fig. 8.2.3) resulting from extraction of all reactive elements is closely associated through (8.2.11) or (8.2.17) with the four constant matrices F, G, H, and J, which constitute a state-space realization of the prescribed Z(s). On the other hand, an appropriate description for the lossless network N_L resulting from extraction of all resistors (see Fig. 8.2.1) does not reveal any close association to F, G, H, and J in an obvious manner.
Two additional points are worth noting:

1. The idea of reactance extraction applies readily to the problem of hybrid-matrix synthesis. Problem 8.2.2 asks the reader to illustrate this point.
2. Another important property that the reactance-extraction approach possesses, but the resistance-extraction approach does not, is its ready application to the lossless network synthesis problem. From the reactance-extraction point of view, the lossless synthesis problem is essentially equivalent to the problem of synthesizing a constant hybrid (or immittance) matrix that satisfies the losslessness and (in the reciprocal-network case) the reciprocity constraints. This problem, as will be seen subsequently, is an extremely easy one. The resistance-extraction approach, on the other hand, essentially avoids consideration of lossless synthesis, assuming this can be carried out by classical procedures or by a state-space procedure such as a reactance-extraction synthesis.
Problem 8.2.1. Synthesize the lossless positive real impedance matrix sL, where L is nonnegative definite symmetric. Show that if syntheses of sL and Z_0(s) are available, a synthesis of sL + Z_0(s) is easily achieved. (Hint: If L is nonnegative definite symmetric, there exists a matrix T such that L = T'T.)

Problem 8.2.2. Suppose that X(s) is an m x m rational positive real hybrid matrix with a state-space realization {F, G, H, J}. Formulate the reactance-extraction synthesis problem for X(s) for the nonreciprocal and reciprocal cases, and derive the conditions required of {F, G, H, J} for synthesis.

Problem 8.2.3. Verify Eq. (8.2.15).
8.3
THE BASIC SCATTERING MATRIX
SYNTHESIS PROBLEM
In this section we shall discuss the state-space scattering matrix
synthesis problem in terms of the reactance-extraction method and the
resistance-extraction method. As with the impedance synthesis problem,
matrices appearing in the descriptions of a nondynamic coupling network arising in the reactance-extraction method exhibit close links with a state-space realization {F, G, H, J} of the prescribed rational bounded real scattering matrix S(s). The material dealing with the reactance-extraction method is based on [2], and that dealing with the resistance-extraction method is drawn from [4].
The Reactance-Extraction Problem
As before, we regard an m-port N synthesizing a rational bounded real S(s) as an interconnection of two subnetworks N_1 and N_2, as shown in Fig. 8.2.3, except that we now have S(s) as the scattering matrix observed at the left-hand m terminals of N_1, rather than Z(s) as the impedance matrix. The subnetwork N_1 is a nondynamic (m + n) port, while the n-port subnetwork N_2 consists of n_1 inductors and n_2 capacitors with n_1 + n_2 = n. If gyrators are allowed, then we may arrange for N_2 to consist of n inductors, and Fig. 8.2.5 applies in the latter case.
Reactance Extraction-Reciprocal Networks. For the moment we wish to consider the reciprocal case in which N_1 contains no gyrators. In contrast to the impedance synthesis problem, we shall allow arbitrary choice of the values of the inductors and capacitors of N_2. Assume that the n_1 inductors and n_2 capacitors are L_1, L_2, ..., L_{n_1} henries and C_1, C_2, ..., C_{n_2} farads. Suppose that N_1 is described by a scattering matrix S_0; then because of the reciprocity, lack of energy storage elements, and passivity of N_1, S_0 is symmetric and constant, and the bounded real condition becomes simply

    I - S_0'S_0 ≥ 0    (8.3.1)

Of course, S_0 is normalized to +1 at each of the m input ports (i.e., the left-hand m ports in Fig. 8.2.3) of N_1. The normalization numbers for the output (right-hand side) n ports have yet to be specified. (A different set of numbers results in a different describing matrix S_0 for the same network N_1.) A useful choice we shall make is

    L_i at the port terminated in L_i and 1/C_i at the port terminated in C_i    (8.3.2)

We shall also write down the scattering matrix of N_2; the normalizing number of each port will be the same as the normalizing number at the corresponding port of N_1.
Now with normalization number L_l, the reactance sL_l possesses a scattering coefficient of

    (sL_l - L_l)(sL_l + L_l)^{-1} = (s - 1)/(s + 1)

for l = 1, 2, ..., n_1, while with normalization number 1/C_l, the reactance 1/sC_l possesses a scattering coefficient of

    (1/sC_l - 1/C_l)(1/sC_l + 1/C_l)^{-1} = -(s - 1)/(s + 1)

for l = n_1 + 1, ..., n. Evidently the scattering matrix of N_2 is

    B(s) = [(s - 1)/(s + 1)]Σ    (8.3.3)

where

    Σ = I_{n_1} ⊕ (-I_{n_2})    (8.3.4)

If the resulting S_0 having the above normalization numbers is partitioned like the ports of N_1 as

    S_0 = [[S_{11}, S_{12}], [S_{21}, S_{22}]]    (8.3.5)

where S_{11} = S_{11}' is m x m and S_{22} = S_{22}' is n x n, then the termination of N_1 in N_2 results in N with the prescribed matrix S(s).

The calculation of S(s) from S_0 and B(s) has been done in an earlier chapter. The result is

    S(s) = S_{11} + S_{12}[B^{-1}(s) - S_{22}]^{-1}S_{12}'
         = S_{11} + S_{12}(pΣ - S_{22})^{-1}S_{12}'    (8.3.6)
         = S_{11} + S_{12}(pI - ΣS_{22})^{-1}ΣS_{12}'

with

    p = (s + 1)/(s - 1)    (8.3.7)

Therefore, we can view (8.3.7) as a change of complex variables, so that the function S(·) of the variable s may naturally be regarded as a function W(·) of the variable p, via

    W(p) = S(s)|_{s = (p+1)/(p-1)}    (8.3.8)
It follows then that

    W(p) = S_{11} + S_{12}(pI - ΣS_{22})^{-1}ΣS_{12}'    (8.3.9)

In summary, if we have available a network N synthesizing S(s), this determines a state-space realization of W(p) via (8.3.9), where the S_{ij} are defined [see (8.3.5)] by partitioning of the scattering matrix of the nondynamic part of N; further, S_{11} and S_{22} are symmetric and Eq. (8.3.1) holds.
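The change of variables is easy to experiment with. The Python sketch below (ours) forms W(p) for a scalar example and confirms that p = (s + 1)/(s - 1) is an involution, so the same substitution recovers S(s).

    # Sketch: the change of variables (8.3.7)-(8.3.8) for a scalar S(s).
    import sympy as sp

    s, p = sp.symbols('s p')
    S = 1 / (s + 2)                                   # a sample bounded real function
    W = sp.simplify(S.subs(s, (p + 1) / (p - 1)))     # W(p) = S(s) under (8.3.7)
    assert sp.simplify(W.subs(p, (s + 1) / (s - 1)) - S) == 0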
Conversely, we may state the reciprocal passive synthesis problem as follows. First, from S(s) form W(p) according to the change of variables (8.3.7), and construct a state-space realization of W(p) such that quantities S_{11}, S_{12}, and S_{22} are defined via (8.3.9) by this realization, and S_0 in (8.3.5) is symmetric and satisfies the passivity condition (8.3.1). Second, synthesize S_0 with a nondynamic reciprocal network. (This step turns out to be straightforward.)
Let us now rephrase this problem slightly. On examination of (8.3.9), we can see that given N and S_0, one possible state-space realization for W(p) is

    {F_w, G_w, H_w, J_w} = {ΣS_{22}, ΣS_{12}', S_{12}', S_{11}}    (8.3.10)

Therefore

    S_0 = (I_m ⊕ Σ)M

where

    M = [[J_w, H_w'], [G_w, F_w]]    (8.3.11)

Note that the quadruple {F_w, G_w, H_w, J_w}, obtained from the scattering matrix S_0 of N_1 via (8.3.10), although a state-space realization of W(p), is not (except in a chance situation) a state-space realization {F, G, H, J} of the prescribed S(s). However, one quadruple may always be obtained from the other via a set of invertible relations; we shall state explicitly these relations in a later chapter.
symmetry condition and the passivity condition (8.3.1) on S, directly in
terms of M as follows:
and
SEC. 8.3
THE BASIC SCATTERING MATRIX PROBLEM
343
This allows us to restate the passive reciprocal scattering matrix synthesis
problem.
1. Form a real rational W(p)from the prescribed rational bounded real S(s)
via a change of variable defned in (8.3.7), and
2. Derive a state-space realizatjon IF,, G,, H,,Jw} of W b ) such that with
the definition (8.3.11), both (8.3.12) and (8.3.13) are fulfiled.
As we shall show later, once 1 and 2 have been done, synthesis is easy.
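Step 2 is again a finite check on constant matrices. The Python sketch below is ours; Sigma is the signature matrix of (8.3.4).

    # Sketch: form M of (8.3.11) and test (8.3.12) and (8.3.13).
    import numpy as np

    def reciprocal_scattering_conditions(Fw, Gw, Hw, Jw, Sigma, tol=1e-9):
        M = np.block([[Jw, Hw.T], [Gw, Fw]])                    # (8.3.11)
        m, n = Jw.shape[0], Fw.shape[0]
        E = np.block([[np.eye(m), np.zeros((m, n))],
                      [np.zeros((n, m)), Sigma]])               # I_m (+) Sigma
        symmetry = np.allclose(E @ M, M.T @ E, atol=tol)        # (8.3.12)
        w = np.linalg.eigvalsh(np.eye(m + n) - M.T @ M)
        passivity = w.min() >= -tol                             # (8.3.13)
        return symmetry, passivity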
Reactance Extraction-Nonreciprocal Networks. We now turn our attention to the nonreciprocal synthesis case in which gyrators are acceptable as circuit elements. Suppose that a synthesis is available in this case; all reactances may be assumed inductive, since any capacitor may be replaced by a gyrator terminated in an inductor, as shown in Fig. 8.2.4, and the gyrator part may then be regarded as part of the nondynamic network N_1. By choosing the normalization numbers for the n inductively terminated ports of N_1 according to (8.3.2), this then determines a real constant scattering matrix S_0 for N_1, which is a nonsymmetric matrix and satisfies the passivity condition (8.3.1). If we partition S_0 similarly to the ports of N_1 as

    S_0 = [[S_{11}, S_{12}], [S_{21}, S_{22}]]

and proceed as for the reciprocal case, we obtain, instead of (8.3.9),

    W(p) = S_{11} + S_{12}(pI - S_{22})^{-1}S_{21}

Thus, knowing N and S_0, we can write down one state-space realization for W(p) as

    {F_w, G_w, H_w, J_w} = {S_{22}, S_{21}, S_{12}', S_{11}}

This may be rewritten as

    M = [[J_w, H_w'], [G_w, F_w]] = S_0    (8.3.16)

and the passivity condition of (8.3.1) in terms of M is therefore

    I - M'M ≥ 0    (8.3.13)
The argument is reversible as for the reciprocal case. Hence we conclude
that to solve the nonreciprocal scattering matrix synthesis problem via
reactance extraction, we need to do the following things:
1. Form a real rational W(p) from the given rational bounded real S(s) via the change of variable (8.3.7), and
2. Find a state-space realization {F_w, G_w, H_w, J_w} for W(p) such that, with the definition of (8.3.16), the passivity condition (8.3.13) is satisfied.
Once S_0 is known, synthesis is, as we shall see, very straightforward. Therefore, we do not bother to list a further step demanding passage from S_0 to a network synthesizing it.

An important application of the reactance-extraction technique is to the problem of lossless scattering matrix synthesis. The nondynamic coupling network N_1 is lossless, and accordingly its scattering matrix is constant and orthogonal. The lossless synthesis problem is therefore essentially equivalent to one of synthesizing a constant orthogonal scattering matrix that is also symmetric in the reciprocal-network case. This problem turns out to be particularly straightforward.
The Resistance-Extraction Problem
Consider an arrangement of an m-port N synthesizing a prescribed rational bounded real S(s), as shown in Fig. 8.2.1, save that the scattering matrix S(s) [rather than an impedance matrix Z(s)] is observed at the left-hand ports of N_L. Here a p-port subnetwork N_R consisting of p uncoupled unit resistors loads a lossless (m + p)-port coupling network N_L to yield N. Clearly, the total number of resistors employed in the synthesis of S(s) is p.

Let S_L(s) be the scattering matrix of N_L, partitioned as the ports of N_L as

    S_L(s) = [[S_{11}(s), S_{12}(s)], [S_{21}(s), S_{22}(s)]]    (8.3.18)

where S_{11}(s) is m x m and S_{22}(s) is p x p. The network N_R is described by a constant scattering matrix S_R = 0_p, and it is known that the cascade-load connection of N_L and N_R yields a scattering matrix S(s), viewed from the unterminated m ports, that is given by

    S(s) = S_{11}(s)

Therefore, the top left m x m submatrix S_{11}(s) of S_L(s) must necessarily be S(s). Since N_L is lossless, S_L(s) necessarily satisfies the lossless bounded real requirement S_L'(-s)S_L(s) = I.

We shall digress from the main problem temporarily to note an important result that specifies the minimum number of resistors necessary to physically construct a finite passive m-port. It is easy to see that the requirement S_L'(-s)S_L(s) = I forces, among others, the equation

    S_{11}'(-s)S_{11}(s) + S_{21}'(-s)S_{21}(s) = I    (8.3.19)
Since S_{21}(s) is a p x m matrix, S_{21}'(-s)S_{21}(s) has normal rank no larger than p. It follows therefore that the normal rank ρ of the matrix I - S'(-s)S(s), called the resistivity matrix and designated by Φ(s), is at most equal to the number of resistors in a synthesis of S(s). Thus we may state the above result in the following theorem:

Theorem 8.3.1 [4]. The number of resistors p in a finite passive synthesis of S(s) is bounded below by the normal rank ρ of the resistivity matrix Φ(s) = I - S'(-s)S(s); i.e., p ≥ ρ.

As we shall see in a later chapter, we can synthesize S(s) with exactly ρ resistors.
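The normal rank ρ is computable symbolically. The Python sketch below is ours, for a scalar example; sympy's rank over the field of rational functions is precisely the normal rank (the rank for almost all s).

    # Sketch: normal rank of the resistivity matrix Phi(s) = I - S'(-s)S(s).
    import sympy as sp

    s = sp.symbols('s')
    S = sp.Matrix([[1 / (s + 2)]])           # sample bounded real S(s)
    Phi = sp.eye(1) - S.subs(s, -s).T * S    # Phi = 1 - 1/((2 - s)(2 + s))
    rho = Phi.rank()                         # rho = 1: at least one resistor needed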
The result of Theorem 8.3.1 can also be stated in terms of the impedance
matrix when it exists, as follows (a proof is requested in the problems):
Corollary 8.3.1. The number of resistors in a passive network
synthesis of a prescribed impedance matrix Z(s) is bounded below
by the normal rank ρ of the matrix Z(s) + Z'(-s).
Let us now return to the mainstream of the argument, but now considering the synthesis problem. The idea is to split the synthesis problem into two parts: first, to obtain from a prescribed S(s) (by augmentation of it with p rows and columns) a lossless scattering matrix S_L(s), where p is no less than the normal rank ρ of the resistivity matrix; second, to obtain a network N_L synthesizing S_L(s). If these two steps are executed, it follows that termination of the appropriate p ports of N_L will lead to S(s) being observed at the remaining m ports.

Since the second step turns out to be easy, we shall now confine discussion to the first step, and translate the idea of the first step into state-space terms. Suppose that the lossless (m + p) x (m + p) S_L(s) has a minimal state-space realization {F_L, G_L, H_L, J_L}, where F_L is n x n. (Note that the dimensions of the other matrices are automatically fixed.)

Let us partition G_L, H_L, and J_L as

    G_L = [G_{L1}  G_{L2}],  H_L = [H_{L1}  H_{L2}],  J_L = [[J_{L11}, J_{L12}], [J_{L21}, J_{L22}]]    (8.3.20)

where G_{L1} and H_{L1} have m columns, and J_{L11} has m columns and m rows. Straightforward calculations show that

    S_{11}(s) = J_{L11} + H_{L1}'(sI - F_L)^{-1}G_{L1}

It follows therefore from (8.3.18) that {F_L, G_{L1}, H_{L1}, J_{L11}} constitutes a state-
space realization (not necessarily minimal) of S(s). Also, the lossless property of N_L requires the quadruple {F_L, G_L, H_L, J_L} to satisfy the following equations of the lossless bounded real lemma:

    P F_L + F_L'P = -H_L H_L'
    P G_L = -H_L J_L    (8.3.21)
    J_L'J_L = I

where P is an n x n positive definite symmetric matrix.
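A concrete check: S(s) = (s - 1)/(s + 1), the reflection coefficient of a unit inductor, is lossless bounded real, and the scalar realization F = -1, G = 1, H = -2, J = 1 satisfies (8.3.21) with P = 2. The Python sketch below (ours) verifies this.

    # Sketch: verify (8.3.21) for S(s) = (s-1)/(s+1) = 1 - 2/(s+1).
    import numpy as np

    F, G, H, J = (np.array([[v]]) for v in (-1.0, 1.0, -2.0, 1.0))
    P = np.array([[2.0]])
    assert np.allclose(P @ F + F.T @ P, -H @ H.T)   # P F + F'P = -H H'
    assert np.allclose(P @ G, -H @ J)               # P G = -H J
    assert np.allclose(J.T @ J, np.eye(1))          # J'J = I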
Further, if a reciprocal synthesis N of a prescribed S(s) is required [of course, one must have S(s) = S'(s)], N_L must also be reciprocal, and therefore it is necessary that S_L(s) = S_L'(s), or that

    J_L = J_L',  ΣF_L = F_L'Σ,  ΣG_L = -H_L    (8.3.22)

for some diagonal matrix Σ with only +1 and -1 diagonal entries.
Leaving aside the problem of lossless scattering matrix synthesis, we conclude that the scattering matrix synthesis problem via resistance extraction is equivalent to the problem of finding constant matrices F_L, G_L, H_L, and J_L such that

1. For nonreciprocal networks,
(a) the conditions of the lossless bounded real lemma (8.3.21) are fulfilled, and
(b) with the partitioning as in (8.3.20), {F_L, G_{L1}, H_{L1}, J_{L11}} is a realization of the prescribed S(s).
2. For reciprocal networks,
(a) the conditions of the lossless bounded real lemma (8.3.21) and the conditions of the reciprocity property (8.3.22) are fulfilled, and
(b) with the partitioning as in (8.3.20), {F_L, G_{L1}, H_{L1}, J_{L11}} is a realization of the prescribed matrix S(s).

As with the impedance synthesis problem, once a desired lossless scattering matrix S_L has been found, the second step requiring a synthesis of S_L is straightforward and readily solved through an application of the reactance-extraction technique considered previously. This will be considered in detail later.
Problem 8.3.1. Comment on the significance of the change of variables

    p = (s + 1)/(s - 1)

and then give an interpretation of the (frequency-domain) bounded real properties of S(s) in terms of the real rational W(p). Does there exist a one-to-one transformation between a state-space realization of S(s) and that of W(p)?

Problem 8.3.2. Give a proof of Corollary 8.3.1 using the result of Theorem 8.3.1.
8.4
PRELIMINARY SIMPLIFICATION BY
LOSSLESS EXTRACTIONS
Before we proceed to consider synthesis procedures that answer
the state-space synthesis problems for finite passive networks posed in the
previous sections, we shall indicate simplifications to the synthesis task.
These simplifications involve elementary operations on the matrix to be
synthesized, which amount to extracting, in a way to be made precise, lossless sections composed of inductors, capacitors, transformers, and perhaps
gyrators. The operations may be applied to a prescribed positive real or
bounded real matrix prior to carrying out various synthesis methods yet to
be discussed and, after their application, lead to a simpler synthesis problem
than the original. (For those familiar with classical network synthesis, the
procedure may be viewed as a partial parallel of the classical Foster preamble
performed on a positive real function preceding such synthesis methods as
the Brune synthesis and the Bott-Duffin synthesis [5]. Essentially, in the Foster preamble a series of reactance extractions is successively carried out until the residual positive real function is minimum reactive and minimum susceptive; i.e., the function has a strictly Hurwitz numerator and a strictly
Hurwitz denominator.)
The reasons for using these preliminary simplification procedures are actually several. All are basically associated with reducing computational burdens.
First, with each lossless section extracted, the degree of a certain residual
matrix is lowered by exactly the number of reactive elements extracted. This,
in turn, results in any minimal state-space realization of the residual matrix
possessing lower dimension than that of any minimal state-space realization
of the original matrix. Now, after extraction, it is the residual matrix that has
to be synthesized. Clearly, the computations to be performed on the constant
matrices of a minimal realization of the residual matrix in a synthesis of this
matrix will generally be less than in the case of the original matrix, though
this reduction is, of course, at the expense of calculations necessary for
synthesis of the lossless sections extracted. Note though that these calculations are generally particularly straightforward.
348
CHAP. 8
FORMULATION OF STATE-SPACE PROBLEMS
Second, as we shall subsequently see, in solving the state-space synthesis problems formulated in the previous two sections, it will be necessary to compute solutions of the positive real or the bounded real lemma equations. Various techniques for computation may be found earlier; recall that one method depends on solving an algebraic quadratic matrix inequality and another on solving a Riccati differential equation. There is, however, a restriction that must be imposed in applying these methods: the matrix Z(∞) + Z'(∞) or I - S'(∞)S(∞) must be nonsingular. Now it is the lossless extraction process that enables this restriction to be met after the extraction process is carried out, even if the restriction is not met initially. Thus the extraction procedure enables the synthesis procedures yet to be given to be purely algebraic if so desired.
Third, it turns out that a version of the equivalent network problem, the problem of deriving all those networks with a minimal number of reactive elements that synthesize a prescribed positive real impedance or bounded real scattering matrix, is equivalent to deriving all triples P, L, and W_0 satisfying the equations of the positive real lemma or the bounded real lemma. The solution to the latter problem (and hence the equivalent network problem) hinges, as we have seen, on finding all solutions of a quadratic matrix inequality,* given the restriction that the matrix Z(∞) + Z'(∞) or I - S'(∞)S(∞) is nonsingular. So, essentially, the lossless extraction operations play a vital part in solving the minimal reactive element (nonreciprocal) equivalence problem.

There are also other, less important, reasons, which we shall try to point out as the occasion arises. But notice that none of the above reasons is so strong as to demand the execution of the procedure as a necessary preliminary to synthesis. Indeed, and this is most important, save for an extraction operation corresponding to the removal of a pole at infinity of entries of a prescribed positive real matrix (impedance, admittance, or hybrid) that is to be synthesized, the procedures to be described are generally not necessary to, but merely beneficial for, synthesis.
Simplifications for the Immittance Synthesis Problem

The major specific aim now is to show how we can reduce, by a sequence of lossless extractions, the problem of synthesizing an arbitrary prescribed positive real Z(s), with Z(∞) < ∞, to one of synthesizing a second positive real matrix, Ẑ(s) say, for which Ẑ(∞) + Ẑ'(∞) is nonsingular. We shall also show how Ẑ(s) can sometimes be made to possess certain additional properties.

*The discussion of the quadratic matrix inequality appeared in Chapter 6, which may have been omitted at a first reading.
SEC. 8.4
PRELIMINARY SIMPLIFICATION
349
In particular, we shall see how the sequence of simple operations can reduce the problem of synthesizing Z(s) to one of synthesizing Ẑ(s) such that

1. Ẑ(∞) + Ẑ'(∞) is nonsingular.
2. Ẑ(∞) is nonsingular.
3. A reciprocal synthesis of Z(s) follows from a reciprocal synthesis of Ẑ(s).
4. A synthesis of Z(s) with the minimum possible number of resistors follows from one of Ẑ(s) with a minimum possible number of resistors.
5. A synthesis of Z(s) with the minimum possible number of reactive elements follows from one of Ẑ(s) with a minimum possible number of reactive elements.
Some remarks are necessary here. We shall note that although the sequence of operations to be described is potentially capable of meeting any of the listed conditions, there are certainly cases in which one or more of the conditions represents a pointless goal. For example, it is clear that 3 is pointless when the prescribed Z(s) is not symmetric to start with. Also, it is a technical fact established in [1] that in general there exists no reciprocal synthesis that has both the minimum possible number of resistors of any synthesis (reciprocal or nonreciprocal) and simultaneously the minimum possible number of reactive elements of any synthesis (again, reciprocal or nonreciprocal). Thus it is not generally sensible to adopt 3, 4, and 5 simultaneously as goals, though the adoption of any two turns out to be satisfactory.
Since reciprocity of a network implies symmetry of its impedance or admittance matrix, property 3 is equivalent to saying that

3a. Ẑ(s) is symmetric if Z(s) is symmetric.

[There is an appropriate adjustment in case Z(s) is a hybrid matrix.] Further, in order to establish 4, we need only to show that

4a. Normal rank [Z(s) + Z'(-s)] = normal rank [Ẑ(s) + Ẑ'(-s)].

The full justification for this statement will be given subsequently, when we show that there exist syntheses of Z(s) using a number of resistors equal to ρ = normal rank [Z(s) + Z'(-s)]. Since we showed in the last section that the number of resistors in a synthesis cannot be less than ρ, it follows that a synthesis using ρ resistors is one using the minimum possible number of resistors.
In the sequel we shall consider four classes of operations. For convenience, we shall consider the application of these only to impedance or admittance matrices and leave the hybrid-matrix case to the reader. For ease of notation, Z(s), perhaps with a subscript, will denote both an impedance and an admittance matrix. Following discussion of the four classes of operations, we shall be able to note quickly how the restrictions 1-5 may be obtained. The four classes of operations are

1. Reduction of the problem of synthesizing Z(s), singular throughout the s plane, to the problem of synthesizing Z_1(s), nonsingular almost everywhere.
2. Reduction of the problem of synthesizing Z(s) with Z(∞) singular to the problem of synthesizing Z_1(s) with some elements possessing a pole at infinity.
3. Reduction of the problem of synthesizing Z(s) with some elements possessing a pole at infinity to the problem of synthesizing Z_1(s) with δ[Z_1(s)] < δ[Z(s)] and with no element of Z_1(s) possessing a pole at infinity.
4. Reduction of the problem of synthesizing Z(s) with Z(∞) nonsymmetric to the problem of synthesizing Z_1(s) with Z_1(∞) symmetric.
1. Singular Z(s). Suppose that Z(s) is positive real and singular. We shall suppose that Z(∞) < ∞; if not, operation 3 can be carried out. Then the problem of synthesizing Z(s) can be modified by using the following theorem:

Theorem 8.4.1. Let Z(s) be an m x m positive real matrix with Z(∞) < ∞ and with minimal realization {F, G, H, J}. Suppose that Z(s) is singular throughout the s plane, possessing rank m' < m. Then there exist a constant nonsingular matrix T and an m' x m' positive real matrix Z_1(s), nonsingular almost everywhere, such that

    Z(s) = T'[Z_1(s) ⊕ 0_{m-m'}]T    (8.4.1)

The matrix Z_1(s) has a minimal realization {F_1, G_1, H_1, J_1}, where

    F_1 = F
    G_1 = GT^{-1}[I_{m'}  0_{m',m-m'}]'
    H_1 = HT^{-1}[I_{m'}  0_{m',m-m'}]'    (8.4.2)
    J_1 = [I_{m'}  0_{m',m-m'}](T')^{-1}JT^{-1}[I_{m'}  0_{m',m-m'}]'

Moreover, δ[Z(s)] = δ[Z_1(s)], Z_1(s) is symmetric if and only if Z(s) is symmetric, and normal rank [Z(s) + Z'(-s)] = normal rank [Z_1(s) + Z_1'(-s)].

For the proof of this theorem up to and including Eq. (8.4.2), see Problem 8.4.1. (The result is actually a standard one of network theory and is derived in [1] by frequency-domain arguments.) The claims of the theorem following Eq. (8.4.2) are immediate consequences of Eq. (8.4.1).
The significance of this theorem lies in the fact that if a synthesis of Z_1(s) is on hand, a synthesis of Z(s) follows by appropriate use of a multiport transformer associated with the synthesis of Z_1(s). If Z_1(s) and Z(s) are impedances, one may take a transformer of turns-ratio matrix T and terminate its first m' secondary ports in a network N_1 synthesizing Z_1(s) and its last m - m' ports in short circuits. The impedance Z(s) will be observed at the input ports. Alternatively, if T̂ denotes T with its last m - m' rows deleted, a transformer of turns-ratio matrix T̂ terminated at its m' secondary ports in a network N_1 synthesizing Z_1(s) will provide the same effect (see Fig. 8.4.1). For admittances, the situation is almost the same; one merely interchanges the roles of the primary and secondary ports of the transformer.
FIGURE 8.4.1. Conversion of a Singular Z(s) to a Nonsingular Z_1(s).
2. Singular Z(∞), Nonsingular Z(s). We now suppose that Z(s) is nonsingular almost everywhere, but singular at s = ∞. [Again, operation 3 can be carried out if necessary to ensure that Z(∞) < ∞.] We shall make use of the following theorem.

Theorem 8.4.2. Let Z(s) be an m x m positive real matrix with Z(∞) < ∞, Z(s) nonsingular almost everywhere, and Z(∞) singular. Then Z_1(s) = [Z(s)]^{-1} is positive real and has elements possessing a pole at infinity. Moreover, δ[Z(s)] = δ[Z_1(s)], Z_1(s) is symmetric if and only if Z(s) is symmetric, and normal rank [Z(s) + Z'(-s)] = normal rank [Z_1(s) + Z_1'(-s)].

Proof. That Z_1(s) is positive real because Z(s) is positive real is a standard result, which may be proved in outline form as follows. Let S(s) = [Z(s) - I][Z(s) + I]^{-1}; then S(s) is bounded real. It follows that S_1(s) = -S(s) is bounded real, and thus that Z_1(s) = [I + S_1(s)][I - S_1(s)]^{-1} is positive real. Expansion of this argument is requested in Problem 8.4.2.

The definition of Z_1(s) implies that Z_1(s)Z(s) = I; setting s = ∞ and using the singularity of Z(∞) yields the fact that some elements of Z_1(s) possess a pole at infinity.

The degree property is standard (see Chapter 3) and the symmetry property is obvious. The normal-rank property follows from the easily established relation

    Z_1(s) + Z_1'(-s) = Z_1'(-s)[Z(s) + Z'(-s)]Z_1(s)
This theorem really only becomes significant because it may provide the conditions necessary for the application of operation 3. Notice too that the theorem is really a theorem concerning admittance and impedance matrices that are inverses of one another. It suggests that the problem of synthesizing an impedance Z(s) with Z(∞) < ∞ and Z(∞) singular may be viewed as the problem of synthesizing an admittance Z_1(s) with Z_1(∞) < ∞ failing, and with various other relations between Z(s) and Z_1(s).
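A scalar instance of the theorem (our own illustration): Z(s) = 1/(s + 1) has Z(∞) = 0 singular, while its inverse Z_1(s) = s + 1 is positive real with a pole at infinity, ready for operation 3.

    # Sketch: operation 2 in the scalar case.
    import sympy as sp

    s = sp.symbols('s')
    Z = sp.Matrix([[1 / (s + 1)]])
    Z1 = Z.inv()
    assert sp.simplify(Z1[0, 0] - (s + 1)) == 0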
3. Removal of Pole at Infinity. Suppose that Z(s) is positive real, with some elements possessing a pole at infinity. A decomposition of Z(s) is possible, as described in the following theorem.

Theorem 8.4.3. Let Z(s) be an m x m positive real matrix, with Z(∞) < ∞ failing. Then there exist a nonnegative definite symmetric L and a positive real Z_1(s), with Z_1(∞) < ∞, such that

    Z(s) = sL + Z_1(s)    (8.4.3)

Moreover, δ[Z(s)] = rank L + δ[Z_1(s)], Z_1(s) is symmetric if and only if Z(s) is symmetric, and normal rank [Z(s) + Z'(-s)] = normal rank [Z_1(s) + Z_1'(-s)].

We have established the decomposition (8.4.3) in Chapter 5. The remaining claims of the theorem are straightforward.

The importance of this theorem in synthesis is as follows. First, observe that sL is simply synthesizable, for the nonnegativity of L guarantees that a matrix T_1 exists with m rows and l = rank L columns, such that

    L = T_1T_1'

If sL is an impedance, the network of Fig. 8.4.2, showing unit inductors terminating the secondary ports of a transformer of turns-ratio matrix T_1, yields a synthesis of sL. If sL is an admittance, the transformer is reversed and terminated in unit capacitors, as shown in Fig. 8.4.3.

With a synthesis of an admittance or impedance sL achievable so easily, it follows that a synthesis of Z(s) can be achieved from a synthesis of Z_1(s) by, in the impedance case, series connecting the synthesis of Z_1(s) and the synthesis of sL, and, in the admittance case, paralleling two such syntheses.
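The factor T_1 is obtainable from an eigendecomposition of L. The Python sketch below is ours (the tolerance and names are our own choices).

    # Sketch: factor a nonnegative definite symmetric L as L = T1 T1',
    # T1 of size m x l with l = rank L.
    import numpy as np

    def factor_L(L, tol=1e-10):
        w, V = np.linalg.eigh(L)                 # L = V diag(w) V'
        keep = w > tol
        return V[:, keep] * np.sqrt(w[keep])     # T1, with L = T1 @ T1.T

    L = np.array([[2.0, 1.0], [1.0, 0.5]])       # rank 1, nonnegative definite
    T1 = factor_L(L)
    assert np.allclose(T1 @ T1.T, L)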
FIGURE 8.4.2. Synthesis of Impedance sL.

FIGURE 8.4.3. Synthesis of Admittance sL.

Since δ[sL] = rank L, the syntheses we have given of sL, both impedance and admittance, use δ[sL] reactive elements, actually the minimum number possible. Hence if Z_1(s) is synthesized using the minimal number of reactive elements, the same is true of Z(s).
4. Z(∞) Nonsymmetric. The synthesis reduction depends on the following result.

Theorem 8.4.4. Let Z(s) be positive real, with Z(∞) < ∞ and nonsymmetric. Then the matrix Z_1(s) defined by

    Z_1(s) = Z(s) - (1/2)[Z(∞) - Z'(∞)]

is positive real, has Z_1(∞) < ∞, and Z_1(∞) = Z_1'(∞). Moreover, δ[Z(s)] = δ[Z_1(s)] and normal rank [Z(s) + Z'(-s)] = normal rank [Z_1(s) + Z_1'(-s)].

The straightforward proof of this theorem is called for in the problems.

The significance for synthesis is that a synthesis of Z(s) follows from a series connection in the impedance case, or parallel connection in the admittance case, of syntheses of Z_1(s) and of (1/2)[Z(∞) - Z'(∞)]. The latter is easily synthesized, as we now indicate. The matrix (1/2)[Z(∞) - Z'(∞)] is skew symmetric; if Z(∞) is m x m and (1/2)[Z(∞) - Z'(∞)] has rank 2g, evenness of the rank being guaranteed by the skew symmetry [6], it follows that there exists a real 2g x m matrix T_g, readily computable, such that

    (1/2)[Z(∞) - Z'(∞)] = T_g'[E ⊕ E ⊕ ... ⊕ E]T_g    (8.4.5)

where

    E = [[0, 1], [-1, 0]]

in the case when Z(s) is an impedance, and

    E = [[0, -1], [1, 0]]

in the case when Z(s) is an admittance, and there are g summands in the direct sum on the right side of (8.4.5). The first E is the impedance matrix and the second E the admittance matrix of the gyrator with gyration resistance of 1 Ω. Therefore, appropriate use of a transformer of turns ratio T_g in the impedance case and T_g' in the admittance case, terminated in g unit gyrators, will lead to a synthesis of (1/2)[Z(∞) - Z'(∞)]. See Fig. 8.4.4 for an illustration of the impedance case.
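One way to compute T_g is from the real Schur form of the skew-symmetric matrix, which is block diagonal with 2 x 2 skew blocks. The Python sketch below is our own illustration (it relies on the numerically computed Schur form being block diagonal for this normal matrix, and the names are ours).

    # Sketch: K = Tg' (E (+) ... (+) E) Tg for skew-symmetric K, E = [[0,1],[-1,0]].
    import numpy as np
    from scipy.linalg import schur

    def gyrator_factor(K, tol=1e-10):
        A, Q = schur(K)                    # K = Q A Q', A has 2x2 blocks b*E
        rows, i = [], 0
        while i < K.shape[0] - 1:
            b = A[i, i + 1]
            if abs(b) > tol:
                q1, q2 = Q[:, i], Q[:, i + 1]
                if b < 0:                  # swap columns so the block is +|b|*E
                    q1, q2, b = q2, q1, -b
                rows += [np.sqrt(b) * q1, np.sqrt(b) * q2]
                i += 2
            else:
                i += 1
        return np.array(rows)              # Tg, of size 2g x m

    K = np.array([[0.0, 2.0], [-2.0, 0.0]])
    Tg = gyrator_factor(K)
    E = np.array([[0.0, 1.0], [-1.0, 0.0]])
    assert np.allclose(Tg.T @ E @ Tg, K)   # single block here, so the direct sum is just E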
Our discussion of the four operations considered separately is complete.
Observe that each operation consists of mathematical operations on the
immittance matrices, coupled with the physical operations of lossless element
extraction; all classes of lossless elements, transformers, inductors, capacitors,
and gyrators arise.
We shall now observe the effect of combining the four operations. Figure 8.4.5 should be studied. It shows in flow-diagram form the sequence in which the operations are performed. Notice that there is one major loop. By Theorem 8.4.3, each time this loop is traversed, degree reduction in the residual immittance to be synthesized occurs. Since the degree of the initially given immittance is finite and any degree is nonnegative, looping has to eventually end. From Fig. 8.4.5 it is clear that the immittance finally obtained, call it Ẑ(s), will have Ẑ(∞) finite, Ẑ(∞) = Ẑ'(∞), and Ẑ(∞) nonsingular. As a consequence, Ẑ(∞) + Ẑ'(∞) is nonsingular. Notice too, using Theorems 8.4.1 through 8.4.4, that Ẑ(s) will be symmetric if Z(s) is symmetric, and normal rank [Z(s) + Z'(-s)] = normal rank [Ẑ(s) + Ẑ'(-s)]. Finally, the only step at which degree reduction of the residual immittances occurs is in the looping procedure. By the remarks following Theorem 8.4.3, it follows that

    δ[Z(s)] = δ[Ẑ(s)] + Σ (degrees of the inductor or capacitor element extractions)
            = δ[Ẑ(s)] + (number of reactive elements extracted)
FIGURE 8.4.4. Synthesis for Impedance (1/2)[Z(∞) - Z'(∞)].
Since, as we remarked in Section 8.1, the minimum number of reactive elements in any synthesis of Z(s) is δ[Z(s)], it follows that a synthesis of Z(s) using the minimal number of reactive elements will follow from a minimal reactive element synthesis of Ẑ(s).
Example 8.4.1. Consider the impedance matrix
Adopting the procedure of the flow chart, we would first write
where
FIGURE 8.4.5. Flow Diagram Depicting Preliminary Lossless Extractions for the Immittance Matrix Problem.
Next, we observe that Z_1(∞) ≠ Z_1'(∞), and so we write
where
Now we observe that Z2(s) is singular. We write it as
and take
We see that Z_3(∞) is singular, and thus we form

    Z_4(s) = [Z_3(s)]^{-1} = s + 1
Finally, we take Z_5(s) = 1, and at this point the procedure stops. In sequence, we have extracted a series inductor, extracted a series gyrator, represented the remaining impedance by a transformer terminated in a one-port network with a certain impedance, converted this impedance to an admittance, and represented the admittance as a parallel combination of a capacitor and resistor. Although in this case application of the "preliminary" simplifications led to a complete synthesis, this is not the case in general.
There is one additional point we shall make here. Suppose that the positive real matrix Ẑ(s) deduced from Z(s) possesses one or more elements with pure imaginary poles. If desired, these poles may be removed, thereby simplifying Ẑ(s) and resulting in another positive real impedance, Z̃(s) say, the elements of which are free of imaginary poles. More precisely, as indicated in Section 5.1, Ẑ(s) is expressed, using a partial fraction decomposition, as a sum of two positive real impedances, one with elements possessing purely imaginary poles, this being Z_L(s), and one with elements possessing no purely imaginary pole, this being Z̃(s). Thus Ẑ(s) = Z_L(s) + Z̃(s). The matrix Z_L(s) is, of course, lossless positive real. Synthesis procedures for Z_L(s) using a minimal number, δ[Z_L(s)], of reactive elements are available (see [1] for classical approaches and later sections in this book for a state-space approach).
Because Ẑ(∞) is symmetric and nonsingular, Z̃(∞) = Ẑ(∞) is also. Moreover, because entries of Z_L(·) and Z̃(·) can have no common poles,

    δ[Ẑ(s)] = δ[Z_L(s)] + δ[Z̃(s)]

which implies that a minimal reactive element synthesis of Ẑ(s) follows from a series connection of minimal reactive element syntheses of Z_L(s) and Z̃(s). Since a minimal reactive element synthesis of Z_L(s) always exists, as remarked above, it follows therefore that the problem of giving a minimal reactive synthesis for Ẑ(s) is equivalent to that of giving one for Z̃(s), also with a minimal number of reactive elements. Note also that Z_L(s) + Z_L'(-s) = 0 for all s, and therefore Ẑ(s) + Ẑ'(-s) = Z̃(s) + Z̃'(-s) and has normal rank equal to the normal rank of Z(s) + Z'(-s). Finally, symmetry of Ẑ(s) is readily found to guarantee symmetry of Z_L(s) and Z̃(s).

Therefore, assuming synthesis of lossless Z_L(s) is possible, we conclude that the problem of synthesizing an arbitrary positive real Z(s) may even be reduced to the problem of synthesizing another positive real Z̃(s), with Z̃(s) satisfying conditions 1 to 5 listed earlier [with Ẑ(s) replaced by Z̃(s)], and also

6. No element of Z̃(s) possesses a purely imaginary pole.
Simplification for the Scattering Matrix Synthesis
Problem
In this subsection we shall use the sort of lossless extractions just considered to show how the problem of synthesizing an arbitrary prescribed bounded real S(s) may be reduced to one of synthesizing another bounded real Ŝ(s) such that

1. I - Ŝ'(∞)Ŝ(∞) is nonsingular.
2. A reciprocal synthesis of S(s) follows from a reciprocal synthesis of Ŝ(s).
3. A synthesis of S(s) with the minimum possible number of resistors follows from a synthesis of Ŝ(s) with a minimum possible number of resistors.
4. A synthesis of S(s) with the minimum possible number of reactive elements follows from one of Ŝ(s) with a minimum possible number of reactive elements.

Since the necessary and sufficient condition for a network to be reciprocal is that its scattering matrix be symmetric, and since the minimum possible number of resistors necessary to give a synthesis of a bounded real S(s) is given by normal rank [I - S'(-s)S(s)] (which, as we shall see, is actually achievable in practice), it is clear that 2 and 3 are in fact equivalent to

2a. Ŝ(s) is symmetric if S(s) is symmetric.
3a. Normal rank [I - Ŝ'(-s)Ŝ(s)] = normal rank [I - S'(-s)S(s)].
Similarly, we note that the same sort of remarks as those made in the immittance-matrix case may be made; i.e., although the sequence of operations to be specified is capable of fulfilling any of the conditions listed above, one or more of them may become a pointless objective in a particular case. Below, three classes of operations will be introduced. Then we shall be able to note quickly how the listed restrictions 1-4 may be obtained using these three classes in conjunction with the other four considered previously for the immittance matrices.

The three classes of operations are

5. Reduction of the problem of synthesizing S(s) with S(∞) nonsymmetric to the problem of synthesizing S_1(s) with S_1(∞) symmetric.
6. Reduction of the problem of synthesizing S(s) with I - S(s) singular throughout the s plane to the problem of synthesizing a matrix S_1(s) of smaller dimensions with I - S_1(s) nonsingular almost everywhere.
7. Reduction of the problem of synthesizing S(s) with I - S(s) nonsingular almost everywhere to the problem of synthesizing a positive real impedance Z(s), and the reverse process of reduction of the problem of synthesizing a positive real immittance Z(s) to the problem of synthesizing a bounded real S(s).
5. S(m) Nonsymmetric. The synthesis problem may be modified by using
the following result:
Theorem 8.4.5. Let S(s) be an m × m rational bounded real matrix with minimal realization {F, G, H, J}. Suppose that J is nonsymmetric. Then there exists an m × m real orthogonal matrix U and an m × m bounded real matrix S_1(s) with J_1 = S_1(∞) symmetric, such that

S(s) = US_1(s)

The matrix S_1(s) has a minimal realization {F_1, G_1, H_1, J_1}, where

F_1 = F    G_1 = G    H_1 = HU    J_1 = U'J        (8.4.7)

Moreover, δ[S(s)] = δ[S_1(s)], and normal rank [I − S'(−s)S(s)] = normal rank [I − S_1'(−s)S_1(s)].

The proof of this theorem (see Problem 8.4.5) is straightforward and simply depends on a well-established result that for a given real constant matrix J, there exist a real orthogonal U and a real constant symmetric matrix J_1 such that J = UJ_1.
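The factorization J = UJ_1 is just a polar decomposition and may be computed, for instance, from a singular value decomposition. The following sketch (in Python, with names of our own choosing, and not part of the original development) produces an orthogonal U and a symmetric, indeed nonnegative definite, J_1 with J = UJ_1:

    import numpy as np

    def orthogonal_symmetric_split(J):
        # Polar decomposition via the SVD: J = (P Q') (Q diag(s) Q'),
        # with U = P Q' orthogonal and J1 = Q diag(s) Q' symmetric.
        P, s, Qt = np.linalg.svd(J)
        U = P @ Qt
        J1 = Qt.T @ np.diag(s) @ Qt
        return U, J1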
The significance of the above result for synthesis is that a synthesis of S(s) follows from syntheses of U and S_1(s) via a coupling network N_c of m uncoupled gyrators, as shown in Fig. 8.4.6. The scattering matrix U, being a real orthogonal matrix, is bounded real and easily synthesizable using transformer-coupled gyrators, open circuits, and short circuits. Full details justifying the scheme of Fig. 8.4.6 and the synthesis of U are requested in Problems 8.4.6 and 8.4.7.
FIGURE 8.4.6. Connection for Scattering-Matrix Multiplication.
6. Singular I − S(s). Suppose that S(s) is bounded real and I − S(s) singular throughout the s plane. Then the synthesis simplification depends on the following theorem:

Theorem 8.4.6. Let S(s) be an m × m bounded real matrix with minimal realization {F, G, H, J}. Suppose that I − S(s) is singular throughout the s plane and has rank m' < m. Then there exist a real orthogonal matrix T and an m' × m' bounded real matrix S_1(s) with I − S_1(s) nonsingular almost everywhere, such that

S(s) = T[S_1(s) ⊕ I_{m−m'}]T'
The matrix S_1(s) has a minimal realization {F_1, G_1, H_1, J_1}, where
Furthermore, δ[S(s)] = δ[S_1(s)], S_1(s) is symmetric if and only if S(s) is symmetric, S_1(∞) is symmetric if and only if S(∞) is symmetric, and normal rank [I − S'(−s)S(s)] = normal rank [I − S_1'(−s)S_1(s)].
The result of the theorem is actually a standard one of network theory and is established in [1] by frequency-domain arguments. The theorem may also be proved using the arguments used to establish Theorem 8.4.1 (see Problem 8.4.1).
The significance of this theorem for synthesis lies in the fact that if a synthesis of S_1(s) is on hand, a synthesis of S(s) follows by appropriate use of an orthogonal transformer of turns-ratio matrix T. Now a scattering matrix I_{m−m'} represents m − m' open circuits, so one may take a transformer with the turns-ratio matrix resulting from T with its last m − m' columns deleted, and terminate its m' primary ports in a network N_1 synthesizing S_1(s), to yield a synthesis of S(s) at the unterminated ports (see Fig. 8.4.7).
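Pointwise, the recombination asserted by the theorem (in the form S(s) = T[S_1(s) ⊕ I_{m−m'}]T' reconstructed above) is a simple computation; the following sketch, with illustrative names only, forms a sample value of S from a sample value of S_1:

    import numpy as np

    def recombine_scattering(T, S1_val, m_prime):
        # Form S = T [S1 (+) I_{m-m'}] T' at a single frequency point.
        m = T.shape[0]
        block = np.zeros((m, m), dtype=complex)
        block[:m_prime, :m_prime] = S1_val
        block[m_prime:, m_prime:] = np.eye(m - m_prime)
        return T @ block @ T.T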
7. Conversion of S(s) to Z(s) and Conversely. Suppose that S(s) is bounded real, with I − S(s) nonsingular almost everywhere. Then the problem of synthesizing S(s) can be modified to an equivalent problem of synthesizing a positive real impedance matrix Z(s), as contained in the following result:

Theorem 8.4.7. Let S(s) be an m × m bounded real matrix, with I − S(s) nonsingular almost everywhere. Then

Z(s) = [I − S(s)]^{-1}[I + S(s)]

is positive real. Conversely, if Z(s) is positive real, then

S(s) = [Z(s) − I][Z(s) + I]^{-1}

always exists and is bounded real. Moreover, δ[S(s)] = δ[Z(s)], Z(s) is symmetric if and only if S(s) is symmetric, Z(∞) is symmetric if and only if S(∞) is symmetric, and normal rank [I − S'(−s)S(s)] = normal rank [Z(s) + Z'(−s)].
The results of the theorem have all been demonstrated earlier.
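At a single frequency point the two maps of Theorem 8.4.7 (in the sign convention adopted in the reconstruction above) are immediate to compute and to check against one another; the following sketch, with illustrative names only, does so:

    import numpy as np

    def z_from_s(S):
        # Z = (I - S)^{-1}(I + S); requires I - S nonsingular at this point.
        m = S.shape[0]
        return np.linalg.solve(np.eye(m) - S, np.eye(m) + S)

    def s_from_z(Z):
        # Inverse map: S = (Z - I)(Z + I)^{-1}.
        m = Z.shape[0]
        return (Z - np.eye(m)) @ np.linalg.inv(Z + np.eye(m))

One map undoes the other: s_from_z(z_from_s(S)) returns S whenever I − S is nonsingular.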
The main application of this theorem is as follows. As illustrated in the immittance case, the operations arising in the preliminary synthesis steps are
likely in some cases to involve extractions of transformer-coupled inductors
and transformer-coupled capacitors (or even transformer-coupled tuned
circuits if desired). These operations are simpler by far to define in terms of
immittance matrices.
Observe that operations 5 and 6 consist of mathematical operations on the scattering matrices which are associated with lossless element extractions involving transformers and gyrators only. However, through operation 7, coupled with those operations considered in the immittance-matrix case, all classes of lossless elements can be involved in simplifications to the scattering matrix synthesis problem.
We shall now observe the result of combining all the operations considered in this section. Figure 8.4.8, together with Fig. 8.4.5, is to be used for this purpose, and it shows in flow-diagram form the sequence in which the operations are performed. Notice that there is still only one major loop in the sequence. By Theorems 8.4.3 and 8.4.7, each time this loop is traversed, degree reduction in the residual matrix to be synthesized occurs and, as explained previously, the sequence of operations has to eventually end. Now at the end of the looping (if looping is necessary) the immittance finally obtained, call it Z̄(s), will have Z̄(∞) finite and nonsingular, and Z̄(∞) = Z̄'(∞). Therefore, the corresponding bounded real matrix, S̄(s) say, will have S̄(∞) = S̄'(∞), I − S̄(∞) and I + S̄(∞) nonsingular, and thus I − S̄'(∞)S̄(∞) nonsingular. Notice then (using Theorems 8.4.1 through 8.4.7 and the line of argument as applied to the immittance-matrix case) that S̄(s) will satisfy conditions 2 through 4 on page 358.
Application to Lossless Synthesis

Before we conclude the section we wish to make the following point. The preliminary simplification procedures that have just been described, though basically intended to help in the synthesis of an arbitrary positive real or bounded real matrix, provide a technique for the complete synthesis of a lossless positive real or bounded real matrix. To see why, consider in more detail the case of a lossless positive real matrix. At no stage in the application of the preliminary procedures can a positive real matrix arise that is not lossless, and therefore at no stage will a matrix Z̄(s) be arrived at for which Z̄(∞) is symmetric and nonsingular. But as reference to Fig. 8.4.5 shows, the preliminary extraction procedure will not terminate unless Z̄(∞) is symmetric and nonsingular, unless of course Z̄(s) = 0, i.e., unless a complete synthesis is achieved. In the case of a lossless immittance, the resulting synthesis resembles that obtained via the first Cauer synthesis (see [1] for a detailed discussion).
Example 8.4.2. Consider the lossless positive real impedance matrix
FIGURE 8.4.8. Flow Diagram Depicting Preliminary Lossless Extractions for the Scattering Matrix Problem.
First we separate out those elements of Z(s) that possess a pole at infinity (operation 3). Thus

The first term is readily synthesized (merely an inductor for port 1 and a short circuit for port 2). Next, the remaining positive real impedance Z_1(s), with Z_1(∞) < ∞, is examined to see if Z_1(∞) = Z_1'(∞). The answer is no, so we apply operation 4 and write

for which a gyrator may be extracted synthesizing the second term. Now the remaining positive real matrix Z_2(s) is nonsingular almost everywhere, but Z_2(∞) is singular. We then invert Z_2(s) (operation 2) to give a positive real admittance matrix

which evidently has elements with a pole at ∞, and these may be synthesized (operation 3) using capacitors terminating a multiport transformer. Since Y_2(s) consists of only elements with a pole at ∞, the remaining admittance is simply a zero matrix; hence the synthesis of Z(s) is completed and is depicted in Fig. 8.4.9.
Later we shall be considering state-space syntheses of lossless positive real and lossless bounded real matrices that are simple to carry out and are able to avoid the matrix inversion operations contained in the preceding procedures.
Problem 8.4.1. Prove Theorem 8.4.1. First write down the positive real lemma equations. Show that if w is a constant vector in the nullspace of Z(s), then Gw = 0 and Jw = 0. Conclude that Hw = 0. Then take T as a matrix whose last m' columns span the nullspace of Z(s). In like manner, prove Theorem 8.4.6.
FIGURE 8.4.9. A Synthesis of Z(s).
Problem 8.4.2. Prove that part of the statement of Theorem 8.4.2 which claims that [Z(s)]^{-1} is positive real if Z(s) is positive real.

Problem 8.4.3. Prove Theorem 8.4.4.
Problem 8.4.4. Consider a positive real impedance matrix

Show how to reduce Z(s) to another positive real matrix Z̄(s) such that Z̄(∞) is nonsingular. Then verify that the normal-rank property and the degree property as stated in the text hold for the Z(s) above.
Problem 8.4.5. Prove Theorem 8.4.5.
Problem 8.4.6. Consider the scheme of Fig. 8.4.10, which depicts an interconnection of m gyrators with an m-port network N_2 of scattering matrix S_2. Show that the resultant 2m-port has scattering matrix

using the fact that the scattering matrix of the unit gyrator is
FIGURE 8.4.10. Building Block for the Network of Figure 8.4.11.
Conclude that the scattering matrix of the scheme of Fig. 8.4.11 is S_2S_1, where N_1 has scattering matrix S_1. (The cascade-load formula may be helpful here.)
Problem 8.4.7. In this problem you will need to recall the fact that an arbitrary orthogonal matrix U can be written as

U = T'VT

where T is orthogonal and V is a direct sum of blocks of the form [1], [−1], and

[ cos θ   −sin θ ]
[ sin θ    cos θ ]        θ ≠ 0

Show that the scattering matrix of a gyrator with gyration resistance y ohms is
and then show that a scattering matrix that is a real constant orthogonal matrix can be synthesized by appropriately terminating a transformer in gyrators.

Problem 8.4.8. Synthesize the lossless bounded real matrix
REFERENCES

[1] R. W. NEWCOMB, Linear Multiport Synthesis, McGraw-Hill, New York, 1966.
[2] D. C. YOULA and P. TISSI, "N-Port Synthesis via Reactance Extraction, Part I," IEEE International Convention Record, New York, 1966, Pt. 7, pp. 183-205.
[3] B. D. O. ANDERSON and R. W. NEWCOMB, "Impedance Synthesis via State-Space Techniques," Proceedings of the IEE, Vol. 115, No. 7, July 1968, pp. 928-936.
[4] S. VONGPANITLERD, "Passive Reciprocal Network Synthesis: The State-Space Approach," Ph.D. Dissertation, University of Newcastle, Australia, May 1970.
[5] E. A. GUILLEMIN, Synthesis of Passive Networks, Wiley, New York, 1957.
[6] F. R. GANTMACHER, The Theory of Matrices, Chelsea, New York, 1959.
Impedance Synthesis

9.1 INTRODUCTION
Earlier we formulated two separate approaches to the problem of synthesizing a positive real m × m impedance matrix Z(s); one is based on the reactance-extraction concept, the other on the resistance-extraction concept. We shall devote most of this chapter to giving synthesis procedures using both approaches yielding nonreciprocal networks, in which gyrators are included as circuit elements.
Although synthesis procedures that yield a nonreciprocal network with a prescribed nonsymmetric positive real impedance matrix can naturally be applied to obtain a synthesis of a symmetric positive real impedance matrix (since the latter can be regarded as a special case of the former), one objection is that gyrators generally arise if a synthesis method for a nonsymmetric Z(s) is applied to a symmetric Z(s), despite the fact that reciprocal (gyratorless) networks synthesizing a prescribed symmetric positive real Z(s) can (and will) be shown to exist. The nonreciprocal synthesis procedures are therefore often unsuitable for the synthesis of a symmetric positive real Z(s), and other methods are desired which always lead to a reciprocal synthesis. Section 9.4 and much of Chapter 10 will provide some reciprocal syntheses of symmetric positive real impedance matrices.
We now ask the reader to note the following points concerning the material in this chapter. First, we shall assume throughout that Z(s) is such that Z(∞) < ∞. As noted earlier, this restriction can always be satisfied with no
loss of generality; for if a prescribed Z_1(s) does not possess this property, one can always remove the pole at infinity by a series extraction of transformer-coupled inductors, and the problem of synthesizing the original Z_1(s) is then equivalent to the problem of synthesizing the remaining positive real Z(s) satisfying the restriction Z(∞) < ∞.
Second, recall that the formulations of various approaches to synthesis were posed in terms of a state-space realization {F, G, H, J} of Z(s), sometimes with no specific reference to the size of the associated state vector. The synthesis procedures to be considered will, however, be stated in terms of a particular minimal realization of Z(s). The reason becomes apparent if one observes that the algebraic characterization of the positive real property of Z(s) in state-space terms, as contained in the positive real lemma, applies only to minimal realizations of Z(s). That is, the existence is guaranteed of a positive definite symmetric P and real matrices L and W_0 that satisfy the positive real lemma equations only if {F, G, H, J} constitutes a minimal realization of the positive real Z(s). Since the synthesis of Z(s) in state-space terms depends on the algebraic characterization of its positive real property, the above restriction naturally follows. This in one sense provides an advantage over some classical synthesis methods, in that a synthesis of Z(s) is always assured that uses the minimum number of reactive elements, this number being precisely δ[Z], the size of the F matrix in a minimal realization.
Third, we shall even assume knowledge of a minimal realization {F, G, H, J} of Z(s) such that

F + F' = −LL'
G = H − LW_0                        (9.1.1)
J + J' = W_0'W_0

for some real matrices L and W_0. The existence of the minimal realization {F, G, H, J} and matrices L and W_0 satisfying (9.1.1) is guaranteed by the following theorem, a development of the positive real lemma:
Theorem 9.1.1. Let Z(s) be a rational positive real matrix with Z(∞) < ∞. Let {F_0, G_0, H_0, J} be a minimal realization and {P, L_0, W_0} be a triple that satisfies the positive real lemma equations for the above minimal realization. With T any matrix ranging over the set of matrices for which

P = T'T                             (9.1.2)

then the minimal realization {F, G, H, J}, with F = TF_0T^{-1}, G = TG_0, and H = (T^{-1})'H_0, satisfies Eq. (9.1.1), where L = (T^{-1})'L_0.
The proof of the above result follows immediately on invoking the positive real lemma equations and on performing a few simple calculations. Note that F, G, H, L, and W_0 are certainly not unique: if {F_0, G_0, H_0, J} is an arbitrary minimal realization of Z(s), we know from Chapter 6* that there exists an infinity of matrices P satisfying the positive real lemma equations, and for any one P an infinity of L and W_0 satisfying the same equations. Further, for any one P there exists an infinity of matrices T satisfying (9.1.2). So the nonuniqueness arises on several counts. Even the dimensions of L and W_0 are not unique, for, as noted earlier, the number of rows of L_0' and W_0 (and thus of L' and W_0) is bounded below by ρ = normal rank [Z(s) + Z'(−s)], but is otherwise arbitrary (see also Problem 9.1.1).
In the following synthesis procedures, except for the lossless synthesis, we shall take (9.1.1) as the starting point. In the case of the synthesis of a lossless impedance, we know from previous considerations that the matrices L and W_0 are zero, and (9.1.1) therefore is replaced by

F + F' = 0
G = H                               (9.1.3)
J + J' = 0

As we know from our discussions on the computation of solutions to the positive real lemma equations, a minimal realization of a lossless positive real Z(s) satisfying (9.1.3) is much easier to derive from an arbitrary minimal realization than is a minimal realization of a nonlossless or lossy positive real Z(s) satisfying (9.1.1).
An outline of this chapter is as follows.
In Section 9.2 we consider the synthesis of nonreciprocal networks using the reactance-extraction approach. The procedure includes as a special case the synthesis of lossless nonreciprocal networks. Since the latter can be achieved with much less computational effort than the synthesis of lossy nonreciprocal networks, and since lossless networks play a major role in network theory and applications, the lossless synthesis is therefore considered in a separate subsection. The material in Section 9.2 is drawn from [1] and [2].
In Section 9.3 a synthesis procedure for nonreciprocal networks using the resistance-extraction approach is discussed. This material is drawn from [3]. Finally, Section 9.4 is devoted to considering a straightforward synthesis procedure for a prescribed symmetric positive real Z(s) that results in a reciprocal network. The material of this section is drawn from [4] and [5].
*This chapter may have been omitted at a first reading.
Problem 9.1.1. Show that if F, G, H, J are such that (9.1.1) holds for some pair of matrices L and W_0, then it holds for L and W_0 replaced by LV and V'W_0 for any V, not necessarily square, with VV' = I. Show also that the minimum number of rows in L' and W_0 is ρ = normal rank [Z(s) + Z'(−s)], and that, given the existence of L' and W_0 with ρ rows, there exist other L' and W_0 with any number of rows greater than ρ. (Existence of L' and W_0 with ρ rows is established in Chapters 5 and 6.)
9.2 REACTANCE-EXTRACTION SYNTHESIS

We recall that the problem of finding a passive structure synthesizing a prescribed m × m positive real Z(s) via reactance extraction essentially rests on finding a realization {F, G, H, J} for Z(s), such that the real constant (m + n) × (m + n) matrix

M = [ J   −H' ]                     (9.2.1)
    [ G   −F  ]

has its symmetric part nonnegative definite:

M + M' ≥ 0                          (9.2.2)

In other words, M is positive real. With n denoting the dimension of F, the constant positive real matrix M is the impedance of a nondynamic (m + n)-port coupling network N_c. On terminating N_c at its last n ports in 1-H inductors, a network N synthesizing the prescribed Z(s) is obtained.
As is guaranteed by Theorem 9.1.1, we can find a minimal realization {F, G, H, J} of Z(s) and real matrices L and W_0, such that

F + F' = −LL'
G = H − LW_0                        (9.2.3)
J + J' = W_0'W_0

The dimensions of the various matrices are automatically fixed except for the number of rows of both L' and W_0. These are arbitrary, save that the number must be at least ρ, the normal rank of [Z(s) + Z'(−s)]. As we have already noted, there are infinitely many minimal realizations {F, G, H, J} which, with some L and W_0, satisfy (9.2.3). We shall show that any one of these provides a synthesizable M, i.e., a matrix M satisfying (9.2.2).
By direct calculation, we have
M + M' = [ J + J'    (G − H)' ]
         [ G − H     −F − F'  ]

       = [ W_0'W_0    −W_0'L' ]
         [ −LW_0      LL'     ]                  (9.2.4)

       = [ W_0' ] [ W_0   −L' ]
         [ −L   ]
The first equality follows from the definition of M, the second from (9.2.3), and the last from an obvious factorization. Thus, we conclude that M [where {F, G, H, J} is any one minimal realization of Z(s) satisfying (9.2.3)] has the property that M is positive real, or equivalently that the symmetric part of M is nonnegative definite:

M + M' ≥ 0                          (9.2.5)

We have claimed earlier that if M satisfies this constraint, it is readily synthesizable. We shall now justify this claim. Note that a synthesis of Z(s) follows immediately once a synthesis of M is known.
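Both the construction of M and the termination property are easy to check numerically; the following sketch (names illustrative; 1-H inductor terminations are represented by the impedance sI) verifies that M of (9.2.1) is positive real and recovers Z(s) on termination:

    import numpy as np

    def coupling_impedance(F, G, H, J):
        # M = [[J, -H'], [G, -F]]; its symmetric part is nonnegative
        # definite whenever (9.2.3) holds.
        M = np.block([[J, -H.T], [G, -F]])
        assert np.linalg.eigvalsh(M + M.T).min() > -1e-9
        return M

    def terminate_in_inductors(M, m, s):
        # Impedance at the first m ports when the last n ports of the
        # nondynamic network M are closed on unit inductors (impedance sI).
        M11, M12, M21, M22 = M[:m, :m], M[:m, m:], M[m:, :m], M[m:, m:]
        n = M22.shape[0]
        return M11 - M12 @ np.linalg.solve(s * np.eye(n) + M22, M21)

For any s that is not a pole, terminate_in_inductors(coupling_impedance(F, G, H, J), m, s) agrees with J + H'(sI − F)^{-1}G.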
Synthesis of a Constant PR Impedance Matrix M
To realize a constant positive real impedance M, we make use of the simple identity

M = M_sy + M_ss                     (9.2.6)

where M_sy denotes the symmetric part of M, i.e., ½(M + M'), and M_ss denotes the skew-symmetric part of M, i.e., ½(M − M'); then we use the immediately verifiable fact that M_sy and M_ss are both positive real impedances. As may also be easily checked, M_ss is even lossless. A synthesis of M is then obtained by a series connection of a synthesis N_1 of M_sy and a synthesis N_2 of M_ss, both with m + n ports, as illustrated in Fig. 9.2.1.
A synthesis of the skew impedance matrix M_ss is readily obtained, as seen earlier, on noting that M_ss may be expressed as

M_ss = T'(E ⊕ E ⊕ ⋯ ⊕ E)T           (9.2.7)

where

E = [ 0    1 ]
    [ −1   0 ]

and there are g summands in (9.2.7) with 2g = rank M_ss.
FIGURE 9.2.1. Series Connection of Impedances ½(M + M') and ½(M − M').
Thus M_ss may be synthesized by a multiport transformer of turns-ratio matrix T, terminated by g unit gyrators. Note that the synthesis of M_ss does not use any resistors, as predicted by the lossless (or skew) character of M_ss. To synthesize M_sy, we note from (9.2.4) that

M_sy = {(1/√2)[W_0  −L']}'{(1/√2)[W_0  −L']}

where r is the number of rows of L' and W_0. This equation implies that M_sy may be synthesized by terminating a multiport transformer of turns-ratio matrix (1/√2)[W_0  −L'] in r unit resistors. Since the synthesis of M_ss does not use any resistors, it follows that the total number of resistors used in synthesizing N is equal to the number of rows r of the matrices L' and W_0 in (9.2.3).
We now wish to note a number of interesting properties of the synthesis procedure given above.
1. Since the number of reactive elements, in this case inductors, needed in the synthesis of Z(s) is n, the dimension of F, and since F is part of a minimal state-space realization of Z(s), the number of reactive elements used is δ[Z]. We conclude that the synthesis of Z(s) achieved by this method always uses a minimal number of reactive elements.
2. As the total number of resistors required in the synthesis of Z(s) is the same as the number of rows of the matrices L' and W_0 in (9.2.3), it follows that the smallest number of resistors that can synthesize Z(s) via the above method is equal to the lower bound on the number of rows of L' and W_0, and this number is, as we have noted, ρ = normal rank [Z(s) + Z'(−s)]. As we have also noted in Chapter 8, no synthesis of Z(s) is possible using fewer than ρ resistors. Hence we can always achieve a synthesis of Z(s) using the minimum number ρ of resistors and simultaneously the minimum number of reactive elements, since it is always possible to compute matrices L' and W_0 with ρ rows that satisfy (9.2.3), as shown in Chapters 5 and 6.*
3. The synthesis procedure described requires only that Z(s) be positive real; hence it always yields a passive synthesis for any positive real Z(s), either symmetric or nonsymmetric. However, gyrators are almost always required even when Z(s) is symmetric, and so the method is generally unsuitable for synthesizing a symmetric positive real Z(s) when a synthesis is required that uses no gyrators.
4. Problem 9.2.4 requests a proof of the easily established fact that every (nonreciprocal) minimal reactive synthesis defines a particular minimal realization {F, G, H, J} of Z(s) together with real matrices L and W_0 that satisfy (9.2.3). Also, as we have just proved, each minimal realization {F, G, H, J} with associated real matrices L and W_0, such that (9.2.3) holds, yields a minimal reactive synthesis. It follows that all possible minimal reactive element syntheses of Z(s) are obtained, i.e., the minimal reactive element equivalence problem for nonreciprocal networks is solved, if all minimal realizations can be found that satisfy (9.2.3) together with real matrices L and W_0. How may this be done? Suppose that {F_1, G_1, H_1, J} is any one minimal realization of Z(s). From Theorem 9.1.1, it is clear that from each solution triple P, L_0, and W_0 of the positive real lemma equations associated with {F_1, G_1, H_1, J}, there arises a set of minimal realizations satisfying (9.2.3) for some real matrices L and W_0. Further, we know from Chapter 6* how to compute essentially all solution triples P, L_0, and W_0 for an arbitrary initial minimal realization {F_1, G_1, H_1, J}. It follows that we can derive all possible minimal realizations satisfying (9.2.3) from a particular {F_1, G_1, H_1, J} by obtaining all matrices P, L_0, and W_0 and then all T such that T'T = P. This is also the set of all possible minimal realizations with the properties given in (9.2.3) that may be derived from any other initial minimal realization, and so from all arbitrary minimal realizations. To see this, suppose that {F_1, G_1, H_1, J} and {F_2, G_2, H_2, J} are two minimal realizations of the same Z(s), with T̃ a nonsingular matrix such that T̃F_1T̃^{-1} = F_2, etc.; if P_1, L_0, and W_0 constitute a solution of the positive real lemma corresponding to {F_1, G_1, H_1, J}, then P_2 = (T̃^{-1})'P_1T̃^{-1}, L_2 = (T̃^{-1})'L_0, and W_0 constitute a solution corresponding to {F_2, G_2, H_2, J}. The factorization P_1 = T'T leads to the same {F, G, H, J} from {F_1, G_1, H_1, J} as does the factorization P_2 = (TT̃^{-1})'(TT̃^{-1}) from {F_2, G_2, H_2, J}.

*This material may have been omitted at a first reading.
Lossless-Impedance Synthesis

Suppose that we are given a lossless m × m impedance Z(s). Recall that

Z(s) + Z'(−s) = 0

By removing the pole at infinity of all elements of Z(s), if any, we may consider a lossless Z(s) with Z(∞) < ∞. Note, however, and this is important, that the simplification procedure involving lossless extraction considered in Chapter 8 should not be applied, save for making Z(∞) < ∞, since this would eventually completely synthesize a lossless Z(s).
Let {F, G, H, J} be a minimal state-space realization of Z(s) with F an n × n matrix, easily derivable from an arbitrary minimal realization of Z(s), such that

F + F' = 0
G = H                               (9.2.10)
J + J' = 0

Of course, the lossless positive real lemma guarantees that such a quadruple {F, G, H, J} always exists. Now the matrix

M = [ J   −H' ]
    [ G   −F  ]

is obviously skew, because of (9.2.10). As shown earlier, the real constant matrix M is the impedance, lossless in this case, of an (m + n)-port lossless nondynamic network N_c synthesizable with transformer-coupled gyrators. Termination of the last n ports of N_c with uncoupled 1-H inductors yields a lossless synthesis of Z(s) that uses a minimum number n of reactive elements.
Once any minimal realization of a lossless Z(s) has been found, synthesis is easy. We have described in Chapter 6* the straightforward calculations required to generate, from an arbitrary minimal realization, a minimal realization satisfying (9.2.10). As we have just seen, synthesis, given (9.2.10), is easy.
*This chapter may have been omitted at a first reading.

Classical synthesis procedures for lossless impedances, though not computationally difficult, may not be quite so straightforward as the state-space
procedure. For example, there is a partial-fraction synthesis that generalizes the famous Foster reactance-function synthesis [6]. The method rests on the decomposition of Z(s), as a consequence of its lossless positive real property, in the form

Z(s) = J + sL + s^{-1}C + Σ_i (sA_i + B_i)/(s² + ω_i²)        (9.2.11)

where each term is individually lossless positive real; the matrices L, C, and A_i are real, constant, symmetric, and nonnegative definite, while J and B_i are real, constant, and skew symmetric. A synthesis of Z(s) follows by series connection of syntheses of the individual summands of (9.2.11). The terms J, sL, and s^{-1}C are easily synthesized, as is sA_i/(s² + ω_i²) if B_i = 0. [Note: If Z(s) = Z'(s), B_i = 0 for all i.] However, a synthesis of the terms (sA_i + B_i)/(s² + ω_i²) is not quite so easily computed. It could be argued that the state-space synthesis procedure is computationally simpler, but it has the disadvantage that, in the form we have stated it, gyrators will always appear in the synthesis, even for symmetric Z(s).
We now summarize in Table 9.2.1 the steps in the synthesis of an arbitrary lossless impedance and an arbitrary positive real impedance using the reactance-extraction approach. Illustrative examples follow.
Table 9.2.1  SUMMARY OF SYNTHESIS PROCEDURE

Step 1. Extract the pole at infinity of all elements of Z(s), to write Z(s) as sL + Z_0(s), with Z_0(∞) < ∞.

Step 2. Synthesize sL via transformer-coupled inductors, so that a synthesis of Z(s) will follow by series connection with a synthesis of Z_0(s) (also, if desired, carry out the other preliminary extraction procedures for the lossy-impedance case).

Step 3. Find a minimal realization of the impedance to be synthesized.

Step 4. Solve the positive real lemma equations.

Step 5. Find (lossless-impedance case) a minimal realization {F, G, H, J} for Z(s) such that (9.2.10) holds; (lossy-impedance case) a minimal realization {F, G, H, J} for Z(s) together with real matrices L and W_0 such that (9.2.3) holds.

Step 6. Form the impedance matrix M.

Step 7. Synthesize (lossless-impedance case) M as a lossless nondynamic network N_c consisting of transformer-coupled gyrators, and terminate the last δ[Z] ports in unit inductors to yield N; (lossy-impedance case) M as a nondynamic network N_c comprised of a series connection of a transformer-resistor network of impedance matrix ½(M + M') and a transformer-gyrator network of impedance matrix ½(M − M'), and terminate the last δ[Z] ports in unit inductors to yield N.
Example 9.2.1. Consider the positive real impedance

A minimal realization of Z(s) is

We then calculate a triple P, L, and W_0 satisfying the positive real lemma equations:

Note that the matrices L' and W_0 have only one row, and this is obviously also the normal rank of Z(s) + Z'(−s). The positive definite symmetric P has a factorization P = T'T. Therefore, we obtain

and a minimal realization {F, G, H, J} of Z(s) satisfying (9.2.3) is

The network N_c then has a positive real impedance matrix
Next, form the impedance matrices ½(M + M') and ½(M − M').

FIGURE 9.2.2. Synthesis of Nondynamic Coupling Network N_c for Example 9.2.1.

Figure 9.2.2 shows a synthesis for the nondynamic network N_c by series connection of networks of impedance ½(M + M') and ½(M − M'). Termination of the last two ports in unit inductors yields a complete synthesis of the prescribed positive real Z(s), shown in Figure 9.2.3.
FIGURE 9.2.3. Complete Synthesis of Z(s) of Example 9.2.1.
Example 9.2.2. Consider the positive real matrix (actually symmetric)

A minimal realization of Z(s) is given by

A solution triple P, L, and W_0 of the positive real lemma equations is

(This triple may be easily found by inspection.) The minimal realization above satisfies (9.2.3) since P is the identity matrix, and no coordinate-basis change is therefore necessary. An impedance M for the nondynamic coupling network N_c is then given by

Then we have the symmetric part and the skew-symmetric part of M as follows:

The network N_c of impedance M is thus obtained by a series connection of networks of impedances ½(M + M') and ½(M − M'), as shown in Fig. 9.2.4. A complete network N for the prescribed Z(s) is found by terminating the last port of N_c in a 1-H inductance, as illustrated in Fig. 9.2.5.
From the examples above we observe that both syntheses use simultaneously the minimum number of reactances and resistances, as expected, at the cost, however, of the inclusion of a gyrator in the syntheses. (We know that both impedances, being symmetric, may be synthesized without the use of gyrators.)
We shall give yet another synthesis for the second symmetric positive real Z(s) that arises from a situation in which the number of rows of the matrices L' and W_0 in (9.2.3) exceeds the minimal number, i.e., normal rank [Z(s) + Z'(−s)]. As a consequence, the synthesis uses more than the minimum number of resistances. The example also illustrates the fact that by using more than the minimum number of resistances (while retaining the minimum number of reactances), it may be possible to achieve a reciprocal synthesis, i.e., one using no gyrators.
Example 9.2.3. Consider once again the symmetric positive real matrix in Example 9.2.2:

FIGURE 9.2.4. Synthesis of M of Example 9.2.2.
It can be easily verified that the quadruple {F, G, H, J} with

is a minimal realization of Z(s). Further, it is easy to check by direct calculation that F, G, H, and J above satisfy (9.2.3) with

FIGURE 9.2.5. A Final Synthesis of Z(s) for Example 9.2.2.
Notice that L' and W_0 now have three rows, while normal rank [Z(s) + Z'(−s)] is two. The nondynamic coupling network has a positive real impedance M of

Obviously M, as well as being positive real, is also symmetric, which implies that M can be synthesized by a transformer-coupled resistor network. We have then

and a synthesis N_c of M therefore requires three resistances (equal to the number of rows of L' and W_0). Terminating N_c in a 1-H inductor at the last port then yields a (reciprocal) synthesis of Z(s) using three resistances, but no gyrators, as shown in Fig. 9.2.6.

FIGURE 9.2.6. A Final Synthesis of Z(s) for Example 9.2.3.
Though the example illustrates a reciprocal synthesis for a prescribed positive real Z(s) that is symmetric, the theory considered in this section does not allow us to guarantee that when Z(s) is symmetric, M is too, permitting a reciprocal synthesis. Neither have we shown in the example how we managed to arrive at a symmetric M, and one might ask if the gyratorless synthesis obtained above resulted by pure chance. The answer is neither yes nor no. A reciprocal synthesis was obtained in Example 9.2.3 without modification of the present theory simply because the impedance is characteristic of an RL network, in the sense that only one type of reactive element (in this case, inductances) is sufficient to give a reciprocal synthesis; on the other hand, one must know how to seek a reciprocal synthesis, for a simple increase of the number of resistances in the synthesis will not necessarily result in a reciprocal network. Later we shall consider procedures that always lead to a reciprocal synthesis for a positive real impedance matrix that is symmetric.
Minimal Gyrator Synthesis

Until recently, it was an open question as to what the minimum number of gyrators is in any synthesis of a prescribed nonsymmetric m × m Z(s). A lower bound was known to be one half the rank of Z(s) − Z'(s) [6], while an upper bound for lossless Z(s) was known to be m − 1 (see [8] for a classical treatment and [2] for a straightforward state-space treatment). The conjectured lower bound was established for lossless synthesis in [9] and for lossy synthesis in [10]. A state-space treatment of the lossless case appears in [11]; see also Problem 9.2.7. A state-space treatment of the lossy case has not yet been achieved, but see Problem 9.2.6 for a restricted result.
Problem 9.2.1. Synthesize the lossless positive real impedance matrix

Z(s) = (1/(s³ + 4s)) [ 5s² + 4                −2s³ − 3s² − 4s − 12 ]
                     [ 2s³ − 3s² + 4s − 12     16s² + 40           ]
Problem 9.2.2. Synthesize the lossless bounded real matrix

[Hint: Convert S(s) into an impedance matrix Z(s) by using Z = (I − S)^{-1}(I + S).]
Problem 9.2.3. Synthesize the positive real impedance matrix

by (a) extracting the jω-axis pole initially, and (b) without initial extraction of the jω-axis pole. Compare the two syntheses obtained and give a discussion of the computational aspects.
Problem 9.2.4. Using reactance-extraction arguments, show that every (nonreciprocal) minimal reactive synthesis of a positive real impedance Z(s) defines a minimal realization {F, G, H, J} such that (9.2.3) holds for certain matrices L and W_0. (Assume that the nondynamic network resulting from a reactance extraction possesses an impedance matrix.)
Problem 9.2.5. (Constant hybrid matrix synthesis) Let M be a constant p × p hybrid matrix, with the top left p_1 × p_1 corner of M an impedance matrix and the bottom right p_2 × p_2 corner an admittance matrix. Here, p = p_1 + p_2. Show how to synthesize M.
Problem 9.2.6. (Minimal gyrator synthesis for constant matrices) Let M be as in Problem 9.2.5. Show that M can be synthesized with ½ rank (ΣM − M'Σ) gyrators, but no fewer, where Σ = I_{p_1} ⊕ (−I_{p_2}).
Problem 9.2.7. (Minimal gyrator lossless synthesis) Show that in the problem of synthesizing a prescribed lossless positive real Z(s) using simultaneously a minimum number of gyrators and reactive elements, it may be assumed without loss of generality that Z(∞) is nonsingular and that no element of Z(s) possesses a pole at s = 0. Form a minimal realization {F, G, H, J} of Z(s) with F + F' = 0 and G = H. Let V be an orthogonal matrix such that the top left r × r submatrix of V'(GJ^{-1}G' − F)V and the bottom right r × r submatrix of V'FV are zero, where 2r ≥ δ[Z]. (Existence of V is a consequence simply of the skewness of GJ^{-1}G' − F and F; see [11].) Use V to change the coordinate basis and apply the ideas of Problem 9.2.6 and of the discussion in Chapter 8 on reciprocal reactance extraction to derive a synthesis using both the minimum number of gyrators and the minimum number of reactive elements.
9.3 RESISTANCE-EXTRACTION SYNTHESIS

We recall that the problem of giving a nonreciprocal synthesis for an m × m positive real impedance Z(s) via resistance extraction is as follows. Find real constant matrices F_L, G_L, H_L, and J_L that define the state-space equations

ẋ = F_L x + G_L [u' u_1']'
[y' y_1']' = H_L' x + J_L [u' u_1']'        (9.3.1)

and are such that the following two conditions hold:
1. The matrices F_L, G_L, H_L, and J_L satisfy the lossless positive real lemma equations for some positive definite symmetric matrix P.
2. On setting

y_1 = −u_1                          (9.3.2)

(9.3.1) simplifies to

ẋ = Fx + Gu
y = H'x + Ju                        (9.3.3)

where the transfer-function matrix relating the Laplace transform of u(·) to that of y(·), which is J + H'(sI − F)^{-1}G, is the prescribed matrix Z(s).

The vector u is an m vector and u_1 an r vector, so that Eqs. (9.3.1) are state-space equations for a lossless (m + r)-port. Physically, u and y represent current and voltage, as is evident from the fact that (9.3.3) constitute state-space equations for the prescribed impedance, while u_1 and y_1 are such that if the ith entry of u_1 is a current, the ith entry of y_1 is a voltage, and vice versa. In this section, however, we shall take every entry of u_1 to be a current
and every entry of y_1 to be a voltage. Thus J_L + H_L'(sI − F_L)^{-1}G_L becomes an impedance matrix, as distinct from a hybrid matrix. If a lossless network N_L is found synthesizing this impedance, termination of the last r ports of N_L in unit resistances yields a synthesis of Z(s).
We shall impose two more restrictions, each with a physical meaning that will be explained. The first requires the quadruple {F, G, H, J} in (9.3.3) to be a minimal realization of Z(s). This means therefore that the size of the state vector x in (9.3.3), and hence (9.3.1), is to be exactly δ[Z]. Consequently, the lossless coupling network N_L may be synthesized with δ[Z] reactive elements, and, in turn, the network N synthesizing Z(s) may be obtained using δ[Z] reactive elements, the minimum number possible. The second restriction is that the size r of the vectors u_1 and y_1 is to be ρ = normal rank [Z(s) + Z'(−s)]. Since ρ is the minimum possible number of resistances in any synthesis of Z(s), while r is the actual number of resistances used in synthesizing Z(s), this restriction is therefore equivalent to demanding a synthesis of Z(s) that uses a minimum number of resistances. Actually, the latter restriction, in contrast to the first, is not essential for the method to work; as we shall see later, the dimension of u_1 and y_1 may be greater than ρ, corresponding to syntheses using a nonminimal number of resistances.
Proceeding now with the synthesis procedure, suppose that we have on hand a minimal realization {F, G, H, J} of Z(s), such that

F + F' = −LL'
G = H − LW_0                        (9.3.4)
J + J' = W_0'W_0

with L' and W_0 having ρ rows, where ρ = normal rank [Z(s) + Z'(−s)]. We claim that matrices F_L, G_L, H_L, and J_L with the properties stated are given by

F_L = ½(F − F')

G_L = H_L = [ ½(G + H)   −(1/√2)L ]        (9.3.5)

J_L = [ ½(J − J')     (1/√2)W_0' ]
      [ −(1/√2)W_0    0          ]

First we shall observe that {F_L, G_L, H_L, J_L} defined in (9.3.5) satisfies condition 1 listed earlier. By direct calculation, we have

F_L + F_L' = 0
G_L = H_L                           (9.3.6)
J_L + J_L' = 0
Clearly, (9.3.6) are exactly the lossless positive real lemma equations with P = I, the identity matrix.
Now we shall prove that the quadruple of (9.3.5) satisfies condition 2. Substituting (9.3.5) into (9.3.1) yields, on expansion, the following equations:
ẋ = ½(F − F')x + ½(G + H)u − (1/√2)Lu_1                  (9.3.7a)
y = ½(G + H)'x + ½(J − J')u + (1/√2)W_0'u_1              (9.3.7b)
y_1 = −(1/√2)L'x − (1/√2)W_0 u                           (9.3.7c)

Setting (9.3.2) in (9.3.7a) and (9.3.7b) to eliminate u_1, and then substituting for y_1 from (9.3.7c), leads to

ẋ = ½(F − F' − LL')x + ½(G + H − LW_0)u
y = ½(G + H + LW_0)'x + ½(J − J' + W_0'W_0)u

On using (9.3.4), these equations become

ẋ = Fx + Gu
y = H'x + Ju

and the impedance matrix relating the Laplace transform of u(·) to that of y(·) is clearly Z(s).
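The construction (9.3.5) and the elimination just performed can be checked numerically; the sketch below (illustrative names; the sign conventions are those of the reconstruction of (9.3.5) above) builds the lossless coupling quadruple and verifies condition 2 at a sample frequency:

    import numpy as np

    def lossless_coupling(F, G, H, J, L, W0):
        # (9.3.5), for {F, G, H, J}, L, W0 satisfying (9.3.4).
        r = W0.shape[0]
        FL = 0.5 * (F - F.T)
        GL = np.hstack([0.5 * (G + H), -L / np.sqrt(2.0)])
        HL = GL.copy()                      # (9.3.6): GL = HL
        JL = np.block([[0.5 * (J - J.T), W0.T / np.sqrt(2.0)],
                       [-W0 / np.sqrt(2.0), np.zeros((r, r))]])
        return FL, GL, HL, JL

    def check_condition_2(F, G, H, J, L, W0, s=1.7):
        # Terminate the last r ports of JL + HL'(sI - FL)^{-1} GL in unit
        # resistors and compare with Z(s) = J + H'(sI - F)^{-1} G.
        FL, GL, HL, JL = lossless_coupling(F, G, H, J, L, W0)
        n, m, r = F.shape[0], G.shape[1], W0.shape[0]
        ZL = JL + HL.T @ np.linalg.solve(s * np.eye(n) - FL, GL)
        Z11, Z12 = ZL[:m, :m], ZL[:m, m:]
        Z21, Z22 = ZL[m:, :m], ZL[m:, m:]
        Zterm = Z11 - Z12 @ np.linalg.solve(np.eye(r) + Z22, Z21)
        Z = J + H.T @ np.linalg.solve(s * np.eye(n) - F, G)
        return np.allclose(Zterm, Z)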
We conclude therefore that {F_L, G_L, H_L, J_L} of (9.3.5) defines a lossless impedance matrix for the coupling network N_L. A synthesis of N_L, and hence the network N synthesizing Z(s), may be obtained using available classical methods [6], the technique of Chapter 8, or the state-space approach as discussed in the previous section. The latter method is preferable in this case, since one avoids computation of the lossless impedance matrix Z_L(s) = J_L + H_L'(sI − F_L)^{-1}G_L, and, more significantly, the matrices F_L, G_L, H_L, and J_L are such that a synthesis of N_L is immediately derivable (without the need to change the state-space coordinate basis). To see this, observe that the real constant matrix M defined by

M = [ ½(J − J')     (1/√2)W_0'    −½(G + H)'  ]
    [ −(1/√2)W_0    0             (1/√2)L'    ]        (9.3.8)
    [ ½(G + H)      −(1/√2)L      −½(F − F')  ]

is skew, and therefore lossless positive real. In other words, M is the impedance of an easily synthesized (m + ρ + n)-port lossless nondynamic network of transformer-coupled gyrators, and termination of this network at its last n ports in 1-H inductances yields a synthesis for N_L. Resistive termination of all but the first m ports of N_L finally gives the desired synthesis for
Z(s) in which a minimal number of resistances as well as reactive elements is
used.
A study of the preceding synthesis procedure will quickly convince the reader that the minimality of the number of rows of the matrices L' and W_0 played no role in the manipulations, save, of course, that of guaranteeing a minimal number of resistors in the synthesis. Indeed, it is easy to see that any minimal realization {F, G, H, J} and real matrices L' and W_0, not necessarily minimal in their numbers of rows, such that (9.3.4) is satisfied, will yield a synthesis of Z(s). The number of resistances required is the dimension of the vectors u_1 and y_1, which, in turn, is precisely the number of rows in L' and W_0.
Summary of Synthesis

A summary of the synthesis procedure outlined in this section is as follows:
1. Ensure that no element of Z(s) has a pole at infinity by carrying out, if necessary, a series extraction of transformer-coupled inductors.
2. Compute a minimal realization {F, G, H, J} of Z(s) such that (9.3.4) holds for some real constant matrices L and W_0.
3. Form the skew constant impedance matrix M of (9.3.8) and give a synthesis of M as a transformer-coupled gyrator network by using a factorization M = T_1'(E ⊕ E ⊕ ⋯ ⊕ E)T_1, with E the impedance matrix of a unit gyrator.
4. Terminate the last n (= δ[Z] = dimension of the matrix F) ports of the above network in 1-H inductances, and then all but the first m ports of the resulting network in 1-Ω resistances to give a synthesis of Z(s). The number of inductances used, being n = δ[Z], is minimal, while the number of resistances used is equal to the number of rows in L' and W_0, and may be minimal if desired.
As an alternative to steps 3 and 4, the following may be used.
3'. Compute matrices F_L, G_L, H_L, and J_L from (9.3.5) and form the lossless (m + r) × (m + r) impedance matrix J_L + H_L'(sI − F_L)^{-1}G_L (here, r is the number of rows in L' and W_0).
4'. Synthesize the lossless impedance matrix J_L + H_L'(sI − F_L)^{-1}G_L by any procedure, classical or state-space, and terminate the last r ports of the network synthesizing this matrix in unit resistors to yield the impedance matrix Z(s) at the first m ports.
Example 9.3.1. Consider the positive real impedance

A minimal realization of Z(s) is

A solution triple of the positive real lemma equations may be found to be

With P factored as T'T, a quadruple {F, G, H, J} and the associated L and W_0 satisfying (9.3.4) are therefore given by

Using (9.3.8) as a guide, we form the skew matrix M.

FIGURE 9.3.1. Synthesis of M in Example 9.3.1.

A synthesis of M is shown in Fig. 9.3.1; a synthesis of Z(s) is obtained by terminating ports 3 and 4 of the network synthesizing M in 1-H inductances and port 2 in a 1-Ω resistance. The desired network is shown in Fig. 9.3.2. The synthesis obviously uses a minimal number of inductances as well as a minimal number of resistances.
FIGURE 9.3.2. A Synthesis of Z(s) of Example 9.3.1.

To close this section, we offer some remarks comparing the reactance-extraction and resistance-extraction methods. These remarks carry over mutatis mutandis to syntheses to be presented later, viz., reciprocal immittance synthesis and nonreciprocal and reciprocal scattering synthesis. First, the only awkward computations are those requiring the construction of a minimal realization {F, G, H, J} of the prescribed Z(s) satisfying (9.3.4). This construction is required in both the reactance- and resistance-extraction approaches, while the remaining calculations in both methods are very straightforward. Second, if one adopts the reactance-extraction approach to synthesizing the lossless impedance derived in the course of the resistance-extraction synthesis, one is led to the skew constant impedance matrix M of (9.3.8). Now, as may be checked easily, if a network with impedance M in (9.3.8) is terminated at ports m + 1 through m + ρ in unit resistors, the resulting network has impedance matrix

[ J   −H' ]
[ G   −F  ]

This is the matrix that appears in the normal reactance extraction, and its occurrence here means that the structures resulting from the two syntheses are the same.
One might well ask why two approaches to the same end result should be discussed. There are several answers.
1. The two approaches give different conceptual insights into the synthesis problem.
2. The resistance-extraction approach, while marginally more complex, offers more variety in synthesis structures, because there is variety in lossless synthesis procedures.
3. While the reactance-extraction approach is a more immediate consequence of state-space ideas than the resistance-extraction approach, the latter is better known from classical synthesis.
Problem 9.3.1. Show that the procedure considered in this section always yields a synthesis that uses nonreciprocal elements, i.e., gyrators, even if Z(s) is symmetric. Can you state what modifications might be made so that a reciprocal synthesis could be obtained?

Problem 9.3.2. Give two syntheses for the positive real impedance

one with a minimal number of resistors (one in this case) and the other with more than one resistor. Compare the two realizations in the number of gyrators used.
9.4 A RECIPROCAL (NONMINIMAL RESISTIVE AND REACTIVE) IMPEDANCE SYNTHESIS

In this section we shall look at a reciprocal synthesis procedure for a prescribed m × m positive real impedance Z(s), which is also symmetric. The method is based on the reactance-extraction concept. It is always successful in eliminating gyrators in a synthesis, given that Z(s) is symmetric, and there is no real increase in the amount of major computation involved over the nonreciprocal synthesis given. The major computational effort lies in the determination of a minimal realization {F, G, H, J} of Z(s), with Z(∞) < ∞, such that

F + F' = −LL'
G = H − LW_0                        (9.4.1)
J + J' = W_0'W_0

for some real matrices L and W_0, which is a calculation that has been required in the various syntheses considered so far. As may be expected, we do not gain something for nothing; in eliminating gyrators while retaining the same level of computation in achieving a synthesis, the cost paid is in terms of extra numbers of reactive elements and resistances appearing in the synthesis. That additional elements are needed should not be surprising at all in the light of Example 9.2.3, in which the only gyrator was eliminated at the cost of introducing an extra resistance.
We shall make use in discussing the method of a rather trivial identity. (Considered by Koga in [7], this identity provided a key to a related synthesis problem.) The identity is

Z(s) = ½[J + H'(sI − F)^{-1}G] + ½[J' + G'(sI − F')^{-1}H]        (9.4.2)

where {F, G, H, J} is any state-space realization of Z(s) with Z(s) = Z'(s). Equation (9.4.2) may be readily verified on noting that since Z(s) = Z'(s), one has Z(s) = ½[Z(s) + Z'(s)].
A sketch of the development leading to the reciprocal synthesis procedure is as follows. We shall first set up a constant square matrix, in turn made up from matrices that form a special realization (nonminimal in fact) of Z(s). This constant matrix will be such that its positive real character can be easily recognized. We shall then define from this matrix another matrix via a simple orthogonal transformation, so that while the positive real character is retained, other additional properties arise. These properties then make it apparent that if the constant matrix is regarded as a hybrid matrix, it has a reciprocal passive synthesis; moreover, terminating certain ports of a network synthesizing the matrix appropriately in inductances and capacitances then yields a synthesis of Z(s).
To begin, it is easy to see from (9.4.2) that matrices F_0, G_0, H_0, and J_0 defined by

F_0 = [ F   0  ]    G_0 = (1/√2) [ G ]    H_0 = (1/√2) [ H ]    J_0 = J        (9.4.3)
      [ 0   F' ]                 [ H ]                 [ G ]

[with F, G, H, and J a minimal realization of Z(s) satisfying (9.4.1)] constitute a state-space realization (certainly nonminimal) of Z(s). In other words, as may be checked readily,

Z(s) = J + H_0'(sI − F_0)^{-1}G_0        (9.4.4)

The dimension of F_0, or the size of the associated state vector, is obviously 2n, i.e., twice the degree of Z(s), which indicates that if we are to synthesize Z(s) via, say, reactance extraction through the realization {F_0, G_0, H_0, J} of Z(s), then 2n reactive elements will presumably be needed. In fact, we shall show that a reciprocal synthesis for Z(s) is readily obtained from {F_0, G_0, H_0, J} through a simple orthogonal matrix basis change; exactly n inductances and n capacitances, a total of 2n reactive elements, are always used in the synthesis.
Let us consider a constant matrix M defined by

M = [ J     −H_0' ]                 (9.4.5)
    [ G_0   −F_0  ]

Note that we have not so far given physical significance to M by, for example, demanding that it be the impedance matrix of a nondynamic network. But we can conclude that M is positive real from the following sequence of simply derived equalities:

M + M' = [ J + J'        (G_0 − H_0)' ]
         [ G_0 − H_0     −F_0 − F_0'  ]

       = [ W_0'W_0         −(1/√2)(LW_0)'    (1/√2)(LW_0)' ]
         [ −(1/√2)LW_0     LL'               0             ]        (9.4.6)
         [ (1/√2)LW_0      0                 LL'           ]

       = [ (1/√2)W_0'    (1/√2)W_0' ] [ (1/√2)W_0    −L'    0  ]
         [ −L            0          ] [ (1/√2)W_0    0      L' ]
         [ 0             L          ]

The first equality follows immediately from (9.4.5), the second on using (9.4.1), and the third from an obvious factorization.
We observe also another important fact from this sequence of equalities, which is that

rank [M + M'] = 2r                  (9.4.7)

where r is the number of rows of L' and W_0.
We shall now introduce yet another constant matrix M_1, like M in that it is positive real and consists of matrices of a realization of Z(s), but possessing one additional property to be outlined. Consider the following matrix:

T = (1/√2) [ I_n   −I_n ]           (9.4.8)
           [ I_n    I_n ]

with I_n an n × n identity matrix and n = δ[Z]. It is easy to verify that T is an orthogonal matrix. Now define another realization {F_1, G_1, H_1, J} of Z(s), using the matrix T of (9.4.8), via

F_1 = TF_0T'    G_1 = TG_0    H_1 = TH_0        (9.4.9)

The realization {F_1, G_1, H_1, J} defines a constant matrix M_1 by

M_1 = [ J     −H_1' ]               (9.4.10)
      [ G_1   −F_1  ]

such that
1. M_1 + M_1' is nonnegative definite, or M_1 is positive real, and M_1 + M_1' has rank 2r, where r is as defined previously following (9.4.7).
2. [I_m ⊕ I_n ⊕ (−I_n)]M_1 is symmetric.
+
To prove claim 1, we note that MI of (9.4.10) is
T)M(I.
= (1,
+ T')
so that
Since M f M' is nonnegative definite, as noted in (9.4.6), it follows immediately that MI + M ; is also. Further, the above equality implies from the
nonsingularity of I,, T and (9.4.7) that
+
rank ( M ,
+ Mi)
= rank
( M + M')
= 2r
(9.4.1 1)
We now prove claim 2. By direct calculation, using (9.4.3), (9.4.8), and (9.4.9), we obtain

F_1 = [ ½(F + F')   ½(F − F') ]    G_1 = [ ½(G − H) ]    H_1 = [ ½(H − G) ]        (9.4.12)
      [ ½(F − F')   ½(F + F') ]          [ ½(G + H) ]          [ ½(H + G) ]

and therefore

M_1 = [ J           ½(G − H)'     −½(G + H)' ]
      [ ½(G − H)    −½(F + F')    −½(F − F') ]        (9.4.13)
      [ ½(G + H)    −½(F − F')    −½(F + F') ]

That [I_m ⊕ I_n ⊕ (−I_n)]M_1 is symmetric is evident by inspection.
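The two claims can be verified mechanically; the sketch below (names illustrative) assembles M_1 of (9.4.13) as reconstructed above and tests both the reciprocity property ΣM_1 = (ΣM_1)' and passivity:

    import numpy as np

    def reciprocal_coupling(F, G, H, J):
        # (9.4.13), with Fs, Fk the symmetric and skew parts of F.
        n, m = F.shape[0], J.shape[0]
        Fs, Fk = 0.5 * (F + F.T), 0.5 * (F - F.T)
        M1 = np.block([[J,              0.5 * (G - H).T, -0.5 * (G + H).T],
                       [0.5 * (G - H), -Fs,              -Fk],
                       [0.5 * (G + H), -Fk,              -Fs]])
        Sig = np.diag(np.r_[np.ones(m + n), -np.ones(n)])
        assert np.allclose(Sig @ M1, (Sig @ M1).T)            # claim 2
        assert np.linalg.eigvalsh(M1 + M1.T).min() > -1e-9    # claim 1
        return M1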
Let us summarize the major facts derived above:
1. {F_1, G_1, H_1, J} as given in (9.4.12) is a realization of Z(s), with F_1 being 2n × 2n.
2. The constant matrix

M_1 = [ J     −H_1' ]
      [ G_1   −F_1  ]

is positive real, and ΣM_1 is symmetric, where Σ = I_m ⊕ I_n ⊕ (−I_n).

Point 2 implies that, if we consider the matrix M_1 as a hybrid matrix of an (m + 2n)-port network, where the first m + n ports are current-excited ports corresponding to the first m + n rows of Σ that have a +1 on the diagonal, and the remaining n ports are voltage-excited ports corresponding to the rows of Σ that have a −1 on the diagonal, then a network realizing M_1 can be found that uses only passive reciprocal elements.
Further, from our discussion of the reactance-extraction procedure we can see that 1 and 2 together imply that termination of the last 2n ports of a synthesis of M_1 in n unit inductances for the current-excited ports, and n unit capacitances for the voltage-excited ports, yields a network synthesizing Z(s). Since a synthesis of M_1 that is passive and reciprocal is guaranteed by point 2 to exist, and since we know how to derive a passive reciprocal synthesis of Z(s) from a passive reciprocal synthesis of M_1, it follows that a reciprocal passive synthesis of Z(s) can be achieved.
So far, we have not yet shown how to go about synthesizing M_1. This we shall now consider, thereby completing the task of synthesizing Z(s).
Synthesis of a Resistive Network Described by a Constant Hybrid Matrix M

Consider the following equation describing a reciprocal (p + q)-port N_r having a constant hybrid matrix M:

[ v_1 ]   [ M_11   M_12 ] [ i_1 ]
[ i_2 ] = [ M_21   M_22 ] [ v_2 ]        (9.4.14)

with the (p + q) × (p + q) constant matrix M positive real, or M + M' ≥ 0. The matrix M_11 is p × p and symmetric, and M_22 is q × q and symmetric. The first p ports of N_r are the current-excited or open-circuit ports, while the remaining q ports are the voltage-excited or short-circuit ports. It is easy to see that the positive real character of M, or the nonnegative definiteness of M + M' together with the form of M, implies that the symmetric principal submatrices M_11 and M_22 are individually nonnegative definite, or positive real.
The positive real p × p symmetric matrix M_11, corresponding to the current-excited ports of M, is therefore an impedance matrix and is obviously realizable as a p-port transformer-coupled resistance network. The positive real q × q symmetric matrix M_22, corresponding to the voltage-excited ports of M, is an admittance matrix and is realizable as a q-port transformer-coupled conductance network. The remaining portion of M, that is,

[ 0       −M_21' ]
[ M_21    0      ]

represents, as we may recall from an earlier section, the hybrid-matrix description of a multiport transformer with a turns-ratio matrix −M_21. We may therefore rightly expect that the q × p matrix M_21 represents some sort of transformer coupling between the two subnetworks described by the impedance M_11 and the admittance M_22. A complete synthesis of M is therefore achieved by an appropriate series-parallel connection. In fact, it can be readily verified by straightforward analysis that the network N_r in Fig. 9.4.1 synthesizes M; i.e., the hybrid matrix of N_r is M. (Problem 9.4.1 explores the hybrid matrix synthesis problem further.)
Synthesis of Z(s)

A synthesis of the hybrid matrix M_1 of (9.4.13) is evidently provided by the connection of Fig. 9.4.1 with

FIGURE 9.4.1. Synthesis of the Hybrid Matrix M of Equation (9.4.14).
M_11 = [ J           ½(G − H)'  ]
       [ ½(G − H)    −½(F + F') ]

M_21 = [ ½(G + H)   −½(F − F') ]        (9.4.15)

M_22 = ½[−F − F']
A synthesis of Z(s) then follows from that for M_1 by terminating all current-excited ports other than the first m ports of the network N_r synthesizing M_1 in 1-H inductances, and all the voltage-excited ports in 1-F capacitances.
Observe that exactly n inductors and n capacitors are always used, so a total of 2n = 2δ[Z] reactive elements appear. Also, the total number of resistances used is the sum of rank M_11 and rank M_22, which is the same as

rank [M_11 + M_11'] + rank [M_22 + M_22'] = rank [M_1 + M_1'] = 2r

Therefore, the synthesis uses 2r resistances, where r is the number of rows of the matrices L' and W_0 appearing in (9.4.1).
Arising from these observations, the following points should be noted. First, before carrying out a synthesis for a positive real Z(s) with Z(∞) < ∞, it is worthwhile to apply those parts of the preliminary simplification procedure described earlier that permit extraction of reciprocal lossless sections. The motivation in doing so should be clear, for the positive real impedance that remains to be synthesized after such extractions will have a degree, n' say, that is less than the original degree δ[Z] by the number of reactive elements extracted, d say. Thus the number of reactive elements used in the overall synthesis becomes d + 2n'; in contrast, if no initial lossless extractions were made, this number would be 2δ[Z] or 2(n' + d), and hence a saving of d reactive elements results. Moreover, in computing a solution triple P, L, and W_0 satisfying the positive real lemma equations [which is necessary in deriving (9.4.1)], some of the many available methods described earlier in any case require the simplification process to be carried out (so that J + J' = 2J is nonsingular).
Second, the number of rows r of the matrices L' and W_0 associated with (9.4.1) has a minimum value of ρ = normal rank [Z(s) + Z'(−s)]. Thus the minimum number of resistors that it is possible to achieve using the synthesis procedure just presented is 2ρ.
The synthesis procedure will now be summarized, and then a simple synthesis example will follow.
Summary of Synthesis
The synthesis procedure falls into the following steps:
1. By lossless extractions, reduce the problem of synthesizing a prescribed
symmetric positive real impedance matrix to one of synthesizing a
symmetric positive real m x m Z(s) with Z(m) < m.
400
IMPEDANCE SYNTHESIS
CHAP. 9
2. Find a minimal realization for Z(s) and solve the positive real lemma
equations.
3. Obtain a minimal realization IF,G, H, J) of Z(s) and'associated real
matrices L' and W, possessing the minimum possible number of rows,
such that (9.4.1) holds.. .
4. Form the symmetric positive real impedance M , , , admittance MI,,
a n d transformer turns-ratio matrix -M,, using (9.4.15).,
5. Synthesize M,,, M,,,and -M,, and connect these together a s shown
in Fig. 9.4.1. Terminate all of the voltage-excited ports in I-F capacitances, and all but the first m current-excited ports in 1-H inductances
to yield a synthesis of the m x m Z(s):
Example Consider the positive real impedance of Example 9.3.1:
9.4.1
A minimal realization [F,G, H,J ) satisfying (9.4.1) is
.
.
and the associated matrices L' and W,, with L' and W, possessing the
minimal number of rows, are
First, weform the impedance MI,, the admittance M,,,.and the turnsratio matrix M2, as follows:
A synthesis of M,,is shown in Fig. 9.4.2a, and Ma, in Fig. 9.4.2b.
Note that M,, is nothing more than an open circuit for port 1 and a
1-C? resistor for port 2. A final synthesis of Z(s)is obtained by using the
connection of Fig. 9.4.1 and terminating this network appropriately
i n 1-F capacitances and 1-H induc'tances, as shown in Fig. 9.42.
By inspecting Fig. 9.4.2, we observe that, as expected, two resistances,
two inductances, and two capacitances are used in the synthesis.But if we
examine them more closely, we can observe an interesting fact: the circuit
A RECIPROCAL IMPEDANCE SYNTHESIS
401
/, i ilI(m23111
1-
I
~~
2'
I'
2'
((1)
1iI
3'
Synthesisof M,,
( b ) Synthesis of M22
( c l Final Synthesis of Z(s1 of Example 9.4.1
FIGURE 9.4.2 Synthesis of Example9.4.1.
of Fig. 9.4.2 contains unnecessary elements that may be eliminated without affecting the terminal behavior, and it actually reduces to the simplified circuit arrangement shown in Fig. 9.4.3. Problem 9.412 asks the
402
CHAP. 9
IMPEDANCE SYNTHESIS
reader to verify this. Observe now that the synthesis in Fig. 9.4.3 uses
one resistance, inductance, and capacitance, and no gyrators. Thus,
1H
FIGURE 9.4.3. A
Simplified Circuit Arrangement for Fig.
9.4.2.
in this rather special case, we have been able to obtain a reciprocal
synthesis that uses simultaneously the minimal number of resistances and
the minimal number of reactances! One should not be tempted to think
that this might also hold true in general; in fact (see [6]),it is not possible
in general to achieve a reciprocal synthesis with simultanwusly a
minimum number of resistances as well as reactive elements, although
it is true that a reciprocal synthesis can always be obtained that uses a
minimal number of resistances and a nonminimal number of reactive
elements, or vice versa. This will be shown in Chapter 10.
The example above also illustrates the following point: although the
reciprocal synthesis method in this section theoretically requires twice the
minimal number of resistances (2p)and twice the minimal number of reactiveelements (26[2]) for a reciprocal synthesis of a positive real symmetric Z(s),
[at least, when Z(s) is the residual impedance obtained after preliminary
synthesis simplifications have been applied to the original impedance matrix],
cases may arise such that these numbers may be reduced.
As already noted, the method in this section yields a reciprocal synthesis
in a fashion as simple as the nonreciprocal synthesis obtained from the
methods of the earlier sections of this chapter. In fact, the major computation
task is the same, requiring the derivation of (9.4.1). However, the cost paid
for the elimination of gyrators is that a nonminimal number of resistances
A RECIPROCAL IMPEDANCE SYNTHESIS
SEC. 9.4
403
as well as a nonminimal number of reactive elements are needed. In Chapter
10, we shall consider synthesis procedures that are able to achieve reciprocal
syntheses with a minimum number of resistances or a minimal number of
reactive elements. The computation necessary for these will be more complex
than that required in this chapter.
One feature of the synthesis of this section, as well as the syntheses of
Chapter 10, is that transformers are in general required. State-space techniques have so far made no significant impact on transformerless synthesis
problems; in fact, there is as yet not even a state-space parallel of the classical
transformerless synthesis of a positive real impedance function. This synthesis, due to R. Bott and R.J.Duffin, is described in, e.g., [8]; it uses a number
of resistive and reactive elements which increases exponentially wlth the
degree of the impedance function being synthesized, and is certainly almost
never minimal in the number of either reactive or resistive elements. It is
possible, however, to make a very small step in the direction of state-space
transformerless synthesis. The step amounts to converting the probfem of
synthesizing (without transformers) an arbitrary positive real Z(s) to the
problem of synthesizing a constant positive real Z,. If the reciprocal synthesis
approach of this section is used, one can argue that a transformerless synthesis will result if the constant matrix MI of (9.4.13), which is a hybrid
matrix, is synthesizable with no transformers. Therefore, one simply demands
that not only should F, G,H, and Jsatisfy (9.4.1), but also that they be such
that the nondynamic hybrid matrix of (9.4.13) possess a transformerless
synthesis. Similar problem statements can be made in connection with the
reciprocal synthesis procedures of the next section. (Note that if gyrators are
allowed, transformerless synthesis is trivial. So it only makes sense to consider
reciprocal transfoxmerless synthesis.)
Finally we comment that there exists a resistance extraction approach,
paralleling closely the reactance extraction approach of this section, which
will synthesize a symmetric positive real Z(s) with twice the minimum
numbers of reactive and resistive elements [4, 51.
Problem
9.4.1
Show that the hybrid matrix Mof the circuit arrangement in Pig. 9.4.1
is
--
-"~IP
M z 2 I9
P
4
Consider specifically a hybrid matrlx M, of the form of (9.4.13) with
the first m n porn currentexcited and the remaining n ports voltage
excited ports. Now it is known that the last n of the m n currentexcited ports are to be terminated in inductances, while all the n
voltageexcited ports are to be terminated in capacitances. Can you find
an alternative circuit arrangement to Fig. 9.4.1 for M, that exhibitsclearly
+
+
404
IMPEDANCE SYNTHESIS
CHAP. 9
the last n current-excited ports separated from the first m currentexcited
ports (Fig. 9.4.1 shows these combined together) in terms of the submatrices appearing in (9.4.13)?
Problem Show that the circuit arrangement of Fig. 9.4.3 is equivalent to that
9.4.2
of Fig. 9.4.2.
[Hint: Show that the impedance Z(s) is (sZ s -1- 4)j(sz s -I- I).]
Problem Synthesize the symmetric positive real impedance
+
9.4.3
Z(s)
=
+
,.
s 2 +2 r + 4
+ s 1
using the method described in this section. Then note that two resistances,
two capacitances, and two inductances are required.
Problem Combine the ideas of this section with those of Section 9.3 to derive a
9.4.4
resistance-extraction reciprocal synthesis of a symmetric positive real
Z(s). The synthesis will use twice the minimum numbers of reactive and
resistive elements [4,5l.
REFERENCES
[ 11 B. D. 0. A ~ o ~ n s oand
N R. W. NEWCOMB,
"Impedance Synthesis via State-
Space Techniques," Proceedings of the IEE, Vol. 115, No. 7, 1968, pp. 928-936.
[2] B. D. 0. ANDERSON
and R. W. NEWCOMB,"Lossless N-Port Synthesis via
State-Space Techniques," Rept. No. SEL-66-04>. TR No. 6558-8, Stanford Electronics Laboratories, Stanford, Calif., April 1967.
[ 3 ] B. D. 0.ANDERSON
and R. W. B n o c ~ m"A
, Multiport State-Space Darlington
Synthesis," IEEE Transactionson Circuit 7Xeory. Vol. CT-14, No. 3, Sept. 1967,
pp. 336-337.
[ 41 B. D. 0. ANOERSON
and S. VONGPANITLERI),
"Reciprocal Passive Impedance
Synthesis via State-Space Techniques," Proceedings of the 1969 Hawaii Internarional Conference on System Science, pp. 171-1 74.
[ 5 ] S. VONGPANITLERD,
"Passive Reciprocal Network Synthesis-The State-Space
Approach," Ph.D. Dissertation. University of Newcastle, Australia, May 1910.
[ 6 ] R. W. NEWCOMB,
Linear Multiport Synthesis, McGraw-Hill, New York, 1966.
[7] T. K o o k "Synthesis of Finite Passive N-Ports with Prescribed Real Matrices
of Several Variables," IEEE Transactions on Circuit Theory, Vol. CT-15, No. 1,
March 1968, pp. 2-23.
[ a ] E. A. GUILLEMIN,
Synthesis of Passive Networks, Wiley, New York, 1937.
[ 9 ] V. BELEVITCH,
"Minimum-Gyrator Cascade Synthesis of Lossless n-Ports,"
Philips Research Reports, Vol. 25, June 1970, pp. 189-197.
[lo] Y. OONO,'LMinim~m-Gyrat~r
Synthesis of n-Ports," Memoirs of the Faculty
of Engineering, Kyushu U~~iversity,
Vol. 31, No. I, Aug. 1971, pp. 1-4.
11I] B. D. 0. ANDERSON,
"Minimal Gyrator Lossless Synthesis," 1EEE Transactions
on Circuit Theory, Vol. CT-20, No. I, January 1973.
Reciprocal Impedance synthesiss
10.1
INTRODUCTION
In this chapter we shall continue our discussion of impedance
synthesis techniques. More specifically, we shall be concentrating entirely on
the problem of reciprocal or gyratorless passive synthesis.
In Section 9.4 a method of deriving a reciprocal synthesis was given. This
method, however, requires excess numbers of both resistances and energy
storage elements. In the sequel we shall be studying reciprocal synthesis
methods that require only the minimum possible number of energy storage
elements, and also a reciprocal synthesis method that requires only the minimnm possible number of resistances.
A brief summary of the material in this chapter is as follows. In Section
10.2 we consider a simple technique for lossless reciprocal synthesis based
on [I]. In Section 10.3 general reciprocal synthesis via reactance extraction is
considered, while a synthesis via the resistance-extraction technique is given
in Section 10.4. All methods use the minimum possible number of energy
storage elements. In Section 10.5 we examine from the state-spaceviewpoint
the classical Bayard synthesis. The synthesis procedure uses the minimum
possible number of resistances and is based on the material in [2].The results
of Section 10.3 may be found in [3], with extensions in [4].
*This chapter may be omitted at z fiatreading.
406
RECIPROCAL IMPEDANCE SYNTHESIS
CHAP.
10
We shall consider here the problem of generating a state-space
synthesis of a prescribed m x m symmetric lossless positive real impedance
Z(s). Of course, Z(s) must satisfy all the positive real constraints and also
the constraint that Z(s) = -Z1(-s).
We do not assume that any preliminary lossless section extractions, a s
considered in an earlier chapter, are carried out-for this would completely
solve the problem! We shall however assume (with no loss of generality)
that Z(m) < co. In fact, Z(m) must then bezero. The reasoning isasfollows:
since Z(s) is lossless,
But the left-hand side is simply Z(m)
of Z(s). Hence
+ Zr(m) = 2Z(w), by the symmetry
We recall that the lossless positive real properties of Z(s) guarantee the
existence of a positive definite symmetric matrix P such that
where (F, G, H ) is any minimal realization of Z(s). With any T such that
T'T = P,another minimal realization (F,, G,, H,]ofZ(s)defined by {TIT-',
TG, (T-,)'HI satisfies the equations
Now the symmetry property of Z(s) implies (see Chapter 7) the existence of
a symmetric nonsingular matrix A uniquely defined by the minimal realizaof Z(s), such that
tion (F,, G,, H,]
AF, = F', A
AG, = -HI
We shall show that A possesses a decomposition
RECIPROCAL LOSSLESS SYNTHESIS
SEC. 10.2
407
where U is an orthogonal matrix and
+
with, of course, nl n, =. n = 6[Z(s)].To see this, observe that (10.2.2)
and (10.2.3) yield AF\ = F I A and AH, = -G,, or
F,A-1 = A-IF,
A-'G, = -H,
From (10.2.3) and (10.2.6) we see that both A and A-1 satisfy the equations
XF, = P, Xand XG, = - H I . Since the solution of these equations is unique,
it follows that A = A-I. This implies that A2 = I, and consequently all
eigenvalues of A must be +1 or -1. As A is symmetric also, there exists an
orthogonal matrix U such that A = U'ZU,where, by absorbing a permutation matrix in U,the diagonal matrix Z with 4 1 and -I entries only may
be asswnedto be X = [I", -f- (-1)1,].
With the orthogonal matrix U defined above, we obtain another minimal
realition of Z(s) givenby
I t now follows from (10.2.2) and (10.2.3) that
From the minimal realization {F,,G,, Hz], a reciprocal synthesis for Z(s)
follows almost immediately; by virtue of (10.2.8), the constant matrix
satisfies
M f M'
=0
+
(Im X)M = Mf(Im4 Z)
(10.2.10)
The matrix M is therefore synthesizable as the hybrid matrix of a reciprocal
lossless network, with the current-excited ports corresponding to the rows of
(I, 4- X) that have a 4 1 on the diagonal, and the voltage-excited ports
corresponding to those rows that have a -1 on the diagonal. Now because
M satisfies (10.2.10), it must have the form
RECIPROCAL IMPEDANCE
SYNTHESIS
CHAP. 10
We conclude that the hybrid matrix M is realizable as a mulliport transformer.
A synthesis of Z(s) of course follows from that for M by terminating the
primary ports (other than the first m ports) of M in unit inductances, and all
the secondary ports in unit capacitors.
Those familiar with classical synthesis procedures will recall that there are
two standard lossless syntheses for lossless impedance functions-thc Foster
and Cauer syntheses [a], with multipart generalizations in [b]. The Foster
synthasis can be obtained by taking in (10.2.8)
(It is straightforward to show the existence of a coordinate-basis change
producing this structure of F,, G,,and Hz.) The Cauer synthesis can be
obtained by using the preliminary extraction'p~oceduresof Chapter 8 to
carry out the entire synthesis.
Problem
Synthesize the impedance matrix
10.3
RECIPROCAL SYNTHESIS VIA
REACTANCE EXTRACTION
The problem of synthesizing a prescribed m x m positive real
symmetric impedance Z(s) without gyrators is now considered. We shall
assume, with no loss of generality, that Z(w) < m. The synthesis has the
feature of always using the minimum possible number of reactive elements
(inductors and capacitors); this number is precisely n = 6[Z(s)]. Further,
the synthesis is based on the technique of reactance extraction. As should
be expected, the number of resistors used will generally exceed rank [Z(s)
z'(-s)].
We recall that the underlying problem is to obtain a state-space realization {F, G, H, J ) of Z(s) such that the constant matrix
+
SEC. 10.3
REACTANCE EXTRACTION
409
has the properties
M+M'>O
[ I , -4- ZlM =. M'[Im$
a
(10.3.2)
(10.3.3)
where 2:is a diagonal matrix with diagonal entries 4-1 or -1. Furthermore,
if (10.3.1) through (10.3.3) hold with Fpossessing the minimum possible size
n x n, then a synthesis of Z(s) is readily obtained that uses the minimum
number n of inductors and capacitors.
The constant matrix M in (10.3.1) that satisfies (10.3.2) and (10.3.3) is the
hybrid matrix of a nondynamic network; the current-excited ports are the
first m ports together with those ports corresponding to diagonal entries
of 2 with a value of $1, while the remaining ports are of course voltageexcited ports. On terminating the nondynamic network (which is always
synthesizable) at the voltage-excited ports ~n l-F capacitances and at all but
the first rn current-excited ports in 1-H inductances, the prescribed impedance
Z(s) is observed.
More precisely, the first condition, (10.3.2), says that the hybrid matrix
M possesses a passive synthesis, while the second condition, (10.3.3), says
that it possesses a reciprocalsynthesis. Obviously, any matrix M that satisfies
both conditions simultaneously possesses a synthesis that is passive and
reciprocal. In Chapter 9 we learned that any minimal realization {F,G, H, J ]
of Z(s) that satisfies the (special) positive real lemma equations
F+F'=-LL'
G=H-LW,
J + J' = WLW,
(10.3.4)
results in M of (10.3.1) fulfilling (10.3.2). [In fact if M satisfies (10.3.2) for
any (F, G, H, J), then there exist L and W , satisfying (10.3.4).] The procedure
required to derive {F, G, H, J ] satisfying (10.3.4) was also covered. We shall
now derive, from a minimal realization of Z(s) satisfying (10.3.4), another
minimal realization IF,, G,, H,, J ) for which both conditions (10.3.2) and
(10.3.3) hold simultaneously. In essence, the reciprocal synthesis problem is
then solved.
Suppose that (F, G,H, J ) is a state-space minimal realization of Z(s) such
that (10.3.4) holds. The symmetry (or reciprocity) property of Z(s) guarantees
the existence of a unique symmetric nonsingular matrix P with
As explained in Chapter 7 (see Theorem 7.4.3),any matrix T appearing in a
410
RECIPROCAL IMPEDANCE SYNTHESIS
CHAP. 10
decomposition of P of the form
P = T'XT
Z = [In,-I-(-1)IJ
(10.3.6)
generates another minimal realization {TFT-', TG, (T-')'H, J } of Z(S)that
satisfies the reciprocity condition (10.3.3). However, there is no guaranteethat
the passivity condition is still satisfied. We shall now exhibit a specific statespace basis change matrix T from among all possible matrices in the above
decompositon of P such that in the new state-space basis, passivity is presewed.
The matrix T is described in Theorem 10.3.1; first, we require two preliminary lemmas.
Lemma 10.3.1. Let R be areal matrix similar to areal symmetric
matrix S. Then S is positive definite (nonnegative definite) if
R R' is positive definite (nonnegative definite), but not necessarily conversely.
+
The proof of the above lemma is'straightforward, depending on the fact
that matrices that are similar to one another possess the same eigenvalues,
and the fact that a real symmetric matrix has real eigenvalues only. A formal
proof to the lemma is requested in a problem.
Second, we have the following result:
Lemma 10.3.2. Let P be the symmetric nonsingular matrix satisfying (10.3.5). Then P can be represented as
P = BVXV' = VXV'B
(10.3.7)
where B is symmetric positive definite, V is orthogonal, and
X = [I,, -!-(-1)1,;1. Further, VZV' commutes with B"L, the
unique positive definite square root of B; i.e.,
Proof. It is a standard result (see, e.g., [5D, that for anonsingular
matrix P, there exist unique matrices B and U,such that B is
symmetric' positive definite, U is orthogonal, and P = BU.
From the symmetry of P,
BU = U'B = (U'BU)U'
This shows that B and (U'BU) are both symmetric positivedefinite square roots of the symmetric positive-definite matrix
K = PP' = PZ.It follows from the uniqueness ofpositive-definite
SEC. 10.3
41 1
REACTANCE EXTRACTION
square roots that
B = U'BU
Hence, multiplying on the left by U,
Next, we shall show that the orthogonal matrix U has a decomposition VZV'. To do this, we need to note that Uis symmetric.
Because P is symmetric, P = BU = U'B. Also, P = UB. The
nonsingularity of B therefore implies that U = U'. Because U
is symmetric, it has real eigenvalues, and because it is orthogonal,
it has eigenvalues with unity modulus. Hence the eigenvalues of
U must be +1 or -1. Therefore, U may be written as
U = VZV'
+
where V is orthogonal and Z = [I,, (-1)1,,1. Now to prove
(10.3.8), we proceed as follows. By premultiplying (10.3.7) by
XV' and postmultiplying by V Z , we have
XV'BV = V'BVI:
on using the fact that V is orthogonal and Z2 =I.The above
relation shows that V'BV commutes with Z. Because V'BV is
symmetric, this means that Y'BV has the form
where B, and B, are symmetric positive-definite matrices of
dimensions n, x n, and n, x iz,, respectively. Now (V'B"aV)2
= V'BV and [B:" 4 B:11]2= [B, Bt]. By the uniqueness of
positive-definite square roots, we have V'B'lZV = [B:I2 B:'a].
Therefore
+
from which (10.3.8) is immediate.
V VV
With the above lemmas in hand and with all relevant quantities defined
therein, we can now state the following theorem.
Theorem 10.3.1. Let (F, G, H, J ] be a minimal realization of a
positive real symmetric Z(s) satisfying the special positive real
412
RECIPROCAL IMPEDANCE SYNTHESIS
CHAP. 10
lemma equations (10.3.4). Let V, B, and ): be defined as in
Lemma 10.3.2, and define the nonsingular matrix T by
T= V'BXI~
(10.3.9)
Then the minimal realization {F,,G,, HI,J ) = {TFT-', TG,
(T-l)'H, J ] of Z(s) is such that the hybrid matrix
satisfies the following properties:
Proof.From the theorem statement, we have
The second and third equalities follow from Lemma 10.3.2.
It follows from the above equation by a simple direct calculation or by Theorem 7.4.3 that T defines a new minimal realization
{F,,GI, If,,J ) of Z(s) such that
holds. It remains to check that (10.3.10) is also satisfied.
RecaUing the definition of M as
[I. :TI,
ic is clear that
or, on rewn'ting using (10.3.9),
where M,denotesp 4- V ' ] M [ I + V ] .From the fact that M + M'
2 0, it follows readily that
SEC. 10.3
REACTANCE EXTRACTION
41 3
Now in the proof of Lemma 10.3.2 it was established that
Y'BIIZVhas the special form [B;" 4-B:la], where B:I2 and B:12
are symmetric positivedefinite matrices of dimensions n, x n,
and n, x n,, respectively. For simplicity, let us define symmetric
positive-deiinite matrices Q , and Q , of dimensions, respectively,
( m n,) x (m n , ) and n, x n,, by
+
+
Q , = 11, t B:i21
Q, = By2
and let M, be partitioned conformably as
+
+
i.e., M l 1 is (m n,) x (m n,) and M,, is n, x n,. Then we
can iewrite (10.3.12), after simple calculations, as
The proven symmetry property of [I,& Z]Ml implies that the
submatrices Q i l M , ,Q l and Q i l M , , Q , are symmetric, while
Q;lM,,Q, = -(Qi1M1,Q,)'. Consequently
Now M , , and M,, are principal submatrices of M,, and so
M I , M i , and M,, $ M i , are principal submatrices of M,
M i . By (10.3.13), M , Mb is nonnegative definite, and so
M l l M',, and M,,
M i , are also nonnegative definite. On
identifying the symmetric matrix Q ; ' M l l Q , with S and the
matrix M , , with R in Lemma 10.3.1, it follows by Lemma 10.3.1
that Q i l M , , Q , is nonnegative definite. The same is true for
Q;1M,2Q,. Hence
+
+
+
+
+
and the proof is now complete.
VVO
With Theorem 10.3.1 on hand, a passive reciprocal synthesis is readily
414
RECIPROCAL IMPEDANCE SYNTHESIS
CHAP. 10
obtained. The constant hybrid matrix M, is synthesizable with a network
of multiport ideal transformers and positive resistors only, as explained in
Chapter 9. A synthesis of the prescribed Z(s) then follows by appropriately
terminating this network in inductors and capacitors.
A brief summary of the synthesis procedure follows:
1. Find a minimal realization {F,,, Go,Ha,J) for theprescribedZ(s), which
is assumed to be positive real, symmetric, and such that Z(oo) < m.
2. Solve the positive real lemma equations and compute another minimal
realization [F, G, H, J ) satisfying (10.3.4).
3. Compute the matrix P for which PF = F'P and PG = -H, and from
P find matrices B, V, and I:with properties as defined in Lemma 10.3.2.
4. With T = V'B"', form another minimal realization ofZ(s) by (TFT',
TG,(T-I)'H, J ) and obtain the hybrid matrix M,, given by
5. Give a reciprocal passive synthesis for M,; terminate appropriate ports
of this network in unit inductors and unit capacitors to yield a synthesis
The calculations involved in obtainingB, V, .and:Z.from:P are straight-.
forward. We note that B is simply the symmetricpo&tive-definitesquare root
of the matrix PP' = P Z ; the matrix B-'P (or PB-I), which is symmetric
and orthogonal, can be easily.decomposed as VXV'[q.
Example
Let the symmetric positive real impedance matrix Z(s) to be synthesized
10.3.1
be
A minimal realization for Z(s) satisfying (10.3.4) is given by
The solution P of the equations PF = F'P and PG
puted to be
for which B, V, and X are
=
-His easily com-
SEC. 10.3
41 5
REACTANCE EXTRACTION
The required state-space basis change T = V'BIIZ is then
G , , HI,J] of Z(s):
yielding another minimal realization {F,,
We then form the hybrid matrix
+
+
It is easy to see that M I M',is nonnegative definite, and (I
E)M,
is symmetric. A synthesis of MI is shown in Fig. 10.3.1; a synthesis of
Z(s) is obtained by terminating ports 2 and 3 of thisnetwork in a unit
inductance and unit capacitance, respectively, as shorn in Fig. 10.3.2.
The reader will have noted that the general theme of the synthesis given
in this section is to start with a realization of Z(s) from which a passive
synthesis results, and construct from it a realization from which a passive
and reciprocal synthesis results. One can also proceed (see [3] and Problem
FIGURE 10.3.1.
A Synthesis of the Hybrid Matrix M,.
416
'RECIPROCAL IMPEDANCE SYNTHESIS
CHAP. 10
FIGURE 10.3.2. A Synthesis of Z(s).
10.3.5) by starting with a realization from which a reciprocal, but not necessarily passive, synthesis results, and constructing from it a realization from
which a reciprocal and passive synthesis results. The result of Problem 10.3.1
actually unifies both procedures, which at first glance look quite distinct;
[4] should be consulted for further details.
Problem Let (F,G, H, J) be an arbitrary minimal realization of symmetri.~
positive real matrix. Let Q be a positive-definite symmetricmatrix satisfying the positive real lemma equations and let P be the unique symmettic
nonsingular matrix satisfying the reciprocity equation, i.e., Eqs. (10.3.5).
Prove that if a nonsingdar matrix Tcan be found such that
10.3.1
YZT = P
with 2 = [I.,f (-1)Z.l
and
T'YT= Q
for some arbitrary symmetric positive definite matrix Y having the form
[Y, YZ1,where Y 1is n, x n, and Y, is n, x n,, then the hybrid
TG, (T-')'H, J ]
matrix Mformed from the minimal realization {TET-1,
satisfies the equations
+
Note that this problem is concerned with proving a theorem offering a
SEC. 10.3
REACTANCE EXTRACTION
417
more general approach to reciprocal synthesis than that given in the
text; an effective computational procedure to accompany the theorem
is suggested in Problem 10.3.8.
Problem Show that Theorem 10.3.1 is a special case of the result given in Problem
10.3.2
10.3.1.
Problem Prove Lemma 10.3.1.
10.3.3
Problem Synthesize the foUowing impedance matrices using the method of this
10.3.4
section:
s'+4s+9
(a) Z(s) = s, + 2s +
Problem Let [F, G, H,J ] be aminimalrealization of an m
10.3.5
real Z(s) such that
x m symmetricpositive
+
(-1)1,,,1. Suppose that 3' is a positive-definite symwith X = [I.,
metric matrix satisfying the positive real lemma equations for the above
minimal realization, and let P be partitioned conformably as
where P , I is nl x nl and P Z 2is n, x n,. Define now an nl x n, matrix
Q as any solution of the quadratic equation
with the extra property that I - Q'Q is nonsingular. (It is possible to
demonstrate the existence of such Q.)
Show that, with
T
-[
-
(InaQQ')-11'2
( I - Q'Q)J2Q'
(I.,- QQ'I-'"Q
(I., - Q'Q)-l''
then the minimal realization { F l , G I ,H I ,J ]
of Z(s) is such that the hybrid matrix
satisfies the following properties:
= [TIT-',
1
TG, (T-')%I,J)
418
RECIPROCAL IMPEDANCE SYNTHESIS
M, +M',>O
[I,+ EIM,
4- XI
= MXI,
(Hint: Use the result of Problem 10.3.1.)
Can you give a computational procedure to solve the quadratic equation for a particular Q with the stated property? (See [3,4] for further
details.)
-1
Problem Suppose that it is known that there exists a synthesis of a prescribed
symmetric positive real Z(s) using resistors and inductors only. Explain
what simplifications result in the synthesis procedure of this section.
10.3.6
Problem Let Z(s) be a symmetric-positive-real matrix with minimal realization
[F,G, H,J1. [Assume that Z(w) < w.] Show that the number of inductors and capacitors in a minimal reactive reciprocal synthesis of Z(s)
can be determined by examining the rank and signature of the symmetric
Hankel matrix
10.3.7
H'G
H'FG
H'FG
H'F2G
H'Fn-1G
H'FxG
-..
...
H'Fn-I G
H'FnG
...
H'Fln-2G
Problem Show that the following procedure will determine arnatrix Tsatisfying the
equations of Problem 10.3.1. Let S , be such that Q = S',ISl, and let
St be an orthogonal matrix such that S,(S:)-'PS:S: is diagonal. Set
S = S z S I . Then S will sirnultanwusly diagonalize P and Q. From S
and the diagonalized P, find T. Discuss how all Tmight be found.
Problem Let Z(s) be a symmetric positive real matrix with minimal realization
10.3.9
(F, G, H, Jj. Suppose that XF = F'C and XG = -X where 2 has the
usual meaning. Suppose also that R = W is nonsingular. Let P be the
minimum solution of
10.3.8
PF f F'P = -(PC
- H)R-'(PC - H Y
Show that the maximum solution is (XPZ)-1.
10.4
RECIPROCAL SYNTHESIS VIA
RESISTANCE EXTRACTION
I n the last section a technique yielding a reciprocal synthesis for
a prescribed symmetric positive real impedance matrix was considered, the
synthesizing network being viewed as a nondynamic reciprocal network
terminated in inductors and capacitors. Another technique-the
resistance-
SEC. 10.4
RESISTANCE EXTRACTION
419
extraction technique-for achieving a reciprocal synthesis will be dealt with
in this section. The synthesizing network is now thought of as a reciprocal
lossless network terminated in positive resistors. The synthesis also uses
a minimal possible number of reactive elements.
With the assistance of the method given in the last section, we take as our
starting point the knowledge of a minimal realization {F, G, H, J j of Z(s)
with all the properties stated in Theorem 10.3.1; i.e.,
and
+
Suppose that E is [I,, (-I)?,J.
H conformably as shown below:
Let us partition the matrices F, G, and
Hence, F,, is n, x n,, FZ1 is n, x n,, G, and H , are el X m, and G, and
H, are n, x m. Then (10.4.2) may be easily seen to imply that
On substituting the above equalities into (10.4.1), we have
Hence the principal submatrices
- ~ , 1 ]
and I-2Faz] are individually
~ matrix R and
nonnegative
definite. Therefore, there exists an(m -k n , ) r,
.
an n,
X r,
matrix S, where r , is the rank of
of [-2F,J, such that
[
-2,j
mir,
is the m k
420
RECIPROCAL IMPEDANCE SYNTHESIS
CHAP. 10
If we partition the matrix R as
with R, m x r , and R, n,
2 J = R,R;
X
r,, then (10.4.4) reduces to
ZG, = R,Ri
-2F,, = R,R',
(10.4.6)
With these preliminaries and definitions, we now state the following
theorem, which, although it is not obvious from the theorem statement, is
the crux of the synthesis procedure.
x m symmetric positive real
impedance matrix with Z(co) < oo. Let [F, G, H, J ) be a minimal
realization of Z(s) such that (10.4.1) and (10.4.2) hold. Then the
following state-space equations define an (m f r)-port lossless
network:
Theorem 10.4.1. Let Z(s) be an m
[Here x is an n vector, u and y are m vectors, and u, and y, are
vectors of dimension r = r, r,.] Moreover, if the last r ports of
the lossless network are terminated in unit resistors, corresponding to setting
+
U I=
-YI
(10.4.8)
then the prescribed impedance Z(s) will be observed at the remaining m ports.
Proof. We shall establish the second claim first. From (10.4.8)
and the second equation in (10.4.7), we have
SEC. 10.4
RESISTANCE EXTRACTION
421
Substituting the above into (10.4.7) yields
or, on using Eqs. (10.4.6) and collecting like terms
Using now (10.4.3), it is easily seen that the above equations are
the same as
and evidently the transfer-function matrix relating U(s) to Y(s) is
J H'(sI - F)-lG, which is the prescribed Z(s).
Next we prove the first claim, viz., thatthe state-space equations
(10.4.7) define a lossless network. We observe that the transferfunction matrix relating the Laplace transforms of input variables
to those of output variables has a state-space realization, in fact
a minimal realization, (F",,
G,, H,, J,] given by
+
1
F L = - (2F -
Clearly then,
F') =
[->, 71
422
RECIPROCAL IMPEDANCE SYNTHESIS
CHAP. 10
Since these are the equations of the lossless positive real lemma
with P the identity matrix I, (10.4.7) defines a lossless network.
vvv
With the above theorem, we have on hand therefore a passive synthesis
of Z(s), provided we can give a lossless synthesis of (10.4.7). Observe that
though u is constrained to be a vector of currents by the fact that Z(s) is an
impedance matrix (with y being the corresponding vector of voltages), u,
however can have any element either a current or voltage. [Irrespective of the
definition of u,, (10.4.8) always holds and (10.4.7) still satisfies the lossless
positive real lemma.] Thus there are many passive syntheses of Z(s) corresponding to different assignments of the variables u, (and y,). [Note: A
lossless network synthesizing (10.4.7) may be obtained using any known
methods, for example, by the various classical methods described in [6] or
by the reactance-extraction method considered earlier.]
By making an appropriate assignment of variables, we shall now exhibit
a particular lossless network that is reciprocal. In effect, we are thus presenting a reciprocal synthesis of Z(s).
Consider the constant matrix
It is evident on inspection that M is skew, or
M + M'=O
and ZM is symmetric with
z = IL + (-III,,
+
-L I , ~4 In, (-I)IJ
h e s e equations imply that if M is thought of as a hybrid matrix, where ports
SEC. 10.4
m
RESISTANCE EXTRACTION
423
+
1 to m $- r , and the last n, ports are voltage-excited ports with all
others being current-excited ports, then M has anondynamiclosslesssynthesis
using reciprocal elements only. In addition, termination of this synthesis of
M a t the first n, of the last n ports in unit inductances and the remaining n,
of the last n ports in unit capacitances yields a reciprocal lossless synthesis
of the state-space equations (10.4.7) of Theorem 10.4.1, in which the first
r , entries of u1 are voltages, while the last r, entries are currents. Taking the
argument one step further, it follows from Theorem 10.4.1 that termination
of this reciprocal lossless network at its last r ports in unit resistances then
yields a reciprocal synthesis of Z(.V).
We may combine the above result with those contained in Theorem 10.4.1
to give a reciprocal synthesis via resisfance extraction.
Theorem 10.4.2. With the quantities as defined in Theorem
10.4.1, and with the first r , entries of the excitation r vector u,
defined to be voltages and the last r , entries defined to be currents,
the state-space equations of (10.4.7) define an (m r)-port lossless network that is reciprocal.
+
As already noted, the reciprocal lossless network defined in the above
theorem may be synthesized by many methods. Before closing this section,
we shall note one procedure. Suppose that the reciprocal lossless network is
to be obtained via the reactance-extraction techn~que;then the synthesis
rests ultimately on a synthesis of the constant hybrid matrix M of (10.4.9).
Now the hybrid matrix M has a synthesis that is nondynamic and lossless,
as well as reciprocal. Consequently, the network synthesizing M must be
a multiport ideal transformer. This becomes immediately apparent on permuting the rows and columns of M so that the current-excited ports are
grouped together and the voltage-excited ports togkther:
+ +
+
Evidently M,is realizable as an (m r,
n,) x (r, n,)-port transformer
with an (r, n,) X (m r, $. n,) turns-ratio matrix of
+
+
424
RECIPROCAL IMPEDANCE SYNTHESIS
CHAP. 10
The above argument shows that a synthesis of a symmetric positive real
Z(s) using passive and reciprocal elements may be achieved simply by terminating an (m r,
n,) x (r, nJ-port transformer of turns-ratio
matrix T,in (10.4.10) a t the last n, secondary ports in units capacitances,
at the remaining r, secondary ports in unit resistances, at the last n, primary
ports in unit inductances, and a t all but the first m of the remaining primary
ports in r, unit resistances.
Let us illustrate the above approach with the following simple example.
+ +
+
Example Consider the (symmetric) positive real impedance
10.4.1
sf+2.+9
1
1
Z(s) = 8sz+
16s+8=8+sZ+2.+1
A minimal state-space realization of Z(s) that satisfies the conditions in
(10.4.1) and (10.4.2) with X = [I
+ -11
is
as may be easily checked. Now
[-ZF,,]
= 0 =SS
Hence
and the 2 x 3 tums-ratio matrix T, in (10.4.10) is therefore
Note that the second column of T,can in fact be deleted; for the moment
we retain it to illustrate more effectively the synthesis procedure. A
multipart transfonner realizing T, is shown in Fig. 10.4.1, and with
appropriate terminations at certain primary ports and all the secondary
ports the network of Fig. 10.4.2 yields a synthesis of Z(s).
SEC. 10.4
FIGURE 10.4.1. A Multiport Transformer Realizing T. of
Example 10.4.1.
-
. .
.
.
FIGURE 10.4.2. A Synthesis of Z(s) for Example 10.4.1.
It is evident that one resistor is unnecessary and the circuit in Fig.
10.4.2 may be reduced to the one shown in Fig. 10.4.3. By inspection,
we see that the synthesis u s e the minimal number of reactive elements
as predicted, and, in this particular example, the minimal number of
resistances as well.
426
RECIPROCAL IMPEDANCE
FIGURE 10.4.3. A
SYNTHESIS
CHAP. 10
Simplified Version of the Network of
Fig. 10.4.2.
This concludes our discussion of reciprocal synthesis using a minimal
number of reactive elements. The reader with a background in classical
synthesis will have perceived that one famous classical synthesis, the Brune
synthesis [8I, has not been covered. I t is at present an open problem as to
how this can be obtained via state-space methods. It is possible to analyze
a network resulting from a Brune synthesis and derive state-space equations
via the analysis procedures presented much earlier; the associated matrices
F, G, H, and J satisfy (10.4.1) and (10.4.2) and have additional structure.
The synthesis problem is of course to find a quadruple (F, G, H, J j satisfying
(10.4.1) and (10.4.2) and possessing this structure.
Another famous classical synthesis procedure is the reciprocal Darlington
synthesis. Its multiport generalization, the Bayard synthesis, is covered in the
next section.
Problem Consider the positive real impedance function
Using the method of the last section, compute a realization of Z(s)
satisfying (10.4.1) and (10.4.2). Then form the state-space equations
defined in Theorem 10.4.1, and verify that on setting ul = - y , , the
resulting state-space equations have a transfer-function matrix that is
Z(s).
SEC. 10.5
A STATE-SPACE BAYARD SYNTHESIS
427
Problem With the same impedance function as given in Problem 10.4.1, and
with thestate-spaceequationsdefined inThwrem 10.4.1, giveall possible
syntheses ofZ(s)(corresponding to differingassignments for thevariables
ul and y,), including the particular reciprocal synthesis given in the text.
Compare the various syntheses obtained.
10.4.2
In earlier sections of this chapter, reciprocal synthesis methods
were considered resulting in syntheses for a prescribed symmetric positive
real impedance matrix using the minimum possible number of energy storage
elements. The number of resistors required varies however from case to case
and often exceeds the minimum pos~ible.Of course the lower limit on the
number of resistors necessary in a synthesis of Z(s)is well defined and is given
Z'(-s).
by the normal rank of Z(s)
A well-horn classical reciprocal synthesis technique that requires the
minimum number of resistors is the Bayard synthesis (see, e.g., [6,I),the
basis of which is the Gaussfactorization procedure. We shall present in this
section a state-space version of the Bayard synthesis method, culled from [2].
Minimality of the number of resistors and absence of gyrators in general
means that a nonminimal number of reactive elements are used in the synthesis. This means that the state-space equations encountered in the synthesis
may be nonminimal, and, accordingly, many of the powerful theorems of
linear system theory are not directly applicable. This is undoubtedly a complicating factor, making the description of the synthesis procedure more
complex than any considered hitherto.
As remarked, an essential part of the Bayard synthesis is the application
of the Gauss factorization procedure, of which a detailed discussion may be
found in 161. Here, we shall content ourselves merely with quoting the result.
+
Statement of the Gauss Factorization Procedure
Suppose that Z(s)is positive real. As we know, there exist an
infinity of matrix spectral factors W(s)of real rational functions such that
Z(S)
+ Z'(-S)= W'(-s)W(s)
(10.5.1)
Of the spectral factors W(s)derivable by the procedures so far given, almost
all have the property that W(s)has minimal degree; i.e., d[W(.)] = d[Z(.)1.
The Gauss factorization procedure in general yields anonminima[ degree W(s).
But one attractive feature of the Gauss factorization procedure is the ease
with which W(s)may be found.
*Thissection may be amitted at a first reading of this chapter.
428
RECIPROCAL IMPEDANCE SYNTHESIS
CHAP. 10'
The Gauss Factorization. Let Z(s) be an m x m positive
real matrix; suppose that no element of Z(s) possesses a purely
imaginary pole, and that Z(s) Z'(-s) has rank r almost everywhere. Then there exists an r x r diagonal matrix N, of real
Hurwitz* polynomials, and an m x r matrix N, of real polynomials such that
+
Moreover, if Z(s) = Z'(s), there exist N, and N, as defined above
with, also, N,(s) = N,(-s). The matrices N,(s) and Nz(s) are
computable via a technique explained in i61.t
Observe that there is no claim that N, and N, are unique; indeed, they
are not, and neither is the spectral factor
In the remainder of the section we discuss how the spectral factor W(s)
in (10.5.3) can be used in developing a stateapace synthesis of Z(s). The
discussion falls into two parts:
1. The construction of a state-space realization (which is generally nonminimal) of Z(s) from a state-space realization of W(3). This realization
of Z(s) yields readily a nonreciprocal synthesis of Z(s) that uses a minimum number of resistors.
2. An interpretation is given in statespace terms of the constraint in the
Gauss factorization procedure owing to the symmetry of Z(s). An
orthogonal matrix is derived that takes the state-space realization of
Z(s) found in part 1 into a new state-space basis from which a reciprocal
synthesis of Z(s) results. The synthesis of course uses the minimum
number of resistors.
Before proceeding to consider step 1 above, we first note that the Gauss
factorization requires that no element of Z(s) possesses a purely imaginary
pole; later when we consider step 2, it will be further required that Z(m)
be nonsiugular. Of course, these requirements, as we have explained earlier,
are inessential in that Z(s) may always be assumed to possess these properties
without any loss of generality.
*For our purposes, a Hnrwitz polynomial will he delined as one for which all zeros
have negative real parts.
?The examples and problems of this section will involve matrices N , ( . ) and A',(.) that
can be computed without need for the fomal technique of [q.
SEC. 10.5
A STATE-SPACE BAYARD SYNTHESIS
429
Construction of a State-Space Realization of Z(s)
via the Gauss Spectral Factorization
Suppose that Z(s) is an m x m positive real impedance matrix to
be synthesized, such that no element possesses a purely imaginary pole.
Constraints requiring Z(s) to be symmetric or Z ( m ) to be nonsingular will
not be imposed for the present. Assuming that Z(s) Z'(-s) has normal
rank r, then by the use of the Gauss factorization procedure, we may assume
that we have on band a diagonal r x r matrix of Hunuitz polynomials N,(s)
and an m x r matrix of polynomials N,(s) such that a spectral factor W(s)
of
+
is.
:
The construction of a state-space realization of Z(s) will be preceded by
construction of a realization of W(s). Thus matrices F, G, L, and Wo are
determined, in a way we shall describe, such that
[Subsequently, F and G will be used in a realization of Z(s).J
The calculation of W, is straightfonvard by setting s = w in (10.5.3).
Of course, W, = W(-)< m since Z ( m ) < w. The matrices F and L will
be determined next. We note that
where each p,(s) is a Hunvitz polynomial. Defne
matrix with pj(s) as characteristic polynomial; i.e.,
and define I,= [0 0
for each i; further,
... 0 I]'.
As we h o w ,
fi
to be the companion
[fi,41 is completely controllable
430
RECIPROCAL IMPEDANCE SYNTHESIS
as may be readily checked. Define now Pand 2as
Obviously, the pair [E', 21 is completely controllable because each [E,fi] is.
Having Iixed the matrices i? and 2,we now seek a matrix 8 such that
G,e]
Subsequently, we shall use the triple (3,
to define the desired triple
(F, G, L) satisfying (10.5.4). From (10.5.3) we observe that the (i,j)th entry
of V(s) = W'(s) is
is v,,(co), and the
where n,,,(s) is the (i, j)th entry of N,(s) of (10.5.3).
degree of ii,,(s) is less than the degree ofp,(s) with Rri,,(s) the residue of the
polynomial n,(s) modulo p,(s). Define B:, to be the row vector obtained
from the coeficients of ii,,(s),arranged in ascendingpower of s. Then because
(sf- 83-'i can be checked to have the obvious form
SEC. 10.5
A STATE-SPACE BAYARD SYNTHESIS
431
it follows that
re:,
e,:
..,
a:.1
A few words of caution are appropriate here. One should bear in mind
that in constructing 8'and t,if there exists apj(s) that is simply a constant,
then the corresponding block must evanesce, so that the rows and columns
corresponding to the block fi in the P' matrix must be missing. Correspondingly, the. associated rows in the i matrix occupied b y the vector 1, will
evanesce, but the associated single column of zeros must remain in order to
give the correct dimension, because if E' is n x n, then is n x. r, where
r is the normal rank ofZ(s) Z'(-s). Consequently, the number of columns
in i i s always r. The matrix e' will have fib, zero for all q, and the corresponding columns will. also evanesce because G' has n columns.
The matrices p,6, and i defined above constitute a realization of W(s)
- W,. Weshall however derive another realiz@ion.viaa simple state-space
basis change. For each i, define P,as the symmetric-positive-definite solution
of p,Ft
= -ti:. Of course, such a P,always exists because the eigenvalues of R, lie in Re Is] < 0 [i.e., the characteristic polynomial pi($) of 4 is
Hurwitz], and the pair [R,f,] is completely controllable. Take T, to be any
square matrix such that T:T,= P,, and define T as the direct sum of the
Ti. With the definitions F; = T$?T;',li = ( T : ) - ~ ( F
, = T ~ T - Iand
, L=
(T)-li,
it follows that c. F; = ---1,1;and
+
+ RP,
+
Finally, define G = TC?.
Then [F, G, L, W.) is a realization of W(s), with the additional properties
that (10.5.6) holds and that [F,L] is completely 06servable (the latter property
following from the complete obsewability of. [f,L]or, equivalently, the
complete controllability of I#', in.
Define now
We claim that with F, G, and H de$ned as above, and with J = Z(m), (F, G,
H, J ] consfitutes a realization for Z(s). To verify this, we observe that
'a<
432
CHAP. 10
REClPROCAL IMPEDANCE SYNTHESIS
W'(-s)W(s)
+ WbL'(sI - fl-'G + G'(-.d
+ G'(-sl- F3-'LL'(sI- F)"G
= w; w, + W'&'(sI - q - ' G + G'(-sI
= WiW,
+
=
r
2%
.%
,I.S
- F')-'LW,
i:!
7
- F')-'L W,
GI(-SI - F')-l[-sI - F' + sl- q(sI - F)-IG
wgw,+ (G' + WiL')(sI - n - ' G
using (10.5.6)
,
.:
+ G)
4- G'(-sI - F')-'(Lw,
= Wb W, H'(s2- F)-lG
+
i;
+ G'(-$1-
PI)-'H
using (10.5.7)
+
Now the right-hand quantity is also Z(s) Z'(-s). The fact that F is a
direct sum of matrices all of which have a Hunvitz characteristic polynomial
means that H'(sI - fl-'G is such that every element can only have Ieft-halfplane poles. This guarantees that Z(s) = Z(m) H'(sl- 0 - ' G , because
every element of Z(s) can only have left-half-plane poles.
The above constructive procedure for obtaining a realization of Z(s) up to
this point is valid for symmetric and nonsymmetricZ(s). The procedureyields
immediately a passive synthesis of an arbitstry positive real Z(s), in general
nonreciprocal, via the reactance-extraction technique. This is quickly illustrated.
Consider the constant impedance matrix
+
It follows on using (10.5.6), (10.5.7), and the constraint WLW, = J
that
+ J'
+
M' is nonnegative definite, implying that M is synthesizable
Therefore, M
with passive elements; a passive synthesis of Z(s) results when all but the
first m ports are terminated in unit inductors. The synthesis is in general
nonreciprocal, since, in general, M # M'.
In the following subsection we shall consider the remaining problemthat of giving a reciprocal synthesis for symmetric Z(s).
Reciprocal Synthesis of Z(s)
As indicated before, there is no loss of generality in assuming that
Z(m) 4-Z'(m) is nonsingular and that no element of Z(s) has a purely
. .
A STATE-SPACE BAYARD SYNTHESIS
SEC. 10.5
433
imaginary pole. Using the symmetry of Z(s) and letting o = oc, it follows
that Z(m) is positive definite.
Now normal rank [Z(s) Z'(-s)] 2 rank [Z(m) Z'(--)I
= rank
[2Z(co)]. Since the latter has full rank, normal rank [Z(s) Z'(-s)] is m.
Then W(s), N,(s), and N,(s), as defined in the Gauss factorization procedure, are m x m matrices. Further, WbW, = 2J, and therefore W, is
nonsingular. As indicated in the statement of the factorization procedure,
N,(s) may be assumed to be even. We shall make this assumption.
Our goal here is to replace the realization [F, G, H, J ] of Z(s) by a realization {Tm-',
TG, (T-')'H, J ] such that the new realization possesses additional properties helpful in incorporating the reciprocity constraint. To
generate T,which in fact will be an orthogonal matrix, we shall define a
symmetric matrix P. Then P will be shown to satisfy a number of constraints, including a constraint that all its eigenvalues be tl or -1. Then
Twill be taken to be any orthogonal matrix such that P = T'XT, where X
is a diagonal matrix, with diagonal entries being +1 or - I .
+
+
+
Lemma 10.5.1. With F,G, H, L, and W, as defined in this
section, the equations
define a unique symmetric nonsingular matrix P. Moreover, the
following equations also hold:
P C = -H
PF = F'P
To prove the lemma, we shall have to call upon the identities
L'(sl-
- F')-'L
(10.5.10)
F)-'L = G'(sI - F ) - ' L
(10.5.11)
F)-'L = L'(sl
and
-H'(sl-
The proof of these identities is requested in Problem 10.5.2. The proof of the
second identity relies heavily on the fact that the polynomial matrix N,(s)
appearing in the Gauss Factorization procedure is even.
Proof. If {F, L, L) is aminimal realization of L'(sl- F)-'L, the
existence, uniqueness, symmetry, and nonsingularity of P follow
434
RECIPROCAL IMPEDANCE SYNTHESIS
CHAP. 10.
from the important result giving a state-spacecharacterization of
the symmetry property of a transfer-function matrix. Certainly,
[F, L] is completely observable. That [F, L] is completely controllable can be argued as follows: because iF',L] is completely
controllable, [F' - LK', L] is completely controllable for all K.
Taking K = -L and using the relation P F' = -LL', it
follows that [F, L] is completely controllable.
The equation yielding P explicitly is, we recall,
+
P[L F'L
..-
(F')"-'L]
= [L
FL
... F"-'L]
(10.5.12)
Next, we establish (10.5.9). From the identity (10.5.1 I), we have
G'(d- F')-'L
-H'(sI - F)-'L
P-'FP)-'P-'L
= -H'P(sI - F')-'L
=
= -H'P(sl-
Conskquently, using the complete controllability of [F', L],
This is the first of (10.5.9), and together with the definition of H
as G LWb, it yields
+
or, again, using (10.5.7),
This is the second equation of (10.5.9). Next, from F
-LLt, we have the third equation of (10.5.9):
- PLL'
- -FP - LL'P
-
+ F' =
PF = -PF'
= -(F
= F'P
by (10.5.8)
+ LL')P
using (10.5.6)
This equation and (10.5.8) yield that P-'L = L and FP-' =
P-'F', or, in other words, P-1 satisfies FX = XF' and XL = L.
Hence the last equation of (10.5.9) holds, proving the lemma.
vvv
SEC. 10.5
A STATE-SPACE BAYARD SYNTHESIS
435
Equation (10.5.9) implies that "
P
I, and thus the eigenvalues of P must
be +I or -1. Since P is symmetric, there exists an orthogonal matrix T
such that P = W T ' , with X consisting of +1 and -1 entries in the diagonal
positions and zeros elsewhere.
We now define a realization {F,, G,, H,, J] of Z(s) by F1 = TYT, GI
= TG, and H , = TH; the associated hybrid matrix is
This hybrid matrix is the key to a reciprocal synthesis.
Theorem 10.5.1. Let Z(s) be an m x m symmetricpositive real
impedance matrix with Z(oo) nonsingular and no element of
Z(s) possessing a purely imaginary pole. With a realization
IF,, G,, H,, J ) of Z(s) as defined above, the hybrid matrix M of
of (10.5.13) satisfies the following constraints:
+
1. M
M' 2 0, and has rank VJ.
2. (Im X)M is symmetric.
+
Proof. By direct calculation
M+M'==(I,+T)
25
G-H
+ T )[W:wo-LW,
= (Im
= (1.
+
C T)
rw:]-
-H'+ G
-F-P
-w:r
LL'
w0 -L2
](1, 4(I,,,& T')
Hence M M' is nonnegative definite. The rank condition is
has rank no less than W,,
obviously fi&i!Jed, since [W,
which has rank m.
It remains to show that/ = J', ZG,= - H I , and XF, = F i x .
The first equation is obvious from the symmetry of Z(s). From
(10.5.91. T'XTG = -Hor U r n = -TH. That is. ZG,= p H . .
k s o , f;bm (10.5.9), T'ZTF ;
F'T'ZT or ZTFT = TF'TX. hat
is, ZF, = F i x .
VVV
-a
With the above theorem in hand, the synthesis o f Z ( s ) IS immediate.
The matrix M is realizable as a hybrid matrix, the current-excited ports
corresponding to the rows of I, -i-I:that have a +I on the diagonal, and
the voltage-excited ports corresponding to those rows of I,
I: that have
a -1 on the diagonal. The theorem guarantees that M is synthesizable using
436
CHAP. 10
RECIPROCAL IMPEDANCE SYNTHESIS
only passive and reciprocal components. A synthesis of Z(s) then follows
on terminating current-excited ports other than the h t min urtit inductances,
and the voltage-excited ports in unit capacitances. Since M M' has rank
m, the synthesis of M and hence of Z(s) can be achieved with m resistancesthe minimum number possible, as aimed for.
As a summary of the synthesisprocedure, the followingsteps may be noted:
+
1. By transformer, inductance, and capacitance extractions, reduce
the general problem to one of synthesizing an rn x m Z(s), such that
Z(m) is nonsingular, rank [Z(s) Z'(-s)] = in, no element of Z(s)
possesses a pole on the jw axis, and of course Z(s) = Z'(s).
2. Perform a Gauss factorization, as applicable for symmetric Z(s). Thus
W(s) is found such that Z'(-s)
Z(s) = W'(-s)W(s) and W(s)
= [N,(s)]-ll-'N:(s), where N l ( . )is a diagonal m x rn matrix of Hunvitz
polynomials, and N,(s) is an rn x m matrix of even polynomials.
3. Construct P using companion matrices, and 1 and G from W(s).From
g, 2, and 6 determine F, L, and G by solving a number of equations
of the kind ~ , f e p j = -[,& [see (10.5.6) and associated remarks].
4. Find H = G LW,,and P as the unique solution of FP = PF' and
PL = L.
5. Compute an orthogonal matrix T reducing P to a diagonal matrix Z of
+1 and -1 elements: P = TTT.
6. With F, = TFT', GI = TG, and H, = TH, define the hybrid matrix
M of Eq. (10.5.13) and give a reciprocal synthesis. Terminate appropriate ports in unit inductors and capacitors to generate a synthesis
of Z(s).
+
+
+
Example
10.5.1
+
Let us synthesize the scalar positive real impedance
Z(S)
=
s2+2S+4
~2 f S
+1
As 6 s ) has no poles on the jw axis and z(co) is not zero, we compute
It is readily seen that candidates for W(s)are f l ( s 2
+ s + 2)/(s2+ s
+ 1 ) or 0 ( s 2- s + 2)/(@ + s + 1). But the numerators are not
even functions. To get the numerator of a W(s)to be an even function,
multiply 4 s ) z(-s) by 6% 3 9 4)/(s+ 3 9 4); thus
+
+
+
+
SEC. 10.5
A STATE-SPACE BAYARD SYNTHESIS
437
The process is equivalent to modifying the impedance z(s) through
multiplication by ( 9 s 2)i(sz s 2), a standard technique used
in the classical Darlington synthesis 181. (The Gauss factorization that
gives N,(s) = N,(-s) forZ(s) = Z'(s) in multiports, often only by insertingcommon factorsinto N,(s)and N,(s),
may beregarded as a generalization of the classial one-port Darlington procedure.)
We write
+ +
+ +
a realization of which is
&=[zJz
-0
-2.Jzl
wo= f l
Obtain P, through P,P + F^'P, = -Lt to give
- 3 a
One matrix T,such that TiT, = P is
438
RECIPROCAL IMPEDANCE SYNTHESIS
CHAP. 10
Therefore, matrices of a new state-space realization of W(s) are
At this stage, nonreciprocal synthesis is possible. For reciprocal synthesis.
we proceed as follows. The solution of FP = PF' and PL = L is
A STATE-SPACE BAYARD SYNTHESIS
439
Next, we form the hybrid matrix
for which the first three ports correspond to currentexcited ports and the
last two ports.correspond to voltageexcited ports. It can be verified that
(1 X)M is symmetric; M f M'is nonnegative definite and has rank 1.
A synthesis of Z(s) is shown in Fig. 10.5.1. Evidently, the synthesis uses
one resistance.
+
Problem Synthesize the symmetric~positive-realimpedance
using the procedure outlined in this section.
Problem With matrices as defined in this section, prove the identity
10.5.2
L'(s1- F)-'L = L'(sI
by showing that L'fsZ
- F')-'L
- F)-IL is diagonal.
Prove the identity
[Hint: The (i,j)th element of W'(s)
=
W, + G(sI
-
F T L Lis
with n;, = fi:lTi. The polynomial n,,](s) is even if Z(s) is symmetric.
Show that the dimension of F, is even and that
440
RECIPROCAL IMPEDANCE SYNTHESIS
CHAP. 10
FIGURE 10.5.1. A Final Synthesis forZ(s) in Example 10.5.1.
nz,(s)
- F:) + n;, tadj (-st - F;)]!,
+ F;) + n:, iadj (sI 4- F$)]Ij
= det (st - Fj)[vOlj - njj(sI + F$)-'I;1[1 - l;(sl - Fj)"lj]
= woo det (-st
= s o ,det (sl
= det (sl
- Fj)tvo, - (vo,,l$ + n:,)(sI - F,)-'ljl
from which the result follows.]
Problem (For those familiar with the classical Darlington synthesis.) Given the
T 0.5.3
impedance function
give a synthesis for this z(s) by (a) the procedure of this section, and
(b) the Darlington method, if you are familiar with this technique.
Compare the number of elements of the networks.
Problem It is known that the Gauss factorization does not yield
10.5.4
Consider the symmetric positive real impedance matrix
a unique W(s).
CHAP. 10
REFERENCES
441
Using the Gauss factorization, obtain two different factors W(s) and the
corresponding state-space syntheses for Z(s).
Compare the two syntheses.
REFERENCES
[ 1I S. VONGPANITLW,
"Reciprocal Lossless Synthesis via Statevariable Tech-
niques," IEEE Transactions on Circuit Theory, Vol. CT-17, No. 4, Nov. 1970,
pp. 630632.
[ 2 ] S. VONOPANITLERD
and B. D. 0. ANDERSON,
"Passive Reciprocal State-Space
Synthesis Using a Minimal Number of Resistors," Proceedings of the IEE, Vol.
117, No. 5, May 1970, pp. 903-911.
[ 3 I R. YARLAGADDA,
"Network Synthesis--A State-Space Approach," IEEE
Transactions on Circuit iZzeory, Vol. (3T-19, No. 3, May 1972, pp. 227-232.
[ 4 1 S. VONGPANITLERD,
"Passive Reciprocal Network Synthesis-the State-Space
Approach," Ph.D. Dissertation, University of Newcastle, Australia, May 1970.
[5] F. R. GANTMACHER,
The Theory of Matrices, Chelsea, New York, 1959.
[ 61 R. W. NEWCOMB,
Linear Multipart Synthesis, McGraw-Hill, New York, 1966.
[ 7 I M. BAYARD,"R&olution du probl&mede la synthhe des &earn de Kirehhoff
par la dbtermination de r&eaux purement rbctifs," Cables et Tramission,
Voi. 4, No. 4, Oct. 1950, pp. 281-296.
181 E. A. GUILLEMIN,
Synthesis of Passive Nerworks, Wiley, New York, 1957.
Scattering Matrix Synthesis
11-1 INTRODUCTION
In Chapter 8 we formulated two diierent approaches to the problem of synthesizing a bounded real m x m scattering matrix S(s); the first
was based on the reactance-extraction concept, while the second was based
on the resistanceextraction concept. In the present chapter we shall discuss
solutions to the synthesis problem using both of these approaches.
The reader should note in advance the following points. First, it will be
assumed throughout that the prescribed S(s) is a normalized scattering matrix;
i.e., the normalization number at each port of a network of scattering matrix
S(s) is unity [I, 23.
Second, the bounded real property always implies that S(s) is such that
S ( M ) < m. Consequently, an arbitrary bounded real S(s) always possesses
a state-space realization {F, G, H, JJ such that
Third, we learned earlier that the algebraic characterization of the bounded
real property in state-space terms, as contained in the bounded real lemma,
applies only to minimal realizations of S(s). Since the synthesis of S(s) in
state-space terms depends primarily on this algebraic characterization of its
bounded real property, it foIIows that the synthesis procedures to be considered will be stated in terms of a particular minimal realization of S(s).
SEC. 11.1
INTRODUCTION
443
Fourth, with no loss of generality, we shall also assume the availability of
minimal realization (F, G, H, J) of S(s) with the property that for some real
matrices L and W,, the following equations hold:
F + F 1 = - H H 1 - LL'
- G = HJ+LW,
I - J'J= WbW,
(11.1.2)
That this is a valid assertion is guaranteed by the following theorem, which
'follows easily from the bounded real lemma. Proof of the theorem will be
omitted.
Theorem 11.1.I.Let {F,,Go,H,, J] be an arbitrary minimal
realization of a rational bounded real matrix S(s). Suppose that
{P,L,, W,} is a triple that satisfies the bounded real lemma for
the above minimal realization. Let T be any matrix ranging over
the set of matrices for which
T'T = P
(11.1.3)
Then the minimal realization (F, G, H, J] given by {TF,T-', TG,,
( T 1 ) ' H , ,J) satisfies Eq. (11.1.2), where L = (TL)'L,.
Note that F,G, H, L, and Wo are in no way unique. This is because there
are an infinity of matrices P satisfying the bounded real lemma equations for
any one minimal realization (F,,Go,Ho, J) of S(s). Further, with any one
matrix P, there is associated an infinity of L and W, in the bounded real
lemma equations and an infinity of T satisfying (11.1.3). Note also that the
dimensions of L and W,are not unique, with the number of rows of L' and
W , being bounded below by p = normal rank [I- S'(-s)S(s)].
Except for lossless synthesis, (1 1.1.2) will be taken as the starting point in
all the following synthesis procedures. In the ,case of a lossless scattering
matrix, we know from earlier considerations that the matrices L and W, are
actually zero and (1 1.1.2) reduces therefore to
A minimal realization of a lossless S(s) satisfying (1 1.1.4) is much simpler to
derive than is a minimal realization of a lossy S(s) satisfying (11.1.2). This is
because the lossless bounded real lemma equations are very easily solved.
The following is a brief outline of the chapter. In Section 11.2 we consider
444
CHAP. 11
SCATTERING MATRIX SYNTHESIS
the synthesis of nonreciprocal networks via the reactanceextraction
approach. Gyrators are allowed as circuit components. Lossless nonrecip
rocal synthesis is also treated here as a special case of the procedure. In
Section 11.3 a synthesis procedure for reciprocal networks is considered,
again based on reactance extraction. Also, a simple technique for lossless
reciprocal synthesis is given. The material in these sections is drawn from
[3] and 141. Finally, in Section 11.4 syntheses of nonreciprocal and reciprocal
networks are obtained that use the resistance-extraction technique; these
syntheses are based on results of 151. All methods in this chapter yield minimal reactive syntheses.
11.2
REACTANCE-EXTRACTION SYNTHESIS
OF NONRECIPROCAL NETWORKS
As discussed earlier, the reactance extraction synthesis problem of
finding a passive structure synthesizing a prescribed m x m bounded real
S(s) may be phrased as follows:
From S(s), derive a real-rational W(p) via a change of complex
variables
for which
Then End a Statespace realization (F,, G,, H,,Jw]for W(p) such
that the matrix
satisfies the inequality
As seen from (11.2.2), the real rational matrix W(p) is derived from the
bounded real matrix S(s) by the bilinear transformations = (p I)/@ - 1)
of (11.2.1), which maps the left half complex s plane into the unit circle in
the complexp plane. Of greater interest to us however is the relation between
minimal statespace realizations of S(s) and minimal state-space realizations
of W(p). Suppose that (Po,Go,H,, J ] is a minimal realization of S(s). Now
+
SEC. 11.2
REACTANCE-EXTRACTION SYNTHESIS
Consider only the term [PI - (F.
[PI - (4
+
+ I)(Fa - I)-'1-'(p - 1). We have
- I)"l-'[pl-
x [PI - (F,
= [PI - (F,
I1
+ I)(Fa - I)"]-'
= [ P I - (F,
445
+ I)(F, - I)-'
+ I)(F. -
- I + (8
+ I)(F, - I-)-']
+
[ P I - (F, 4-I)(F, - I)-'
2(F0 - I)-']
=I
2[pI - (F, f I)(F, - I)-'l-'(F0 - I)''
X
+
Therefore
Evidently, a state-space realization {F,, G,,, H,,, J.} may be readily identified
to be
Several points should be noted.
1 . The quantities in (11.2.5) are well defined; i.e., the matrix (Fa - I)-'
exists by virtue of the fact that S(s) is analytic in Re [s] > 0 and, in
particular, at s = 1. In other words, f l can not be an eigenvalue of F,.
2. By a series of simple manipulations using (11.2.5), it can be verified
easily that the transformation defined by (11.2.5) is reversible. In fact,
{F,,Go,If,, 4 is given in terms of {Fvo,G.,, H,,,
J,].by exactly the same
equations, (11.2.5), with {F,, Go,H,, J) and IF,, G,,, If,,, JJ interchanged.
3. The transformations in (11.2.5) define a minimal realization of W(p)
if and only if (Fa, G o ,H,,JJ is a minimal realization of S(s). (This fact
follows from 2 incidentally; can you show how?)
446
SCATTERING MATRIX SYNTHESIS
CHAP. 11
We now state the following lemma, which will be of key importance in
solving the synthesis problem:
Lemma 11.2.1. Suppose that (F, G, H, J ) is oneminimal realization of S(s) for which (11.1.2) holds for some real matrices L and
W,. Let IF,, G,, H,,3.1 be the associated matrices computed via
the transformations defined in (11.2.5). Then IF,, G,, H,, Jw)are
such that
for some real matrices L, and W,.
Proof. From (11.2.5) we have
D e h e matrices L, and Wwby
We then substitute the above equations into (11.1.2). The first
equation of (11.1.2) yields
This reduces simply, on premultiplying by F:
plying by F, - I, to
- I and
postmulti-
which is the first equation of (11.2.6). The second equation of
(11.1.2) yields
SEC. 11.2
REACTANCE-EXTRACTION SYNTHESIS
447
Straightforward manipulation then gives
Using the first equation of (11.2.6), the above reduces to
This establishes the second equation of (1 1.2.6). Last, from the
third relation of (11.1.2), we have
On expansion we obtain
The second equality above of course follows from the use of the
first and the second relations of (11.2.6). Thus (11.2.6) is established, and the lemma proved.
VVV
Suppose that we compute a minimal realization IF., G,, H,, J,) of W(p)
from a particular minimal realization (F, G, H, Jl satisfying (1 1.1.2). Then,
using Lemma 11.2.1, we can see that IF,, G,, Hw,J.] possesses the desired
448
CHAP. 7 1
SCATfEflING MATRIX SYNTHESIS
property for synthesis, viz., the matrix M defined in (11.2.3) satisfies the
inequality (11.2.4). The following steps prove this:
I - M'M=
1 - J,J,
-(H,J.
=
- GLG,
--(H.J.
+- FLG,)'
+ F: G,) I - H,H:
1
- FLF,
L.,
The second equality follows from application of (11.2.6). The last equality
obviously implies that I- M'M is nonnegative definite. Therefore, as discussed in detail in an earlier chapter, the problem of synthesizing S(s) is
reduced to the problem of synthesizing a real constant scattering matrix S ,
which in this nonreciprocal case is the same as M. Synthesis of a constant
hounded real scattering matrix is straightforward; a simple approach is to
apply the procedures discussed earlier,* which will convert the problem of
synthesizing S, to the problem of synthesizing a constant impedance matrix
2,.We know the solution to this problem, and so we can achieve a synthesis
of S(s).
A brief summary of the synthesis procedure follows:
(F,,Go,H,,4 for the bounded real m x
m S(s).
2. Solve the bounded real lemma equations and compute another minimal
realization [F, G, H, J1 satisfying (1 1.I .2).
3. Calculate matrices F,, G
.
, H., and J, according to (11.2.5), and obtain
the scattering matrix S, given by
1. Find a minimal realization
The appropriate normalization numbers for So, other than the first m
ports, are determined by the choice of the reactance values as explained
in the next step.
4. Choose any set of desired inductance values. (As an alternative, choose
all reactances to be capacitive; in this case the matrices G, and F, in
S, are replaced by -G, and -F,, respective1y.t) Then calculate the
*InUlccarlierdiscussionitwas aluaysassumed lhar thcscatteringrnarrixwas normallred.
If S. is not initially nonalixd, ir must firs1 bc converted to a norm~lizedmatrix (see
chapter 2).
+This point will become clear if the reader studies carefully thematerial in Section 8.3.
SEC. 11.2
REACTANCE-EXTRACTION SYNTHESIS
449
normalization numbers r,,,, I = 1,2, . . .,n with r,,, = L, (or r,,,
= 1/C, if the reactances are capacitive). Of course, the normalization
numbers at the first rn ports are iixed by S(s), being the same as those
of S(s), i.e., one.
5. Obtain a nondynamicnetwork N,synthes~zjngthe real constant scattering matrix S. with the appropriate normalization numbers for S, just
defined. Finally, terminate the last n ports of N, appropriately in inductors (or capacitors) chosen in step 4 to yield a synthesis for the prescribed S(s).
Note that at no stage does the transfer-function matrix W(p) have to be
explicitly computed; only the matrices of a state-space realization are
required.
Lossless Scattering Matrix Synthesis
A special case of the above synthesis that is of significant interest
is lossless synthesis. Let an m x rn scattering matrix S(s) be a prescribed
lossless bounded real matrix; i.e.,
1 - sr(-s)s(~) = o
(I 1.2.9)
Now one way of synthesizing S(s) would be to apply the "preliminary"
extraction procedures of Chapter 8. For lossless S(s), they are not merely
preliminary, but will provide a complete synthesis. Here, however, we shall
assume that these preliminary procedures are not carried out. Our goal is
instead one of deriving a real constant scattering matrix
where [F., G,,H,, J,] is a statespace realization of
with
such that M is lossless bounded real, or
Let IF, G, H,JJ be a minimal realization of S(s). Assume that F is an
n x n matrix, and that (11.1.4) holds; i.e.,
450
SCATTERING MATRIX SYNTHESIS
CHAP. 11
F + F'= -HH'
-G = HJ'
(11.2.11)
I-J'J=O
Let one minimal realization (F., G,, H,, J,] of W(p) be computed via the
transformations defined in (11.2.5). Then, following the proof of Lemma
11.2.1, it is easy to show that because of (11.2.11), the matrices F,, G,, H,,
and J, are such that
FLFw - I = -HwHL
-FLG, = H,J,
(11.2.12)
I - JLJ, = GwG,
With the above IF,, G,, H,, J,] on hand, we have reached our goal, since
it is easily checked that (11.2.12) implies that the real constant matrix M of
(11.2.3) is a lossless scattering matrix satisfying (11.2.10). Because M is lossless and constant, it can be synthesized by an (m n)-port lossless nondynamic network N, containing merely transformer-coupled gyrators. A
synthesis of S(s) then follows very simply.
Of course, the synthesis procedure for a general bounded real S(s) stated
earlier in this section also applies in this special case of lossless S(s). The only
modification necessary lies instep 2, which now reads
+
2'. Solve the lossless bounded real lemma equations and compute another
minimal realization {F, G, H,J) satisfying (1 1.2.11).
The following examples illustrate the reactance extraction synthesis procedure.
Example
Consider tlie bounded real scattering function (assumed normalized)
11.2.1
S(s) =
1
4
1+2r
s++
It is not difficultto verify that I-&, 1 / n , 1 / n , 0) is a minimal
realization of S(s) that satisfies (11.1.2) with L = 1/JZ and W p= -1.
From (11.2.5) the realization (F,,G,, H,,J,] is given by (-4, -f,+,
$1, so
It is readily checked that I - S:S. 2 0 holds.
Suppose that the only inductance required in the synthesis is chosen to
be 1 H. Then S. has unity normalizations at both ports. To synthesize
S., we deal with the equivalent impedance:
REACTANCE-EXTRACTION SYNTHESIS
451
The symmetric part (Z.),, has a synthesis in which ports 1 and 2 are
decoupled, with port 1 simply a unit resistance and port 2 a short circuit;
the skew-symmetric part (Z.),, has a synthesis that is simply a unit
gyrator. The synthesis of Z. or of Sois shown in Fig. 11.2.1. Tbe final
synthesis for S(s)shown in Fig. 11.2.2 is obtained by terminating port 2
of the synthesis for S. in a 1-H inductance. On inspection and on noting
FIGURE 11.2.1. Synthesis of Z. in Example 11.2.1.
FIGURE 11.2.2. Synthesis of S(s) of Example 11.2.1.
452
SCATTERING MATRIX SYNTHESIS
CHAP. 11.
that a unit gyrator terminated in a unit inductance is equivalent io a unit
capacitance, as Fig. 11.2.3 illustrates, Fig. 11.2.2 reduces simply to a
series connection of a capacitance and a resistance, as shown in Fig.
11.2.4.
Example As a further illustmtion, consider the same bounded real scattering func11.2.2
tion as in Example 11.2.1. But suppose that we now choose the only
reactance to be a I-Fcapacitance; then the real constant scattering matrix
Sois given by
The matrix I- SLS; may be easily checked to be nonnegative definite.
. has unity normalization numbers
Since the capacitance chosen is 1 F, S
a t all ports. Next, we convert S. to an admittance function*
Ye= (I - S*)(Zt so)-'
FIGURE 11.2.3. Equivalent Networks
FIGURE 11.2.4. A Network Equivalent to that of Figure
11.2.2.
*Note that an impedance function for SI does not exist because I - S. is singular. Of
course. if we like. we can (with the aid of an orthogonal
transformer) reduce S. to a lower
.
order be such that I is now nonsingular--see the earlier preliminary simplification
procedures.
s.'
SEC. 11.2
REACTANCE-EXTRACTION SYNTHESIS
453
Figure 11.2.5 shows a synthesis for Yo;termination of this network in a
unit capacitance at the second port yields a final synthesis of S(s), as
shown inFig. 11.2.6. Ofcourse thenetwork of Fig. 11.2.6 can be simplified
to that of Fig. 11.2.4 obtained in the previous example.
Problem Prove that the transformations defined in (11.2.5) are reversible in the
11.2.1
sense that the matrices CF,, Go,Ha,
J) and (Fw0,G,., H,., J,) may be
interchanged.
Problem Show that among the many possible nonreciprocal network syntheses
for a prescribed S(s) obtained by the method considered in this section,
thereexist syntheses that are minimal resistive; i.e. the number of resistors
11.2.2
FIGURE 11.2.5. Synthesis of
Y. in Example 11.2.2.
1F
FIGURE 11.2.6. Synthesis of S(s) in Example 11.2.2.
454
SCATTERING MATRIX SYNTHESIS
CHAP. 11
employed in the syntheses is normal rank [I- S(-s)S(s)]. How are
they to be obtained?
Problem Svnthesize the lossless bounded real matrices
11.2.3
(a) S(s) =
-
-1
-10s
+4
1
Problem Give two
11.2.4
(s'
s
syntheses for the bounded real scattering function S(s) =
one with all reactances inductive, and the
other with all reactances capacitive.
+ + I)(&' + 2s + I)-',
11.3
REACTANCE-EXTRACTION SYNTHESIS
OF RECIPROCAL NETWORKSa
In this section the problem of reciprocal or gyratorless passive
synthesis of scattering matrices is discussed. The reactance-extraction technique is still used. One important feature of the synthesis is that it always
requires only the minimum number of reactive or energy storage elements;
this number n is precisely the degree 6[S(s)] of S(s), the prescribed m X m
bounded real scattering matrix. The number of dissipative elements generally
exceeds rank [I - S(-s)S(s)] however.
Recall that the underlying idea in the synthesis is to find a statespace
realization IFw, G,, H,,J.] of W(p) = s ( p l)j(p I)] such that the real
constant matrix
+
-
possesses the properties
(I) I - M'M 2 0,
(11.3.2)
and
(2) (1,
4- X)M
= M'(1,
f- X).
(11.3.3)
Here Z is a diagonal matrix with diagonal entries being + I or -1 only.
Recall also that if the above conditions hold with F, possessing the minimum
possible size n x n, then a synthesis of S(s) can be obtained that uses the
minimum number n of inductors and capacitors.
The real constant matrix M deiines a real constant matrix
*This section may be omitted at a first reading
SEC. 11.3
RECIPROCAL REACTANCE-EXTRACTfON SYNTHESIS
455
which is the scattering matrix of a nondynamic network. The normalization
numbers of S, are determined by those of S(s) and by the arbitrarily chosen
values of inductances and capacitances that are used when terminating the
nondynarnic network to yield a synthesis of S(s). In effect, the first condition
(1 1.3.2) says that the scattering matrix S, possesses a passive synthesis, while
the second condition (11.3.3) implies that it has a reciprocal synthesis. When
both conditions are fulfilled simultaneously, then a synthesis that is passive
and reciprocal is obtainable.
In the previous section we showed that if a minimal realization (Fw,G,,
H,, J,] of W(p) satisfies
F',F,-
I = -H,HI
-FLG,
I - J:J,=
= H.J,
- L,Lk
4- L.W,
W:W,+
(11.3.5)
GLG,
for some real constant matrices L,,and W,, then the matrix M of (11.3.1)
satisfies (11.3.2). Furthermore, the set of minimal realizations (Fw,G,, H,,
J,] of W(p) satisfying(11.3.5) derives actually from the set of minimal realizations (F, G, H, JJ of S(s) satisfying the (special) bounded real lemma equations
F+F'=-HH'-LL'
-G = H J + LW,
I-J'J=
(I 1.3.6)
WLW,
whereL and W,are some real matrices. We also noted the procedure required
to derive a quadruple [F,G, H, J ) satisfying (11.3.6), and hence a quadruple
IFw, G,, H,, Jv) fulfilling(1 1.3.5). Our objective now is to derive from a minimal realization IF,, G., H.,J,] satisfying (11.3.5) another minimal reallzation {F.,, G.,, H,,, J,) for which both conditions (11.3.2) and (11.3.3) hold
simultaneously; in doing so, the reciprocal synthes~sproblem will be essentially solved.
Assume that (F,, G,, X,,J,] is a state-space minimal realization of W(p)
[derived from S(s)] such that (11.3.5) holds. Now W(p)is obviously symmetric
because S(s) is. The symmetry property then guarantees the existence of a
unique symmetric nonsingular matrix P with
PF, = F I P
PG, = H,
(The same matrix P can also be obtained from PF = F'P and PG = -H
456
SCATTERING MATRIX SYNTHESIS
CHAP. 11
incidentally.) Recall that any matrix T appearing in a decomposition of P
of the form
P = T'XT
+ (-1)IJ
X = [In,
generates another minimal realization {TF,TF1,TG,, (T-')'H,, Jw] of WO,)
that satisfies the reciprocity condition of (11.3.3) (see Chapter 7). However,
there is no guarantee that after the coordinate-basis change the passivity
condition remains satisfied. Our aim is to exhibit a specific statespace basis
change matrix T from among all possible matrices in the above decomposition of P such that in the new state-space basis, passivity is preserved. The
required matrix Twill be given in Theorem 11.3.1, but first we require the
following two results.
Lemma 11.3.1. Let C be a real matrix similar to a real symmetric
matrix D; then I D'D is positive definite (nonnegative definite)
if I- C'C is positive definite (nonnegative definite), but not
necessarily converseIy.
-
By using the facts that matrices that are similar to one another possess
the same eigenvalues, and that a real symmetric matrix has real eigenvalues
only [6], the above lemma can be easily proved. We leave tlie formal proof
of the lemma as an exercise.
Thc second result we require has, in fact, been established in Lemma 10.3.2;
for convenience, we shall repeat it here without proof.
Lemma 11.3.2. Let P be the symmetric nonsingular matrix
satisfying (11.3.7), or equivalently PF = F'P and PG = -H.
Then P can be represented as
P = BVXV' = VXV'B
where B is symmetric positive definite, V is orthogonal, and X
= [I.,/ (-l)I#,J. Further, VZV'commutes with B1/Z,the unique
positive definite square root of B;i.e.,
VXV'B'J2 = B'J2VXV'
(1 1.3.9)
These two lemmas provide the key raults needed for the proof of the main
theorem, which is as follows:
Theorem 11.3.1. Let [F, G, H, 4 be a minimal realization of
a bounded real symmetric S(s) satisfying the special bounded real
lemma equations (11.3.6). Let (F,, G,, H,, J,] denote the minimal
realization of the associated real symmetric matrix W@) =
SEC. 11.3
RECIPROCAL REACTANCE-EXTRACTION SYNTHESIS
457
S[(P -I- l)l(p - 1)l derived from {F, G, H,J j through the transformations
F,= (F+I)(F-I)-'
Gw= &(F - I)-%G
Hw = - p ( F - I)-IH
J, = J - H'(F - I)-'G
(11.3.10)
With V, B, and 2 defined as in Lemma 11.3.2, define the nonsingular matrix T by
Then the minimal realization [F",,G., H,,, J.) = [TF,T", TG,,
(T1)'H., J,) of W@) is such that the wnstant matrix
has the following properties:
I-M$Ml>O
+ Z)M,
(Im
= M',(Im Jr
Z)
Proof. We have, from the theorem statement,
where the second and thud equalities are implied by Lemma
11.3.2. As noted above, it follows that T defines a new minimal
realization {FwF,,, G,,, H,,,
JJ of W(p)such that
holds.
Next, we shall prove that (11.3.12) is also satisfied. With M
defined as [l'
G.
*'I,
F.
it is dear that
M,= I1 4- T]M[I 4- T-'1
or, on using (11.3.11),
458
SCAnERING MATRIX SYNTHESIS
+
CHAP. 11
+
where M, denotes [I Vi]MII V ] . from the theorem statement we know that I - M'M >0; it follows readily from the
orthogonal property of I V that
+
Equation (11.3.15) is to be used to establish (I1.3.IZ).
On multiplying on the left of (1 1.3.14) by I X, we have
+
But from (11.3.9) of Lemma 11.3.2, it can be easily shown that
XY'B1"V = J"BL/2VX,or that Z commutes with V'BL/2V.Hence
I X commutes with I iV'B1"V. Therefore, the above equation reduces to
+
Observe that (13.3.15) can be rewritten as
Observe also, using (11.3.16), that ( I + X)Mo is similar to
( I X)M,, which has been shown to be symmetric. It follows
from Lemma 11.3.1, on identifying ( I + Z)M, with C and
(I Z)M, with D, that
+
+
whence
I-M:MI~O
VVV
Theorem 11.3.1 converts the problem of synthesizing a bounded real symmetric S(s) to the problem of synthesizing a constant bounded real symmetric
So. The latter problem may be tackled by, for example, converting it to an
impedance synthesis problem using the procedures of Chapter 8. In effect,
then, Theorem 11.3.1 solves the synthesis problem. A summary of the synthesis procedure is as follows:
SEC. 11.3
459
RECIPROCAL REACTANCE-EXTRACTION SYNTHESIS
I. Find a minimal realization {Fn,Go, H,, J ) for the prescribed m x m
S(s), which is assumed to be bounded real and symmetric.
2. Solve the bounded real lemma equations and compute another minimal
realization [F, G, H, J ) satisfying (11.3.6).
3. Calculate matrices Pw, G,, H
., and J, according to (11.3.10).
4. Compute the matrix P for which PF, = F',P and PC, = H., and from
P find matrices B, Y,and 2 with properties as defined in Lemma 11.3.2.
5. WithT= V'BIJa,calculate{FwF,,, G,,, HVx,J,) = (TF,T", TG,, (TL)'H,,
J,j and obtain the scattering matrix S, given by
where the appropriate normalization number for S. other than the first
m ports is determined by the choice of desired reactances, as explained
explicitly in the next step.
6. Choose any set of n, inductance values and n, capacitance values. Then
define thenormalizationnumbers r ,, I = 1,2, . . . ,n, at the (m 1)th
through (m -tn)th port by
+
r,,,=L,
I= 1,2,..., n,
and
+
I=n,+l,
..., n
The normalization numbers at the first m ports are the same as those
of S(s).
7. Give a synthesis N, for the constant symmetric scattering matrix S,
taking into account the appropriate normalization numbers of So.
Finally, terminate the last n ports of N, appropriately in n, inductors
and n, capacitors to yield a synthesis for the prescribed S(s).
Before illustrating this procedure with an example, we shall discuss the
special case of lossless reciprocal synthesis.
Lossless Reciprocal Synthesis
Suppose now that the m x m symmetric bounded real scattering
matrix S(s) is also lossless; i.e., S(-s)S(s) =I.Synthesis could be achieved
by wrying out the "preliminary" lossless section extractions of Chapter 8,
which for lossless matrices provide a complete synthesis. We assume here
that these extractions are not made.
Let IF, G, H, J ] be a minimal realization of the prescribed lossless S(s).
460
SCATTERING MATRIX SYNTHESIS
CHAP. 11
As pointed out in the previous section, we may take (F, G, H, J ] to be such
that
m e last equation follows from the symmetry of S(s).] Now the symmetry
property of S(ss) implies (see Chapter 7) the existence of a symmetric nonsingular matrix A, uniquely defined by the minimal realization {F, G, H, J]
of ~(s), such that
AF = F'A
AG= -H
One can then show that A possesses a decomposition
A
=
U'CU
(1 1.3.19)
in which U is an orthogonal matrix and
+
with n, n, = n = SIS(s)]. The argument is very similar to one in Section
10.2; one shows, using (11.3.17), that A-I satisfies the same equation as A,
so that A = A-' or A2 = I. The symmetry of A then yields (11.3.19).
Next we calculate another minimal realization of S(s) given by
IF,. G , , H , , J ] = {UFLI', UG,UH, J ]
(11.3.21)
with U as defined above. It follows from (11.3.17) and (11.3.18) that
(F,,G,, H,, J ] satisfies the following properties:
F,+F\=-H,H;
I-JZ=O
CG, = -Hz
-G,=H,J
Zfi, = F'J
(1 1.3.22)
J=J'
From the minimal realization IF,, G,, H,, J ] , a reciprocal synthesis for
S(s) is almost immediate. We derive the real constant matrix
SEC. 11.3
461
RECIPROCAL REACTANCE-EXTRACTION SYNTHESIS
where {F,, G,, H,, J,] are computed from IF,, GI,
H,, J ) by the set of transformations of (11.3.10), viz.,
Then it follows from (1 1.3.22) that
It is a trivial matter to check that (1 1.3.24) implies that S, defined by (1 1.3.23)
is symmetric and orthogonal; i.e., S = I. The ( m n) x (m n) real
constant matrix S,, being symmetric and orthogonal, is easily realizable with
multipart transformers, open circuits, and short circuits. Thus, by the reactance-extraction technique, a synthesis of S(s) follows from that for S..
(We terminate the last n, ports of the frequency independent network
synthesizing S. in capacitors, and all but the first m ports in inductors.)
+
+
Example Consider the bounded real scattering function
11.3.1
A minimal state-space realization of S(s) satisfying (11.3.6) is
Then we calculate matrices F,, C,, H,, and J, in accordance with
(11.3.10).Thus
These matrices of course readily yield a passive synthesis of S(s), as
the previous section shows; the synthesis however is a nonreciprocal one,
as may be verified easily.
We compute next a symmetricnonsingularmatrixP with PFw = FLP
and PC, = H.. The result is
462
SCATTERING MATRIX SYNTHESIS
CHAP. 21
for which
Therefore
Then we obtain [ F , G,, H , Jw] = {TF,T-',
lows:
TG,, (T'YH., J.1 as fol-
It can be readily checked that
is a constant scattering ma& satisfying both the reciprocity and tbe
passivity conditions. From this point the synthesis procedure is straightforward and will be omitted.
Problem Prove Lemma 11.3.1.
11.3.1
Problem From a minimal realization [F, G,H, J) of a symmetric bounded real
S(s) satisfying (11.3.6), derive a reciprocal passive reactance extraction
11.3.2
synthesis of S(s) under the condition that no further state-space basis
trwsformation is lo be applied to (F,G, H,Jf above. Note that the
transformations of (11.3.10) used to generate IFw,G., H,, J.) may be
used and that there is no restriction on the number of reactive elements
required in the synthesis. [Hint: Use the identity
Problem Using the method outlined in this section, give reciprocal syntheses for
11.3.3 (a) The symmetric lossless bounded real scattering matrix
SEC. 11.4
RESISTANCE EXTRACTION SYNTHESIS
463
(b) The bounded real scattering function
11.4
RESISTANCE !XTRACTION SCATTERING
MATRIX SYNTHESIS
In this section we present a state-space version of the classical
Belevitch resistance extraction synthesis of bounded real scattering matrices
[q.The key idea of the synthesis consists of augmenting a prescribed scattering matrix S(s) to obtain a Iossless scattering matrix s,(~) in such a manner
that terminating a lossless network N, which synthesizes S,(s) in unit resistors
yields a synthesis of the prescribed S(s) (see Fig. 11.4.1). This idea has been
discussed briefly in Chapter 8.
FIGURE 11.4.1.
Resistance Extraction.
Suppose that the required lossless scattering matrix S,(s) is given by
where S(s) denotes the rn x m prescribed scattering matrix and the dimension of S,,(s) is p x p, say. Here what we have done is to augment S(s) by
p rows and columns to obtain a lossless S,(s). There is however one important constraint-a result stated in Theorem 8.3.1-which is that the number
p must not be smaller than p = normal rank [I - S'(-s)S(s)], the minimum
number of resistors required in any synthesis of S(s). Now the IossIess constraint on s,(~), i.e.,
requires the following key equations to hold:
464
SCATTERING MATRIX SYNTHESIS
CHAP. 11
In the Belevitch synthesis and the later Oono and Yasuura synthesis [a], p
is taken to be p; i.e., the syntheses use the minimum number of resistors.
The crucial step in both the above syntheses lies in the factorization of I St(-s)S(s) to obtain S,,(s)and I - S(s)S'(-s) to obtain S,,(s)[see (1 l.4.2)].
The remaining matrix Sa,(s)is then given by -S,,(s)S1(-s)[S:,(-s)]-I
where precise meaning has to be given to the inverse. To obtain S,,(s)
and Sl,(s),Belevitch uses a Gauss factorization procedure, while Oono
and Yasuura introduce a polynomial factorization scheme based on the
theory of invariant factors. The method of Oono and Yasuura is far more
complicated computationally thqn the Belevitch method, but it allows the
equivalent network problem to be tackled.
An inherent property of both the Belevitch and the basic Oono and
Yasuura procedures is that 6[SL(s)]
> 6[S(s)]usually results; this means
that the syntheses usually require more than the minimum number of reactive
elements. However, the difficult Oono and Yasuura equivalent network t h e
ory does provide a technique for modifying the basic synthesis procedure to
obtain a minimal reactive synthesis. In contrast, it is possible to achieve
a minimal reactive synthesis by state-spacemeans in a simple fashion.
Nonreciprocal Synthesis
Suppose that an m X m bounded teal S(s) is to be synthesized.
We have learned from Chapter 8 that the underlying problem is one of finding
constant real matrices FL,G,, HL, and JL-which constitute a minimal realization of the desired SL(s)--such
that
1. The equations of the lossless bounded real lemma hold:
where P is symmetric positive definite.
2. With the partitioning
where G,, and HL,have m columns, and J,, has m columns and rows,
IF,, G,,, H,,, J,,) is a realization of the prescribed S(s).
SEC. 11.4
RESISTANCE EXTRACTION SYNTHESIS
465
If a lossless network N, is found synthesizing the lossless scattering matrix
H',(sI - FA-' G,, termination of all but the first m ports of this network
N, in unit resistances then yields a synthesis of S(s).
Proceeding now with the synthesis procedure, suppose that IF, G, H, J] is
a minimal realization of S(s), such that
J,
+
-G = HJ+ LW,
wgw.
I-J'J=
with ifand W, having the minimum number of p rows. (Note that p =
normal rank [T - S(-s)S(s)].)
We shall now construct an (m p) x (m p) matrix S,(s), or, more
precisely, constant matrices [FL,G,, H,, J,], which represent a minimal realization of S,(s).
We start by computing the (m p) x (m p) real constant matrix
+
+
+
+
The first m columns of J, are orthonormal, by virtue of the relation I - J'J
= W;Wo. It is easy to choose the remaining p columns so that the whole
matrix J, is orthogonal; i.e.,
Next, we define real constant matrices F,, G, and H, by
With these definitions, it is immediate from (11.4.4) that
hold, that is, Eqs. (11.4.3) hold with the matrix P being the identity matrix
I. Also it is evident from the definitions of F,, G,, HL,
and J, in (11.4.5) and
(11.4.7) that the appropriate submatrices of G,, H,, and J, constitute with
F, a realization, in fact, a minimal one, of the prescribed S(s).
We conclude that the quadruple IF,, G,, H,, Jd of (1 1.4.5) and (11.4.7)
defines a lossless scattering matrix for the coupling network N,. A synthesis
of the lossless S,(s), and hence a synthesis of S(s), may be obtained by available classical methods [9], the technique of Chapter 8, or the state-space
approach discussed in Section 11.2. The latter method is preferable in this
case, since one avoids computation of the lossless scattering matrix S,(s) =
466
J,
SCATTERING MATRIX SYNTHESIS
CHAP. 11
+
H:(sl- F,)"G,;and,
more significantly, the matricesF,, G,, H,, and
J, are such that a synthesis of the lossless network is immediately obtainable
-without any further need to change the state-space coordinate basis. This is
because fF,, G,, H,, J,] satisfies the lossless bounded real lemma equation
with P = f.
A quick study of the synthesis procedure will immediately reveal that
6[SL(s)]= dimension of F, = 6[S(s)]
This implies that a synthesis of SL(s) is possible that uses b[S(s)] reactive
components. Therefore, the synthesis of S(s) can be obtained with the minimum number of reactive elements. Moreover, the synthesis is also minimal
resistive. This follows from the fact that p resistors are required in the synthesis, while also p = normal rank [I- S'(-s)S(s)].
. .
Example
11-4.1
.
Consider the bounded real scattering function
A minimal realization of S(s) satisfying (11.4.4) is
with
Next, matrices IFL,GL,
HL,JL] are computed according to (11.4.5)
and (11.4.7); the result is
It can be checked then that S,(s), found from the above IFL, C,, HA,JLl,
is given by
SEC. 11.4
RESISTANCE EXTRACTION SYNTHESIS
467
and is a lossless bounded real matrix; i.e., I - Si(-s)S,(s) = 0. Furthermore, the (I, 1) principal submatdx is obviously s ( ~ )The
. remainder of
the synthesis is straightforward and will be omitted.
Note that in the above example S(s) is symmetric but S,(s) is not. This is
true in general. Therefore, the above method gives in general a nonreciprocal
synthesis. Let us now study how a reciprocal synthesis might be achieved.
Reciprocal Synthesis*
We consider now a prescribed m x m scattering matrix S(s) that
is bounded real and symmetric. The key idea in reciprocal synthesis of S(s)
is the same as that of the nonreciprocal synthesis just considered. Here we
shall seek to construct a quadruple IFp G,, H, JJ, which represents a minimal realiyation of an (m r ) x (m $- r) lossless s,(~), such that the same
two conditions as for nonreciprocal synthesis are fulfilled, and also the third
condition
+
The number of rows and columns r used to border S(s) in forming S,(s),
corresponding to the number of resistors used in the synthesis, is yet to be
specified. This of course must not be less than p = normal rank [IS'(-s)S(s)]. The third condition, which simply means that S&) = s;(~),
is
added here, since a reciprocal synthesis of S(s) follows only if the lossless
coupling network N, synthesizing S,(s) is reciprocal.
As we have stated several times, it is not possible in general to obtain
a reciprocal synthesis of S(s) that is simultaneously minimal resistive and
minimal reactive; with our goal of a minimal reactive reciprocal synthesis,
the number of resistors r may therefore be greater than the normal rank p
of [ I - S(-s)S(s)l. For this reason it is obvious that the method of
nonreciprocal synthesis considered earlier needs to be modified. As also
noted earlier, preliminary extractions can be used to guarantee that
I - S(-m)S(m) is nonsingular. We shall assume this is true here. It then
follows that normal rank [I- S'(-s)S(s)] = m.
Before we can compute a quadruple IFL,G,, H, J,), it is necessary to
derive an appropriate minimal realization {F,,G , , H,, J ) for the prescribed
S(s), so that from it (FA,G,, IT,, J,J may be computed.
The following lemma sets out the procedure f o this.
~
Lemma 11.4.1. Let {F, G, H, J ] be a minimal realition of
a symmetric bounded real S(s) such that, for some L and W,,
*The remainder of this s d o n may be omitted at a first reading.
468
CHAP. 11
SCATTERING MATRIX SYNTHESIS
F f F'= -HH1--LL'
-C= H J f LWo
I - J'J = WbW,
(1 1.4.4)
Let P be the unique symmetric matrix, whose existence is guaranteed by the symmetry of S(s), that satisfies PF = F'P and PC
- -H, and let B be positive definite and V orthogonal such
that P = BVXV'= VXY'B. (See Lemma 11.3.2.) Here X =
[I,,f(-I)&,] with n, 4- n, = n = 6[S(s)]. Define a coordinate
basis change matrix T by
and a new minimal realization of S(s) by
(F,, GI, H,,J ) = {TFT-', TG, (TF1)'H,J]
(11.4.11)
Then the following equations hold:
EF, = F',X
XG, = -HI
J = J'
(11.4.12)
and
A complete proof will be omitted, but is requested in the problems. Note
that the initial realization {F, G, H, J ] satisfiesthe special bounaed real lemma
equations, but in general will not satisfy (11.4.12). The coordinate basis
change matrix T, because it satisfies T'XT = P (as may easily be verified),
leads to (11.4.12) holding. The choice of a particular T satisfying T'ET = P,
rather than an arbitrary T , leads to the nonnegativity constraint (11.4.13).
The argument establishing (11.4.13) is analogous to a number of earlier
arguments in the proofs of similar results.
The construction of matrices of a minimal realization of SL(s)is our prime
task. For the moment however, we need to study a consequence of (1 1.4.12)
and (11.4.13).
Lemma 11.4.2. Let IF,, G , , H , ,J]andZ bedefinedas in Lemma
11.4.1. Then there exists an n, X n, nonnegative symmetric Q,
and an n, x n, nonnegative symmetric Q, such that
RESISTANCE EXTRACTION SYNTHESIS
SEC. 11.4
469
Proof. That the left side of (1 1.4.14) is nonnegative definite follows from (11.4.13). Nonsingularity of I - S'(-w)S(m) implies
nonsingularity of I - JZ.NOWset
Simple manipulations lead to
Now premultiply and postmultiply by X. The right-hand side is
seen to be unaltered, using (11.4.12). Therefore
which means that Q = Q, / Qz as required.
QQQ
With Lemmas 11.4.1 and 11.4.2 in hand, it is now possible to define a minimal realization of SL(s)of dimension (2m fn) x (2m n), which serves to
yield a resistance-extractionsynthesis. Subsequent to the proof of the theorem
containing the main result, we shall indicate how the dimensions of SL(s)
may sometimes be reduced.
+
Theorem 11.4.1. Let (PI,G,, HI,J] and X be defined as in
Lemma 11.4.1 and Q, and Q , be defined as in Lemma 11.4.2.
Define also
F,,= F,
G,, = [GII-H,(I - J1)JJa
- (G, H,JXI - Ja)-'I2 J;(-Q:fl)
+
HL = [HI;-(GI
+ H,J)(I-
+ Qrz]
(11.4.15)
JZ)-1/2/QiJz -j- Q;/l]
-J
0
.
.
Then
1. The top left m x m submatrix of S,(s) is S(s).
2. SL(s) is lossless bounded real; in fact,
F, -k Fi =. -H+H,'.
-GL= HLJL
I- J;J, = 0
'
. (1 1.4.16)
470
SCAnERING MATRIX SYNTHESIS
CHAP. 11
3. S,(s) is symmetric; in fact,
Proof. The first claim is trivial to verify, and we examine now
(11.4.16). Using the definitions in (11.4.15), we have
which is zero, by (11.4.14). Also,
+
HLJL= [ H , J - (G, HIJ)I HI([-- J2)'I2
(G, HIJ)(I - J3-"'J! 8:'' -k (-Q:I2)]
+
+
- -GL
Verification of the third equation in (11.4.16) is immediate.
Finally, (11.4.17) must be considered. The first equation in
(11.4.17) is immediate from the definition of FL and from
(11.4.12). The third equation is immediate from the definition of
J,. Finally,
2GL = [ZG, j -XH,(I - J31'2
- (ZG, XH, J)(I - Jz)-1/2J / (-Q:")
= 1-H, G,(l - JZ)Lf2
+
+ (-QYa)]
+
-t ( H , G,J)(I - JZ)-"z Ji (-Qin) -!-(-Q~"I1
= [-HI j (G, + HIJ)(I - J2)-li2j (-Qil')
-k (-Qkf2)]
--HL
-
VVV
The above theorem implicitly contains a procedure for synthesizing S(s)
with r = (m n) resistors. If Q is singular, having rank nS < n, say, it
becomes possible to use r = (m + n*) resistors. Define Rl and R, as matrices
having a number of rows equal to their rank and satisfying
+
Between them, R,and R, wilt have n* rows.Let R, haven: rows and& have
n2; rows. Define E* = Inl (-l)In:. Then instead of the definitions of
(11.4.15) for G,, H,, and J,, we have
+
RESISTANCE EXTRACTION SYNTHESIS
SEC. 11.4
[
:.- J = " = I
JL = (I - Jf)"=
...
471
. .
-J
0
z*
".
Thqe claims may be verified by working through the proof of Theorem
11.4.1 with but minor adjustments.
Let us now summarize the synthesis procedure.
1. ~ r o ai minimal realization {F, G, H , J ] of ~ ( s )that satisfies (11.4.4),
calculate the sy&netricnonsingular matrix P satisfying PF = F'P and
PG = -H.
2. Express P in the form BVZV', with B positive definite, V orthogonal,
and T: = [I,,,i
(-l)I,].
Then with T = V'B!/Z, form another minimal
realization {F,,GI,H I ,J ] of S(s) given by {TFT-I, TG, (T-')'H, J ) .
3. With Q, and Q, defined as in (1 1.4.14), define F,, G,, H,, and J, as in
(11.4.15), or by (11.4.18) and (11;4.19).
4. Synthesize S,,(s) by, for example, application of the preliminary extra&
tion procedures of Chapter 8 or via the reactance-extraction procedure,
and terminate in unit resistors all but the firstm ports of the network
synthesizing S,(s). Note that because the quadruple (F,, G , H,, J,)
satisfies (11.4.16) and (11.4.17), reactance-extraction synthesis is particularly straightfokard. No coordinate basis change is necessary, and
S,(s) itself does not
have
to be calculated.
.
.
Example
11.4.2
Consider the same bounded
real
+atte<i function
.
.
.. .
S(s) =
+
ZrZ+3s+5
as in Example 11.4.1. A minimal realization of S(s) that satisfies(11.4.4)
has been found to be.
Then the symmetric nonsingular matrix P, satisfying PF
PG = -H, is computed to be
= F'P
and
472
SCATTERING MATRIX SYNTHESIS
from which I: = 11 /
CHAP. 1 1
-11 and
Thus, another minimal realization of S(s) is
Using (11.4.14), we obtain
(11.4.15) then yield
Q1 = 0.58
and Qa = 0.32. The formulas
Synthesis from this point is straightforward.
:
Problem Rove Lemma 11.4.1.
11.4.1
Problem Given the bounded real scattering function
1f.4.2
give a nonrecipkcal and a reciprocal synthesis.
Problem Synthesize the symmetric bounded real scattering matrix
7.1.4.3
using the reciprocal synthesis procedure of this section.
.
.
Problem Show that Lemma 11.4.2and Theorem 11.4.1 will cover the case when
11.4.4 I J a is singuk, if (1 JZ)-' is replaced when it occurs by (1 J')s,
,
is., the pseudo inverse of I - JZ. [The pseudo inverse of a symmetric
matrix A is defined as A* = V'(B-1 f O)V, where V is an orthogonal
matrix such that A = V'(B 0)V with 3 n o n s ~ a r . 1
-
-
-
+
CHAP. 11
REFERENCES
473
REFERENCES
[ 1] H. J. C m m , "The Scattering Matrix in Network Theory," IRE Transactions
on Circuit Theory, Vol. CT-3, No. 2, June 1956, pp. 8&97.
.. .
.
[ 2 ] R. W. NEWCOMB,
"Reference Terminations for the Scattering Matrix,"
Electronics Letters, Vol. 1, No. 4, June 1965, pp. 82-83.
[ 3 ] S. VONGPANITLERD
and B. D. 0. ANDERSON,
"Scattering Matrix Synthesis
via Reactance Extraction," IEEE Transactions on Circuit Theory, Vol. CT-17,
No. 4, Nov. 1970, pp. 511-517.
[ 4 ] S. VONCIPANT~LERD,
"Reciprocal Lossless Synthesisvia State-Variable Techniques," IEEE Transactions on Circuit Theory, Vol. CT-17, No. 4, Nov. 1970,
pp. 630-632.
[ 5 j S. VONOPANITLE~,
"Passive Reciprocal Network Synthesi+The StateSpace Approach," Ph.D. 'Dissertation, University of Newcastle, Australia,
May 1970.
[ 6 ] F. R. GANTMACAER,
The Theory of Matrices, Chelsea, New York, 1959.
[ 7 ] V. B E L ~ C "Synth6se
H,
des rhseaux blectriques passifs An paires de bornes de
matrice de repartition pr6ditermin&," Rnnales de Tt~i~mmunications,
Vol. 6,
No. 11, Nov. 1951, pp. 302-312.
[ 81 Y. OONOand K. YAS~URA,
"Synthesis of Finite Passive 2n-Terniinal Networks
with Prescribed Scattering Matrices," Memoirs of the Faculty of Engineering,
Kyush~Universiiy, Vol. 14, No. 2, May 1956, pp. 125-177.
[ 9 ] R. W. NEWCOMB,
Linear Multipyf Synthesis, McGraw-HiU,.New York, 1966.
. .
.
. .
Transfer-Function Synthesis
Probably the most practical application of passive network
synthesis lies in the synthesis of transfer functions or trader-function
matrices rather than immittances, though, to be sure, immittance synthesis
is often a concomitant part of transfer-function synthesis. For example, in
matching an antenna to a transmitter output stage, a passive coupling
network is required, the dcsign of which may be based on impedance matching at the two ports of the coupling network.
As we shall see, state-space transfer-function synthesis is rather less
developed than immittance or scattering synthesis. Even the approaches we
shall present are somewhat disjoint, though the same could probably be said
of the many classical approaches to transfer-function synthesis. There is
probably scope therefore for great advancement, with practical payoffs, in
the use of state-space procedures for transfer-function synthesis. To give
some idea for the basis of this statement, we might note that much of classical
theory demands particular structures, for example, the lattice and ladder
structures. h any given situation, it may well be that neither of these stnrctures offers especially good sensitivity characteristics; i.e., the performance
of a filter using one of these structures may deteriorate severely in the face
of variation of one or more of the element values. Now workers in control
systems have found that the state-space description of systems offers a convenient vehicle for analysis and synthesis of prescribed sensitivity character-
SEC. 12.1
INTRODUCTION
475
istics. Thus, presumably, one could expectlow sensitivity network designs to
follow via state-space procedures.
We now describe the basic idea behind the transfer-function (and transfirfunction matrix) syntheses of this chapter. For convenience in describing,the
approach, but with no loss of generality, suppose that the input variables
are currents (or a single.current in the nonmatrix situation) and the output
variables are all voltages. We shall also suppose that any port at whichthere
is an exciting input variable is not a port at which we measure.an output
variable. Figure 12.1.1 illustrates the situation; here, I,is a vector of currents,
Network
. .
FIGURE 12.1.l. Response and Excitation Variables Associated with Different Ports.
the inputs, and V, a vector of voltages, the outputs. We are given the transferfunction matrix relating I, to V,, and we seek a network that synthesizes
this matrix. Generally, the transfer-function matrix is in state-space form.
The key step in the synthesis is to conceive of the prescribed transferfunction matrix as being a submatrix of a passive impedance matrix. Thus
if T(s) is the transfer-function matrix, we conceive of a positive real Z(s) such
that
In synthesizing Z(s) with a network, we thereby obtain a synthesis of T(s).
For observe that
where the port voltage and current vectors have been partitioned to conform
with the partition of Z(s). If now Z,(s) = 0, i.e., the second set of ports are
open circuit, we have
VA-9 = T(s)Zl(s)
(12.1.3)
This is, of course the relation required.
As will be shown, passage from T(s) to an appropriate positive real Z(s) is
476
TRANSFER-FUNCTION SYNTHESIS
CHAP. 12
not especially difficult. In fact, we can make life easy for ourselves in the
following way. First, instead of passing from the frequency-domain quantity
T(s) to a frequency-domain quantity Z(s), we shall pass from a state-space
description of T(s) to a state-space description of Z(s). Second, we shall
ensure that the matrices appearing in the state-space description of Z(s)
satisfy the equations of the positive real lemma with the matrix P of that
lemma equal to the identity matrix; i.e., if IF,, G,, H,, J=] is the stat-space
realization of Z(s) formed from a state-space realization of T(s), then
for some L, and W,,. As we know, Z(s) may easily be synthesized given a
realization satisfying (12.1.4). Therefore, we shall regard the transfer-functionmatrix synthesis problem as solved if a Z(s) can be found with a realization
satisfying (12.1.4), and we shall not carry the synthesis procedure further.
The above discussion has been based on the assumption that all input
variables have been taken to be currents and all output variables as voltages.
This assumption can be removed by embedding the transfer-function matrix
T(sfin a hybrid matrix X(s) rather than an impedance matrix. Consider,
for example, the case when T(s) is a scalar current-to-current transfer function. Then X(s) would be found as the hybrid matrix of a two port, with
equations
Because of the practical importance of gyratorless circuits, we can seek to
ensure satisfaction of a symmetry constraint on Z(s), in addition to the other
constraints. In view of our earlier discussion of reciprocal synthesis, we may
and shall regard the problem of reciprocal transfer function synthesis as
solved if we can embed the prescribed T(s) in a symmetric Z(s), the latter
with a state-space realization {F,,G,, Hz, Jz) satisfying (12.1.4).
In ail the synthesis procedures, as we have remarked, we begin with a
statespace realization, actually a minimal one, of the prescribed transferfunction matrix T(s). Cill this realization (F,, G,, H,, &). We shall generally
require the additional constraint
or even
If the constraint is not satisfied for an arbitrary realization, and it may well
477
INTRODUCTION
SEC. 12.1
not be, we can easily carry out a coordinate-basis change to ensure that it
is satisfied, provided that the following assumption holds.
Assumpti~n12.1 .I. The prescribed transfer-function matrix
T(s) is such that the poles of every element lie in Re [sl< 0, or
are simple on Re [s] = 0.
(The physical meaning of this restriction is obvious.) To see that (12.1.6)
can be achieved starting with an arbitrary realization, we proceed as follows.
Let (F,,,G,,, H,,, J,,) be an arbitrary realization. If F,, has all eigenvalues
in Re [s] < 0, take Q as an arbitrary positive definite matrix, and define P
as the positive definite solution of
With R any matrix such that R'R = P, take
Then (12.1.8) and (12.1.9) imply that
Notice that the computation of P and R is straightforward (see, e.g., 111).
If F,, has eigenvalues with zero real part, we must first find a matrix R,
such that
where ST,has all eigenvalues with negative real parts, and 5,. is skew. A
further transformation R, can be found so that
+
with 5,,
< 0. [This can be done by the technique described in Eqs.
(12.1.8) through (12.1.10) and the associated remarks.] Now set
R = R,R,
(12.1.11)
and define 4, G,, and H, by (12.1.9). It folIows from the properties of
ST,and 5,, that
(12.1.6)
FT -I-F> S 0
Note that if F, has zero real part eigenvalues, it is impossible to have
FT
+ F& < 0
(The lemma of Lyapunov would be contradicted in this instance.)
(12.1.7)
478
TRANSFER-FUNCTION SYNTHESIS
CHAP. 12
We shall now give a brief summary of the remainder of this chapter. In
Section 12.2 we consider two techniques for nonreciprocal synthesis, the
first being based on a technique due to Silverman 121. In Section 12.3 reciprocal synthesis is considered. In Section 12.4 we consider synthesis with
prescribed load terminations, discnssing results of [3].
One approach to state-space transferifunction synthesis that we do not
discuss in detail is that based on ladder network synthesis 14-7l. As was
noted in Section 4.2, the network of Fig. 12.1.2 has the state equations
SEC. 12.2
NONRECIPROCAL TRANSFER-FUNCTION SYNTHESIS
FIGURE 12.1.2.
479
Ladder Network with State Equation
(12.1.12).
The transfer function relating V(s) to I,,@) is of the form
and the a, in (12.1.13) are related to the element values of the circuit of
Fig. 12.1.2 by straightforward formulas set out in the references. Thus,
hardly with intervention of state-space techniques, (12.1.13) can be synthesized by a ladder network. Synthesis of transfer admittances with numerators
other than a constant can also be achieved, as discussed in [7], by using
the same basic network, but adjusting the points at which excitations are
applied or responses are measured.
Problem
12.1.1
Show that procedures for the synthesis of a transfer-function matrix
T(s)with T(m)infinite, but with lim,, T(s)lsfinite, can be derived from
procedures for synthesizing transfer-function matrices T(s) for which
T(m)is finite.
12.2
NONRECIPROCAL TRANSFER-FUNCTION
SYNTHESIS
In this section we shall translate into more precise terms the
transfer-function synthesis technique outlined in the previous section. We
suppose that we are given a transfer-function matrix T(s) with a minimal
realization [F,,G,, HT,
J,). We suppose moreover that
Without loss of generality, we shall assume that T(s) is a current-to-voltage
transfer-function matrix. We shall suppose too that FT is n x n, G, is n x p,
HTis rr x nz, andJ, is rn x p.
We now define quantities F?, G,, Hz,
and J, by
480
TRANSFER-FUNCTION SYNTHESIS
CHAP. 12
The quadruple IF,, G,, H,, J,] is a realization, actually a minimal one, of
a matrix Z(s) that. is @ $ m) X ( p m). If Z(s) is partitioned as
+
with Z,,(s) a p
X
p matrix, it is easy to verify that
Observe also, using (12.2.1) and (12.2.2); that
where the existence of L, folIows from (12.2.1) and the fact that F;= F,.
Equations (12.2.5) are the positive real lemma equations with the matrix
P equal to the identity matrix. Therefore, the synthesis problem is essentially
solved.
Notice that the computations required for transfer-function synthesis
are much simpler than for immittance synthesis. The positive red lemma
equations never have to be solved.
Notice also that if the synthesis is completed by, say, the reactanceextraction technique, the number of reactive elements used will be n, where
n is the dimension of F, and F,. Since F, is of minimal dimension, the
synthesis therefore uses a minimal number of reactive elements. It does not
necessarily use a minimal number of resistive elements, however. Problem
12.2.1 develops a synthesis for scalar transfer functions using the minimal
number of resistors, and also asks the student to look at such syntheses
for transfer-function matrices.
Example
Consider the Butterworth function
12.2.1
A minimal realization for T(s) is easily derived as
Notice that F, has nonpositive-definite symmetric part. Next, we construct
SEC. 12.2
NONRECIPROCAL TRANSFER-FUNCTION SYNTHESIS
481
To synthesize the impedance of which these matrices are a minimal
realization, we fonn
and find a nondynamic network of which this is the impedance matrix.
We note that
and
Thus the nondynamic network of impedance matrix M is that of
Fig. 12.2.1, and the two-port network having the prescribed transfer
function is obtained by terminating the network above at its last two
ports in unit inductances, as shown in Fig. 12.2.2.
As the reader will appreciate, the synthesis of a transfer-function matrix
has a high degree of nonuniqueness about it. The procedure we have given
has the merit of being simple, but ensurqs that T(s)is embedded in a positive
real Z(s) that is uniquely determined by T(s).Of course, there are an in6nity
of positive real Z(s) for which T(s) is a submatrix. So it may well be that
482
TRANSFER;FLlNCTlON SYNTHESIS
FIGURE 12.2.1.
CHAP. 12
Nondynamic Network of Example 12.2.1.
there are other synthesis procedures almost, if not equally, as convenient,
which depend on T(s) being embedded in a Z(s) different from that used
above. We shall examine one such procedure here.
In the procedure now to be presented we shall restrict consideration to
those T(s)for which all eigenvalues of the matrix F, in a minimal realization
of T(s)lie in Re [s] < 0. Thus we can ensure that
.
.
Without loss of generality, we shall take T(s) to be an m x p current-tovoltage transfer-function matrix, and we will embed it in a ( p + m ) x (p m)
. .
Z(s) of the form . . .
+
SEC. 12.2
NONRECIPROCAL TRANSFER-FUNCTION SYNTHESIS
FIGURE 12.2.2.
:O' + f l s
483
Synthesis of Transfer Function
+ 1)-'.
where J,, and J,, are constant. Notice that ihis implies that there is zero
transmission from the second set of ports to the first set; the voltage at the
first set of ports depends purely on the current at those ports and is independent of the current at the second set of ports. (It is easy to envisage
practical situations where this may be helpful.) Extension to the case when
the zero block is replaced by a nonzero block is also possible (see Problem
12.2.6).
We deiine a realization of Z(s) in the following way:
The matrix J, will be defined shortly. First, we shall define L,and W,,. With
484
TRANSFER-FUNCTION SYNTHESIS
CHAP. 12
T(s) an m x p matrix, let s = min {m,p}. With F, an n
any nonsingular n X n matrix such that
FT -I- F&= -SS'
x n matrix, let S be
(122.9)
(A triangular S is readily computed.) Then we define
Sl
(12.2.10)
+ F: = -L,L:
(12.2.11)
L,=[O...
Notice that
F,
The matrix L, has n rows and s f n columns; the matrix W,, will have
s n rows and p -I-m columns. Let us partition it as
+
where W , , is s x p. The dimensions of the remaining submatrices are then
automatically defined. Next, temporarily postponing definition of W , , and
W , , , we d e h e
W , , = -S-IG,
W,,
= S-'H,
(12.2.13)
Now observe that
It remains to define Jz,W , , , and W,,. We let W , , and W 1 2be any matrices
such that
Notice that JT is m x p, while W , , is s x p and W , , is s x m. Since s =
min {m,p}, there certainly exist W , , and W,, satisfying (12.2.15). Finally,
define
Equations (12.2.12), (12.2.13, and (12.2.16) imply that
SEC. 12.2
NONRECIPROCAL TRANSFER-FUNCTION SYNTHESIS
486
Equations (12.2.8) and (12.2.16) define the minimal realization of Z(s), with
the matrices W , in (12.2.16) defined by (12.2.9) through (122.15). Equations
(12.2.11), (12.2.14), and (12.2.17) show that the realization IF,G,, Hz,
Jz)
satisfies the positive real lemma equations with the P matrix equal to the
identity matrix. Equations (12.2.8) imply that
and, taken in conjunction with (12.2.16), this means that Z(s) has the desired
form of (12.2.7):
The synthesis procedure is therefore essentially complete. Notice that,
again, a minimal number of reactive elements is used. The number of resistive
elements will, however, be large (see Problem 12.2.4).
As a summary of the above synthesis procedure, the following steps are
noted:
1. Check that the prescribed current-to-voltage transfer function T(s)
fulfills the condition that poles of each element all lie in the strict left
half-plane, Re[s] < 0.
2. Obtain a minimal realization (F,, C,, H,, J,) of T(s)for which F, fF$
< 0.
3. Calculate any nonsingular matrix S such that F, Fi = - S S .
4. Compute matrices W,,,
W,,using (12.2.13) and W , , , W , , for which
(12.2.15) holds.
5. Obtain F,, C,, H, simply via (12.2.8) and J, from (12.2.16). Give
a synthesis by any known procedure for the positive real impedance Z(s)
for which IF,, G,, Hz,
J,] is a minimal realization. The network so
obtained synthesizes the prescribed T(s).
+
Example We take
again
'I2.2.2
For a minimal realization, we adopt
486
CHAP. 12
TRANSFER-FUNCTION SYNTHESIS
so that, following (12.2.81,
A matrix S satisfying (12.2.9) is given by
and so, with s = 1,
and W,, aregiven from (12.2.13) by
Next, Wzl
. .
Matrices Wlz
and WI1
satisfyin&' (12.2.15) are
w,,= w,z=o
From (12.2.16) this leads to
With the quantitie F,,C, Hz, and J, known, a synthesis of Z(s) can
be achieved by, for example, the. reactance-extraction approach. We
form the constant impedance matrix
SEC. 12.2
NONRECIPROCAL TRANSFER-FUNCTION SYNTHESIS
487
We have then
and
A synthesis for the impedance matrixM is shown in Fig. 12.2.3, while
a final circuit realizing the prescribed transfer function is given by
Fig. 12.2.4.
488
TRANSFER-FUNCTION SYNTHESIS
FIGURE 12.23, Nondynamic Network of Example 12.2.2.
Problem This problem investigates synthesis using a minimal number of resistive
12.2.1
elements. Suppose that T(s) is a scalar transfer function, with minimal
realization [FT,,gT,, &,,jT,]and with all eigenvdues of FT, in Re tsl
< 0. Because [F,,, &,1 is a completely observable pair, there exists a
positive definite symmetric P such that
PFT,
+ F$,P = -h,hk,
Carry out a coordinate-basis change, derived from P,and show that T(s)
can be embedded in a 2. x 2 positive real matrix Z(s)with minimal reali-
SEC. 12.2
489
NONRECIPROCAL TRANSFER-FUNCTION SYNTHESIS
-
10
2
1
1
1'
--
2'0
FIGURE 12.2.4. Synthesis of Transfer Function
k(sZ f l s
I)-'.
zation (Fz,G,, Hz,
J,) and such that (12.2.5) is satisfied with L, a vector.
Conclude the existence of a synthesis using only one resistor. Show also
that there is no synthesis containing no resistors. What happens for
matrix T(s)?
+
+
Problem Show that if T(s)is a transfer-functionmatrix such that each element has
12.2.2
polethat are always simple and with zero real part, then T(s) can be
synthesized by a lossless network.
490
TRANSFER-FUNCTION SYNTHESIS
CHAP. 12
Problem Synthesize the scalar voltageto-voltage transfer function
12.2.3
Problem Compute the number of resistors resulting from the second synthesis
procedure presented.
Problem Synthesize the scalar voltage-to-voltage transfer function
12.2.4
12.2.5
using the second method outlined in the text.
Problem Suppose that a passive synthesis is required of two current-to-voltage
12.2.6
transfer-function matrices T,z(s)
and T,,(s). These transfer-function
I&), V,(s),
and VL(s)according to the
matrices relate variables Il(s),
equations
V I (=~T~i(s)Iz(s) VA-9 = TzI(s)~,(s)
with I,(s)and V,(s)being associated with the same set of ports, and
likewise for iz(s) and Vl(s).Develop a synthesis procedure following
ideas like those used in the second procedure discussed in the text.
By taking T,,(s)= T',,(s)can
, one embed TI&) in a symmetric Z(s)?
12.3
RECIPROCAL TRANSFER-FUNCTION
SYNTHESIS*
In this section we demand that the network synthesizing a prescribed transfer function be reciprocal. We shall start with a minimal realization &, G,, H,, Jr of a prescribed m x p transfer-function matrix T(s),
assumed for convenience to be current to voltage. We shalt moreover suppose
that
The strict inequality is equivalent to a requirement that the poles of each
element of T(s) should all lie in Re [s] < 0, rather than some poles possibly
lying on Re [s]= 0. A straightforward way to remove this restriction is not
known.
As the fist step in the synthesis we shall change the coordinate basis so
that in addition to (12.3.1), we have
*This section may be omitted at a first reading.
SEC. 12.3
RECIPROCAL TRANSFER-FUNCTION SYNTHESIS
+
491
where Z is a diagonal matrix of the form I., (-1,J. Then we shall construct a minimal realization {F,, G,, Hz,
J,) of a O, $ m) x @ m) positive
real impedance matrix Z(s) satisfying the equations
F,
+
+ F: = -L,L:
G, = Hz - LzW,,
J,
+
(12.3.3)
J: = WLWo,
and
ZF*= F:z
CG,= -Hz
J. = J:
(12.3.4)
Equations (12.3.3) imply that a passive synthesis of Z(s) may easily be found,
while (12.3.4) imply that this synthesis may be made reciprocal. Of course,
Z(s) has T(s) as a submatrix.
Naturally, the calculations required for reciprocal synthesis are more
involved than those for nonreciprocal synthesis. However, at no stage do the
positive real lemma equations have to he solved.
We now proceed with the first part of the synthesis. Suppose that we have
a minimal realization {F,,, G,,, H,,, J,,} of T(s) such that
while ZFT,# Fh,Z. Lemma 12.3.1 explains how a coordinate-basis change
may be used to derive FT satisfying (12.3.1) and (12.3.2).
Lemma 12.3.1. Suppose that FT, satisfies Eq. (12.3.5). With A
any matrix such that AFT, = F&,A,there exist a positive dehits
symmetric B, an orthogonal V, and a matrix Z = I,, (-1.3
such that
+
A 4- A' = BVZV'
=
VZV'B
(12.3.6)
Further, with
and
the matrix Fr satisfies (12.3.1) and (12.3.2).
This lemma is a special case of Theorem 10.3.1, and its proof will not be
included.
492
TRANSFER-FUNCTION SYNTHESIS
CHAP. 12
With Lemma 12.3.1 in hand, we can now derive a minimal realization of
an appropriate impedance Z(s), such that T(s) is a submatrix and such that
Eqs. (12.3.3) and (12.3.4) are satisfied by the matrices of the state-space
realization of Z(s).
The matrices 1P,, G,, and Hzare defmed by
We shall define J, shortly. Notice that, except for the equation J. = Jl,
the reciprocity equations (12.3.4) hold.
To define J,, we need first to define L,and W,,. Recalling that FT ',+ Fh is
negative definite, let S be an n x n matrix (where FT is n x n) such that
(It is easy to compute such an S, especially a triangular S.)Let s = min ( p , m},
and set, similarly to the procedure of Section 12.2,
L,=[O.,,
s]
(12.3.11)
+ F: = -LJ:
(12.3.12)
Observe that
F,
as required. Next, again following Section 12.2, we form W,,, which is a
matrix of (s $ n) rows and (g m) columns. We shall identify submatrices
of WW,,
according to the partitioning
+
where W,, is s x p and the dimensions of the other submatrices are automatically defined. Now we set (in wntrast to Section 12.2)
These definitions are made so that the equation
holds. To check this, observe from (12.3.11) and (12.3.13) that
SEC. 12.3
RECIPROCAL TRANSFER-FUNCTION SYNTHESIS
493
Next, as in Section 12.2, we define W,, and W,, as any pair of matrices
satisfying
W'l,Wll = 2J,
- W\,W,,
(12.3.17)
Notice that J is m X p, while W , , and W,,are, respectively, s x p and
s X m. With s = min { p , m), it is always possible to find W,,and W I z
satisfying this equation. Finally, we define
This ensures that J, is symmetric, that
J,
+ J: = wb,wo,
(12.3.19)
and finally that the 2-1 entry of J, is, by (12.3.17), precisely J,. This fact,
together with Eqs. (12.3.9), implies that T(s) is them x plowerleft submatrix
of the (p $. m ) X (p m) matrix Z(s) with realization IFz, G,, Hz,
J,}.
Equations (12.3.12), (12.3.16), and (12.3.19) jointly amount to (12.3.3), while
the reciprocity equations (12.3.4) have also been demonstrated. This completes a statement of the synthesis procedure.
Notice that if the remainder ofthe synthesisproceeds by, say, the reactanceextraction approach, n reactive elements will be used, where n is the size of
F, and F,. Since the dimension of F, is minimal, this means that T(s) is
synthesized with the minimum number of reactive elements.
To summarize, the synthesis procedure for a current-to-voltage transferfunction matrix is now given:
+
1. Examine whether poles of each element of the prescribed T(s) lie in
the left half-plane Re [s] < 0.
2. Compute a minimal realization IF,,, G,,, H,,,
J,J of T(s) with F,,
-+ F;,< 0.
3. With the coordinate-basis change R found according to Lemma 12.3.1,
another minimal realization {F,, G,, H,, J,) of T(s) is computed.
4. Calculate a nonsingular matrix S such that F,
P; = -SS.
5. From (12.3.14) and (12.3.15) calculate matrices WZV,,
and W,,, respectively; then obtain any pair of matrices W, and W,, for which (12.3.17)
holds.
6. Calculate J, from (12.3.18) and F,, G,,and H, from (12.3.9).
7. Give a synthesis for the positive real impedance matrix Z(s) for which
{Fz,G,, H,, J,] is a minimal realization.
+
494
Example
CHAP. 12
TRANSFER-FUNCTION SYNTHESIS
we shall synthesize the current-to-voltage transfer function
12.3.1
t
T(s)= s2 + I/ZS + 1
AS a
minimal.realization, we take
Observe that 'CF,= F Z ,where
Following the procedure specified, we take
A matrix S satisfying FT
+ F; = -SS'
is given by
whence
and
WZz= S-I(I:f
Using (12.3.17),
we see that W,,= W l l
= 0 is adequate, so that by
(12.3.18).
SEC. 12.3
RECIPROCAL TRANSFER-FUNCTION SYNTHESIS
495
with
Now we synthesize Z(s) with minimal realization {F,,G,,Hz, J,].
This requires us to form the hybrid matrix of a nondynamic
network as
The upper 3 x 3 diagonal submatrix
corresponds to an impedance matrix, while the lower diagonal submatrix
[l/J7J is an admittance matrix. The network of Fig. 12.3.1 gives a synthesis of the hybrid matrix M. Terminating this network at port 3 in a
unit inductor and at port 4 in a unit capacitor then yieIds a synthesis of
Z(s), as shown in Fig. 123.2.
Problem Give a reciprocal synthesis of the voltageto-voltage transfer function
12.3.1
Problem Can you extend the synthesis technique of this section to cover the case
12.3.2 when F, may have pure imaginary eigenvalues? (This is an unsolved
problem.)
Problem Show that if F, has all real eigenvalues, and that if T(s) is a current-to12.3.3 voltage transfer-function matrix, then there exists a synthesis using only
inductors, resistors, and transformers.
496
TRANSFER-FUNCTION SYNTHESIS
CHAP. 12
Open-Circuit
Ports
FIGURE 12.3.1. Nondynamic Network of Example 12.3.1.
FIGURE 12.3.2. Reciprocal Synthesis of the Transfer
Function l(sZ 0 s I)-'.
+
+
Problem Give a reciprocal synthesis of the current-to-voltage transfer function
12.3.4
using no capacitors. Use the procedure obtained in Problem 12.3.3.
SYNTHESIS WITH LOADING
SEC. 12.4
12.4
497
TRANSFER-FUNCTION SYNTHESIS
WITH PRESCRIBED LOADING
We shall restrict our remarks in this section to the synthesis of
current-to-voltage transfer-function matrices. Extension to the voltage-tovoltage and other cases is not difficult.
The general situation we face can be described with the aid of Fig. 12.4.1.
(If we were dealing with, for example, voltage-to-current transfer functions,
a-m2
Unknown
Network
-
FlGURE 12.4.1.
Synthesis Arrangement with R, and R2
Known.
Fig. 12.4.2 would apply.) The new feature introduced in Fig. 12.4.1 is the
loading at the input and output sets of ports. (Note that I, will be a vector
and R , a diagonal matrix if the number of ports at the left-hand side of the
unknown network exceeds one.)
Unknown
Network
Synthesis Arrangement for Voltage-toCurrent Transfer Function.
FIGURE 12.4.2.
The synthesis problem is to pass from a prescribed T(s), R,, and R , to a
network N such that the arrangement of Fig. 12.4.1, with Nreplaciug the
unknown network, synthesizes T(s), in the sense that
As soon as dissipation is introduced into the picture, it makes sense not
to consider T(s) with elements possessing imaginary poles on physical
TRANSFER-FUNCTION SYNTHESIS
498
CHAP. 12
grounds. Therefore, we shall suppose that every entry of T(s)is such that all
its poles lie in Re [s] < 0.
Because of the way we have defined the loading elements-as resistorswe rule out the possibility of R, or R, being singular. (This avoids pointless
short circuiting of the input current generators or the output.) We do permit
R i l and R;' to become singular, in the sense that diagonal entries of R , and
R, may become infinite; reference to Fig. 12.4.1 will show the obvious
physical significance.
As we shall see, if either of R;' or Ri' is zero, synthesis is always possible,
while if both quantities are nonzero, synthesis is not always possible. A
simple argument can be used to illustrate the latter point for the single-input,
single-output case. Observe that the maximum average power flow into the
coupling network obtainable from a sinusoidal source is R,I:/4, where I,
is the root-mean-squarecurrent associated with the source. Clearly, the power
disGpated in R, cannot exceed this quantity, and so if V , is the root-meansquare voltage at the output,
It follows that
for all o.If a transfer function is specified together with values for R, and
R, such that this inequality fails for some w, then synthesis will be impossible.
Because of the importance of the cases Ryl = 0 and Ryl = 0, we shall
consider these cases separately. Of course, the treatment in earlier sections
has been based on R;' = R i l = 0.Here we shall demand that one of R;I
and Ri' be nonzero.
Absence of Input Loading
First, we shall suppose that R i l = 0 (see Fig. 12.4.3). Before
explaining a synthesis procedure, it proves advantageous to do some analysis.
Let us suppose that the unknown network has an impedance-matrix description identical to the impedancematrix description we derived in Section 12.2
in the first synthesis procedure. In other words, we suppose that for some
{RT, fiT,jT}we have
e,,
and the unknown network is described by the state-space equations
SYNTHESIS WlTH LOADING
SEC. 12.4
FIGURE 12.4.3.
499
Arrangement for Output Loading Only.
aT,
If R i l were zero, and if also the quadruple {p,, kT, j,] werearealitation
of T(s), then (12.4.4) would define state-space equations for a readily synthesizable positive real Z(s), the synthesis of which would also yield a synthesis of T(s). .
From Eqs. (12.4.4) and the relation
it is not difficult to derive state-space equations for the impedance matrix
of the unknown network when terminated with R,. Substitution of (12.4.5)
into (12.4.4) yields these equations as
The transfer-function matrix relating I, to V, (with I, = 0) is seen by inspection to be
This concludes our analysis, and we turn now to synthesis. In the synthesis
problem we are given T(s),and we require the unknownnetwork. Theanalysis
problem suggests one way to do this. We could start with a minimal reali%ation (F,,G,, H,, JT] of T(s). Then we could try to find quantities [E,, G,,
$,.fT]such that (12.4.7) holds. One obvious choice of suchquantities that
500
CHAP. 12
TRANSFER-FUNCTION SYNTHESIS
we might try is
Then,provided p, has nonpositive definite symmetric part, we could synthesize a network defined by the state-space equations (12.4.4), appropriately
terminate its right-hand ports in R,, and thereby achieve a synthesis of the
desired T(s) with an appropriately loaded network.
The question rcmains as to whether pT will have nonpositive definite
symmetric part. Using (12.4.8), we see that we need to know if there is a
minimal realization IF,, G,, H,, J,} of the prescribed T(s) such that
If (12.4.9) can be satisfied, synthesis is easy. Let us now show how (12.4.9)
can be achieved. Let (F,,, G,,, H,,, J,,) be an arbitrary minimal realization
of T(s) for which (12.4.9) does not hold. Form the equation
where Q is nonnegative definite, and such that this equation is guaranteed
to have apositive definite solution. [Note: Conditions on Q are easy to find;
if R;' is nonsingular, the complete observability of (F,,, H,,) guarantees by
the lemma of Lyapunov that any nonnegative definite Q will work. In any
case, any positive definite Q will work. See also Problem 12.4.1.1
Let S be any matrix such that
and set
FT = SFT,S-!
G, = SG,,
H, = (S-')'H,,
With these definitions, Eq. (12.4.10) implies
This inequality is the same as (12.4.9).
Example
12.4.1
We take
T(s) = s ~ + f1l s - t l
(12.4.12)
SYNTHESIS WITH LOADING
SEC. 12.4
501
with
and suppose that R, = 2. Observe that
Therefore, no coordinate-basis transformation is necessary. Using
(12.4.8), we have
Using Eqs. (12.4.4), we see that a state-spacerealization of the impedance
of the "unknown network" of Fig. 12.4.3 is [F,, C,, Hz, J,), where
The impedance can be synthesized in any of the standard ways.
Absence of Output Loading
Now we suppose that R i Lis nonzero, but R;' = 0. We shall see
that the synthesis procedure is very little different from the case we have
just considered, where R;' = 0 but R i Lis nonzero. Figure 12.4.4 now applies.
As before, we analyze before we synthesize. Thus we suppose the unknown
q-E2
-
FIGURE 12.4.4.
Unknown
Network
-
Arrangement for Input Loading Only.
502
TRANSFER-FUNCTION SYNTHESIS
CHAP. 12
network has state-space equations
i = Frx
+ [re, %I
(12.4.13)
Also, we require
F,+&<O
We now have
1, = I , - R;'V,
(12.4.14)
which leads to the following state-space equations relating the input variables
I, and I, to the output variables V, and V,:
The transfer function relating I, to V2is obtained by setting I, = 0, and is
For the synthesis problem, we shall be given a minimal realization {F,, G,,
H,, J,] of T(s). A synthesis of T(s) will follow from this minimal realization
if we set
and synthesize (12.4.13) as the "unknown network," with the procedure
working only if (12.4.3) holds. From (12.4.17) it follows that (12.4.3) is
equivalent to
F,
+ F&+ 2G,R;'CT < 0
(12.4.18)
Though this condition is not necessarily met by an arbitrary minimal realization of T(s),the means of changing the coordinate basis to ensure satisfaction
of (12.4.18) should be almost clear from the previous discussion dealing with
the case of output loading. The equation replacing (12.4.10) is
FT,P
+ PF;, = -2G,,R;'Gk,
-Q
(12.4.19)
SEC. 12.4
SYNTHESIS WITH LOADING
503
With S satisfying SS' = P, we set F, = S-'F,,?, G, = S"G,,, and
H, = S'H,,. Essentially, the remainder of the calculat~onsare the same, and
we will not detail them further.
Input and Output Loading
Now we suppose that both R;-' and R i l are nonzero. We shall
also suppose that T(s) is such that T(oo) = 0, because the calculations
otherwise become very awkward. This is not a significant restriction in
practice, since most transfer-function matrices whose syntheses are desired
will probably meet this restriction.
Our method for tackling the problem is largely unchanged. With reference
to Fig. 12.4.1, we assume that we have available a state-space description of
the impedance of the unknown network of the form
Observe that in (12.4.20) there is no direct feedthrough term; i.e., V, and
V, are linearly dependent directly on x, rather than x, I,, and f2.As we shall
see, this represents no real restriction. Of course, as before, 2, is constrained
by
(12.4.21)
3; < 0
r;.+
Using the equations
it is not hard to set up state-space equations in which the input variables
are I, and I,. These will of course be state-space equations of theimpedance
of the network comprising the unknown network and its terminations. The
actual equations are
We see from these equations that the transfer-function matrix T(s) relating
I,(s) to V,(s) is
(12.4.24)
T(s)= &[SI - ( f T - C?,R;'@:. - l?T~s'8T)]-16,
504
CHAP. 12
TRANSFER-FUNCTION SYNTHESIS
The synthesis problem requires us to proceed from a minimal realization
{F,, G,, H,} of T(s) to a network synthesizing T(s). Study of (12.4.24) shows
that this will be possible by reversing the above procedure if we define
6, = G,
H, = H,
pT = Fr + G,Rr1GL
+ H,Ri'H:
(12.4.25)
and if the resulting 2, satisfies (12.4.21), i.e., if
F,
+ F& 4-2G,R;'Gr + 2H,Ri1H'
7
-0
<
(12.4.26)
In general, (12.4.26) will not be satisfied by the matrices of an arbitrary
minimal realization. So, again, we seek to compute a coordinate-basis change
that will take an arbitrary minimal realization {F,,, G,,, H,,} to this form.
If such a coordinate-basis change can be found, then the synthesis is straightforward. The following lemma is relevant:
Lemma 12.4.1. Let (F,,, G,,, H,,) be an arbitrary minimal realization of T(s). Then there exists a minimal realization IF,, C,, H,}
satisfying (12.4.26) if and only if there exists a positive-definite
symmetric P such that
PF,,
+ F&,P+ 2PG,,R;'G>,P + 2H,,Ri1H:,
<0
(12.4.27)
Proof. Here we shall only prove that (12.4.27) implies the existence of {F,, G,, Hr] satisfying (12.4.26). Proof of the converse
is demanded in a problem. Let P = S'S, where S is square;
because P is nonsingular, so is S. Then set
F, = SF,,S-'
C, = SG,,
H,
= (S-')'H,,
(12.4.28)
Equation (12.4.26) follows by multiplying both sides of (12.4.27)
oti the left by (S-I)' and on the right by S-'.
VVV
The significane of the lemma is as follows. If, given an arbitrary minimal
realization (F,,, G,,, H,J of T(s),a nonsingular matrix P can be found satisfying (12.4.27), then the synthesis problem is, in essence, solved. For from P
we can compute S, and then F,, G,, and H, according to (12.4.28). Next,
$,.,
and
follow from (12.4.25), with the fundamental restriction
(12.4.21) holding. We can then synthesize the unknown network using its
impedance description in state-space terms of (12.4.20).
Lemma 12.4.1 raises two questions: (I) when can .(12.4.27) be solved, and
(2) how can a solution of (12.4.27) be computed? To answer these questions,
let us define the transfer-function matrix a s ) by
eI',
SEC. 12.4
SYNTHESIS WITH LOADING
505
Problem 12.4.4 requests a proof of the following result.
Lemma 12.4.2. With Z(s) defined as in (12.4.29), there exists
a positive definite P satisfying (12.4.27) if and only if Z(s) is a
bounded real (scattering) matrix.
The matrix P in the bounded real lemma equation for X(s) is the same as
the matrix P in (12.4.27). Thus Lemma 12.4.2 answers question 1, while
earlier work on the calculation of solutions to the positive and bounded real
lemma equations answers question 2.
It is worthwhile rephrasing the scattering matrix condition slightly. In view
of the fact that F,, has all its poles in Re [s] < 0, it follows that Z(s) is a
bounded real scattering matrix if and only if
I- (-jo)(jw)
I - 4R;'/%&,(-joI
0
for all real a,
- F~,)"H,,R;'H:,(jwI
- F,,)-1G,,Ri1/2
20
for all real o
or
4
>
--
j R 1 T ( j o
for all real w
(12.4.30)
This is a remarkable condition, for it is the matrix generalization of condition (12.4.2) that we derived by simple energy considerations. Therefore,
even though we have assumed a specialized sort of unknown network (by
demanding that its impedance matrix have a certain structure), we do not
sacrifice the ability to synthesize any transfer-function matrix that can be
synthesized. Further, we see that the class of synthesizable transfer-function
matrices is readily described by simple physical reasoning.
In contrast to synthesis procedures presented earlier in this chapter, the
procedure we have just given demands as muchcomputation as animmittance
or scattering matrix synthesis. Notice though that a minimal number of
reactive elements is used, just as for the earlier transfer-function syntheses.
Bccept in isolated situations, the network resulting from use of this synthesis
procedure will be nonreciprocal. A reciprocal synthesis procedure has not
yet been developed.
Example
12.4.2
We take again
506
TRANSFER-FUNCTION SYNTHESIS
CHAP. 12
with minimal realization
Suppose that a current-to-voltage synthesis is desired with R, = R,
= 1. Notice that
>0
for all real w
Therefore, synthesis is possible.
Because of the low dimensionality of T(s),it is easy to soIve (12.4.27)
directly. If P = (p,), and the equality sign is adopted in (12.4.27), we
obtain-by equating to zero components of the mauix on the left side of
(12.4.27&&
equations
from which
Next, a matrix S satisfying S'S = P i s readily found as
Using (12.4.28). we define a minimal realization for T(s) as
Finally, we have Grand
fir identical with GT and HT,while
SYNTHESIS WITH LOADING
SEC. 12.4
507
The unknown network therefore has impedance matrix Z(s) with
state-space realization
and synthesis follows in the usual way.
Problem Consider the equation
12.4.1
PFT,
+ Fk,P = - 2 H T , R t 1 H k , - Q
with all eigenvalues of F,, possessing negative real parts. Show that P
is positive definite if and only if [F,,, [H,,Ri'/' Dl) is completely
observable, where D is any matrix such that DD' = Q. (Hint:Vary the
standard proof of the lemma of Lyapunov.)
Problem Synthesize the voltage-to-voltage transfer function
12.4.2
for the following two cases:
(a) There is a resistor in series with the source of 2 Q and an opencircuit load.
(b) There is a load resistor of 2 and no source resistance.
Problem Complete the proof of Lemma 12.4.1.
12.4.3
Problem Let {F,,, G,,, H,,] be a minimal realization of T(s), and let [F,,,
& T G , , R r l / ' , f i H , , R ; 1 / 2 ] be a minimal realization of a matrix transfer function T(s). Using the bounded real lemma, show that X(s) is a
bounded real scattering matrix if and only if there exists a positive
definite P such that
1.2.4.4
Problem Attempt to obtain reciprocal synthesis techniques along the lines of the
nonreciprocal procedures of this section.
12.4.5
508
TRANSFER-FUNCTION SYNTHESIS
CHAP. 12
REFERENCES
[ 11 F. R. GANTMACHER,
The Theory of Matrices, Chelsea, New York, 1959.
[21 L. M. SILVEAMAN,''Synthesis of Impulse Response Matrices by Internally
Stable and Passive Realizations," IEEE Transactions on Circuit Theory, Vol.
CT-15, No. 3, Sept. 1968, pp. 238-245.
[ 3 ] P. DEWILDE,
L. M. SILVERMAN,
and R. W. NEWCOMB,
"A Passive Synthesis
for Time-Invariant Transfer Functions," IEEE Transactions on Circuit Theory,
Vol. CT-17, No. 3, Aug. 1970, pp. 333-338.
[4] R. YARIAOADDA, "An Application of Tridiagonal Matrices to Network
Synthesis," SIAMJournal of Applied Mathematics, Vol. 16, No. 6, Nov. 1968,
pp. 1146-1162
[ 5 ] I. NAVOT,"The Synthesis of Certain Subclasses of Tridiagonal Matrices with
Prescribed Eigenvalues," SIAMJournal of Applied Mathematics, Vol. 15, No. 12,
March 1967.
[6] T. G.MARSHALL,"Primitive Matrices for Doubly Terminated Ladder Networks," Proceedings 4th Allerton Conference on Circuif and System Theory,
University of Illinois, 1966, pp. 935-943.
[7] E.R.FOWLER
and R. YARIAGAD~A,
"A StaWSpace Approach to RLCTTwoPort Transfer Function Synthesis," IEEE Transactions on Circuit Theory, Vol.
CT-19,No. 1, January 1972, pp. 15-20.
Part VI
ACTIVE RC NETWORKS
Modem technologies have brought us to the point where
network synthesis using resistors, capacitors, and active
elements, rather than all passive elements, may be preferred in
some situations. In this part we study the synthesis of transfer
functions using active elements, with strong emphasis on
state-space methods.
Active State-Space Synthesis
13.1
INTRODUCTION TO ACTIVE RC
SYNTHESIS
In this chapter we aim to introduce the reader to the ideas of
active RC synthesis, that is, synthesis using resistors, capacitors, and active
elements. The trend to circuit miniaturization and the advent of solid-state
devices have led to a rethinking of the aims of circuit design, w~ththe conclusion that often the use of active devices is to be preferred to the use of
inductors and transformers.
Here we shall be principally concerned with the use of state-space methods
in active RC synthesis, and this leads to the use of operational amplifjers as
active elements. Other active devices can be and are used in active RC synthesis such as the negative resistor, controlled source, and negative impedance converter. A few brief remarks concerning these devices will be offered
later in this chapter, but for any extensive account the reader should consult
any of a number of books, e.g., Il-l].
In Section 13.2 we introduce the operational amplifier in some detail, and
illustrate in Section 13.3 its application to transfer-function-matrix synthesis,
paying special attention to scalar transfer function synthesis. Section 13.4
carries the ideas of Section 13.3 further by describing a circuit, termed the
biquad, that will synthesize an almost arbitrary second-order transfer function. Finally, in Section 13.5 we discuss briefly ofher approaches to synthesis,
611
512
ACTIVE STATE-SPACE SYNTHESIS
CHAP. 13
including immittance synthesis as distinct from transfer-function synthesis.
In the remainder of this section, meanwhile, we offer brief comments on
various practical aspects of active synthesis.
In our earlier work on passive-circuit synthesis we paid some attention to
seeking circuits containing a minimal number of elements, especially a minimal number of reactive elements. Likewise, early active RC syntheses were
concerned with minimizing the number of reactive elements; however, there
was also concern to minimize the number of active elements, and many
syntheses only contained one active element. But the decreasing cost of
active elements, either in discrete or integrated form, coupled with advantages
to be derived from using more than one, has reduced the emphasis on minimizing the number in a circuit. At the same time, the desire to integrate
circuits has led to the requirement to keep the total value of resistance down.
Therefore, notions of minimality require rethinking in the context of active
KC synthesis. For further discussion, see especially 11-31.
One of the major problems that has had to be overcome in active-circuit
synthesis (and which is not a major problem in passive synthesis) is the sensitivity problem. It has been found in practice that the input-output performance of some active circuits is very sensitive to variations in absolute values
of the components and to parasitic effects, including phase shift. These problems tend to be particularly acute in circuits synthesising high Q transfer
functions. To a certain extent the problems are unavoidable, being due simply
to the noninclusion of inductors in the circuits (see, e.g., [I, 51). Many techniques have, however, been developed for coping with the sensitivity problem;
a common one in transfer-function synthesis is to synthesize a transfer function as a cascade of syntheses of first-order and second-order transfer f u n s
tions. Whether a circuit is discrete or integrated also has a bearing on the
sensitivity problem; in an integrated circuit there is likely to be correlation
between element variations to a greater degree than in a discrete-component
circuit. Although this may be a disadvantage in some situations, it is often
possible to use this correlation to an advantage. Integrated differential amplifiers, for example, have extremely low sensitivity to parameter variations,
precisely because any gain fluctuation owing to a deviation 'in one element
value tends to be canceled by a corresponding variation in another. Again,
when the element variations are due to temperature fluctuation, it is often
possible to arrange the temperature dependence of integrated resistors and
capacitors to be such as to cause compensating variations.
Another practical problem arising with active RC synthesis, and not associated with passive-circuit synthesis, is that of circuit instability. One class
of instability can arise simply through element variation away from nominal
values, which may cause a left half-plane transfer-function pole to move into
the right half-plane--a phenomenon not observed in passive circuits. This
phenomenon is essentially a manifestation of the sensitivity problem, and
SEC. 13.2
OPERATIONAL AMPLIFIERS AS CIRCUIT ELEMENTS
513
techniques aimed at combatting the sensitivity problem will serve here. More
awkward are instabilities arising from parasitics contained within the active
elements; a controlled source may he modeled as having a gain K, when
the gain is actually Ka/(s a) for some large a. Designs may neglect the
presence of the pole and not predict an oscillation, which would be predicted
were the pole taken into account. Techniques for combatting this problem
generally revolve around the addition of compensating resistor and capacitor
elements to the basic active element (see, e.g., [31 for a discussion of some of
these techniques).
+
13.2
OPERATIONAL AMPLIFIERS
AS
CIRCUIT ELEMENTS
In this section we examine the operational amplifier, which is
the basic active device to be used in most of the state-space synthesis procedures to be presented. We are interested particularly in its use to fonn
finite gain, summing and integrating elements, both inverting and noninverting.
Basically, an operational amplifier is simply an amplifier of very highgain,
usually with two inputs, called noninverting and inverting. A signal applied
at the inverting input appears at the output both amplified and reversed in
polarity. The usual representation of the differential-input operational amplifier is shown in Fig. 13.2.la; for many applications the noninverting input
terminal will be grounded, and the resulting single-ended operational ampli-
FIGURE 13.2.1. (a) Differential-Input Operational Amplifier; (b) SingQEnded Operational Amplifier.
514,
ACTIVE STATE-SPACE SYNTHESIS
CHAP. I 3
fier is depicted as shown in Fig. 13.2.lb. Inversion of the signal polarity is
implicitly assumed.
The DC gain of an actual operational ampli6er may be anywhere between
10' to 10' with fall-off at comparatively low frequencies; the unity gainfrequency is typically a few megahertz or more. Ideally, the input impedance of
an operational amplifier is infinite, as is its output admittance. In practice,
a @re of 100 kilohms (kQ) is more likely to be attained for the input impedance and 50 Z1 for the output impedance. Ideally, amplifiers will be linear for
arbitrary input levels, show no dependence on temperature, offer equal input
impedance and gains at the input terminals with no cross coupling, and be
noiseless. In practice again, none of these idealizations is totally valid. Further, it is sometimes necessary to add additional components externally to
the operational amplifier to combat instability problems associated with
pbase shift in the amplifier. All these points are generally dealt with in manufacturers' literature.
Achieving a Finite Gain Using an Operational
Amplifier
Consider the arrangement of Fig. 13.2.2. Assuming that the
amplifier has a gain K and infinite input impedance and output admittance,
it is readily established that
FIGURE 13.2.2. Single-Ended Operational Amplifier Generating Gain Element of Gain -(Rz/R1).
-
R,
F - - R L [ l + (J/G(l+ (R~/RI))I
v o
so that for very large K, we can write
In this way we can build constant negative-voltage-gain amplifiers. Note
that by taking R, = R,, the gain is -1, so that an inverting amplifier is
SEC. 13.2
OPERATIONAL AMPLIFIERS AS CIRCUIT ELEMENTS
515
obtained. A positive voltage gain is then achievable by cascadjngan inve~ing
amplifier with a negative-gain amplifier. Alternatively, a positive-voltage-gain
amplifier is achievable from a differential-input amplifier. Rather than discussing this separately, we can regard it as a special case of the differentialinput summer, considered below.
Summing Using a n Operational Amplifier
Consider the arrangement of Fig. 13.2.3, which generalizes that
of Fig. 13.2.2. Straightforward analysis will show that for infinite gain, input
impedance, and output admittance of the amplifier, one has
FIGURE 13.2.3. Single-Ended Operational Amplifier Gen-
erating a Summer.
This equation shows that, in effect, weighted summation is possible, with
the gain coefficients arbitrary except for their negative sign. limoduction of
inverting amplifiers or, alternatively, use of the differential-input operational
amplifier allows the achieving of positive gains in a summation. Consider
the arrangement of Fig. 13.2.4. In this case, again assuming a n ideal amplifier, we have
Integrating Using an Operational Amplifier
The single-ended operational arnpli6er circuit for integrating is
reasonably well known and is illustrated in Fig. 13.2.5, It is easy to show
51 6
ACTIVE STATE-SPACE SYNTHESIS
CHAP. 1 3
FIGURE 13.2.4.. Differential-Input Operational Amplifier
Generatinga Summer.
that the transfer function relating Vi to V. under the usual idealization
assumptions is
The noninverting integrator circuits are perhaps not as familiar; two of
them are shown in Fig. 13.2.6.The resistance bridge integrator of Fig. 13.2.6a
has a transfer function
FIGURE 13.2.5. Inverting Integrator Circuit.
SEC. 13.2
OPERATIONAL AMPLIFIERS AS CIRCUIT ELEMENTS
517
FIGURE 13.2.6. (a) Resistance Bridge Non-Inverting Integrator; (b) Balanced Time Constant Integratdr.
provided that R,R, = R,R,.The balanced time-constant integrator of Fig.
13.2.6b has a transfer function
provided that R,C,= R,C,.
It may be thought that in all cases one should use differential-input amplifiers if positive gains are required in an active circuit, rather than singleended amplifiers followed or preceded by an inverting amplifier. However,
518
ACTIVE STATE-SPACE SYNTHESIS
CHAP. 13
alignment or tuning of the circuit is usually easier when inverting amplifiers
are used, and there is lower sensitivity to variations in the passive elements.
Less compensation to avoid instability is also required, since when a singleended amplifier is used to provide a gain element, a summer, or an integrator,
there is no positive feedback applied around the amplifier. This is in contrast
to the situation prevailing when differential-input amplifiers are used with
feedback to the positive terminal, as in the circuit of Fig. 13.2.6a, for example.
Combinations of Integration and Summation
It is straightforwardto combine the operations of integration and
summation; Fig. 13.2.7 illustrates the relation
c
FIGURE 13.2.7. Summation and Integration.
A further important arrangement is that of Fig. 13.2.8, which illustrates
the equation
This equation may also he written
One can think of the R, feedback as turning an ideal integrator into a lossy
integrator-a notion better described by (13.2.8)-or
as providing negative
feedback of V. in conjunction with integration-a notion described by
(13.2.9).
TRANSFER-FUNCTION SYNTHESIS
FIGURE 13.2.8.
Problem
13.2.1
519
Summation with Lossy Integration.
Verify Eq. 13.2.3.
Verify that Eqs. (13.2.5) and (13.2.6) hold, subject to the resistance bridge
and time-constant balance conditions.
Problem Inthis section the fact that operational amplifiers can be used to provide
13.2.3
constant gain elements has been established. Show also that they can be
used to provide gyrators 161, negative resistors, and negative impedance
converters [7]. (A negative resistor is a one port. dehed by u -Ri
for a positive constant,R. A negative impedance converter is a two port
defined by equations u, = kv2, il = kiz for some constant k.)
Problem
13.2.2
;=
13.3
TRANSFER-FUNCTION SYNTHESIS USING
SUMMERS AND INTEGRATORS
As we shall see in this section, state-space synthesis of transferfunction matrices is particularly straightforward when the available elements
are resistors, capacitors, and operational amplifiers. We shall begin our
discussions by considering the synthesis of transfer-function matrices, and
then specialize first to scalar transfer functions, and then to degree 1 and
degree 2 transfer functions. The synthesis of degree 2 transfer functions will
be covered at greater length in the next section.
Suppose that T(s)is an m x p transfer function; for convenience, we shall
suppose that no input is to be observed at the same port as any output. We
520
ACTIVE STATE-SPACE SYNTHESIS
shall suppose too that T(w)
known for T(s):
CHAP. 13
< oo, and that a state-spaceset of equations is
Consider the arrangement of Fig. 13.3.1, which constitutes a diagrammatic
representation of (13.3.1). Observe further that practical implementation of
the scheme of Fig. 13.3.1 requires constant gain elements, summers, and
integrators, and these are precisely what operational ampli6ers can provide.
In other words, Eq. (13.3.1) are immediately translatable into an active RC
circuit, using the arrangement of Fig. 13.3.1 in conjunction with the schemes
discussed in the last section for the use of operational amplifiers.
FIGURE 13.3.1. Block Diagram Representation of State
Space Equations.
Example
13.3.1
Consider the state equations
Figure 13.3.2a shows a block diagram containing integrators, gain blocks,
and summers that possesses the requisite state-space equations relating
ul and uz to y. Figure 13.3.2b shows the translation of this into a scheme
involving operational amplifiers, resistors, and capacitors. Notice that
SEC. 13.3
TRANSFER-FUNCTION SYNTHESIS
521
the outputs of amplifiers A , and Az are -xl and -x,, respectively; it
would be pointless to use two inverting amplifiers to attain xl and xz,
since another inverting amplifier would be needed to obtain y from xl
and x,.
Notice that it is straightforward to obtain an infinite number of syntheses
of a prescribed T(s). Change of the state-space coordinate basis will cause
changes in F, G, and H in (13.3.1) with resultant changes in the gain values,
but the general scheme of Fig. 13.3.1 will of course remain unaltered.
Suppose now that T(s) is a scalar, given by
For this scalar T(s) we can write down canonical forms, and study the structure of the associated circuit synthesizing T(s) a little more closely. As we
know, a quadruple (F, g, h, j) realizing T(s) is given by
Y
(a
FIGURE 13.3.2. (a) Synthesis for Example 12.3.1, Showing
Integrators, Gain Blocks and Summers.
522
ACTIVE STATE-SPACE SYNTHESIS
CHAP. 1 3
FIGURE 13.3.2. (b) Synthesis of Example 12.3.1, Showing
Operational Ampliiiers, Resistors and Capacitors.
A scheme for synthesizing T(s) based on (13.3.3) is shown in Fig. 13.3.3.
Notice that, in contrast to the scheme of Fig. 13.3.1, all variables depicted
are scalar. The fact that so many of the entries of the F matrix and g vector
are zero meaas that the scheme of Fig. 13.3.3 is economic in terms of the
FIGURE 13.3.3. Block Diagram Representation of a State-Space Realization of a Scalar Transfer Function.
524
ACTIVE STATE-SPACE SYNTHESIS
CHAP. 13
number of gain and summing elements required, and this may be an attractive feature. On the other hand, there may be reasons against use of the
scheme. First, the dynamic range of some of the variables may exceed that
achievable with the active components; second, the performance of the circuit may be very sensitive to variations in the individual components. In
rough terms, the basic reason for this is that the zeros of a polynomial may
vary greatly when a coefficient of the polynomial undergoes small variation.
Therefore, small variation in a component synthesizing a, in (13.3.3) and
(13.3.2) may cause large variation in the zeros of the denominator of T(s);
consequently, if such a zero is near the jw axis, T(jw) for jw near this zero
will vary substantially.
The dynamic-range problem can sometimes be tackled by scaling. Let A
be a diagonal matrix of positive entries, chosen so that the entries of the
state vector 2 = Ax take values that are more acceptable than those of
the state vector x. Then we simply build a realization based on MA-', Ag,
A-'h; notice that M A - ' and Ag have the same sparsity of elements as do
F and g, and almost the same resultant economy of realization. (Some unity
entries in F and g may become nonunity entries in AFA-' and Ag. Extra
elements may then be required in the circuit simply as a result of this change.)
Application of the scaling'idea is of course not restricted to the simulation
of scalar transfer functions.
A basis for tackling the sensitivity problem is the observation that the
problem is not particularly significant in the case of transfer functions of
degree 1 and degree 2. This is borne out by practical experience and by
extensive calculations made on the performance of the biquad circuit, which
is a circuit synthesizing a degree 2 transfer function to be discussed in the
next section. An arbitrary degree n transfer function T(s) can always be
expressed as the product (or the sum) of a set of degree 1 and degree 2 transfer functions. Expression as a product requires factoring the numerator and
the denominator, and pairing numerator terms with denominator terms. We
allow pairing of constant, linear, or quadratic numerator terms with a quadratic denominator term, and pairing of a constant or linear numerator term
with a linear denominator term. Expression of T(s) as a sum of degree 1 and
degree 2 transfer functions requires factoring of the denominator of T(s) and
the derivation of a partial fraction expansion. Care must be taken in obtaining the factors if T(s) is not already in factored form; it may be quite difficult
to obtain accurate coefficients in the factors if high-order polynomials need
to be factored.
When T(s) is expressed in product form, each component of the product
is synthesized individually, and the collection is cascaded. When T(s) is
expressed in sum form, each summand is still synthesized individually, but
the collection is put in parallel in an obvious fashion. From the viewpoint
SEC.
13.3
TRANSFER-FUNCTION SYNTHESIS
525
of eliminating sensitivity problems, the product form is preferred. Problem
13.3.4 asks for an argument justifying this conclusion.
As already noted, synthesis of scalar, degree 2 transfer functions will be
studied in the next section. Synthesis of degree 1 transfer functions is almost
trivial, but it is helpful to note that frequently no active element will be
required at all. The circuit of Fig. 13.3.4 has a (voltage-to-voltage) transfer
function
FIGURE 13.3.4. Realization of a First Order Transfer
Function Using Passive Components Only.
where C,= c,, C, = (I - c,), R, = c;', and R, = (a, - c,)-'. These equations show that it is always possible to set C,or C, equal to zero; they also
show that the coeBicients in (13.3.4) are constrained by c, 5 1 and c,/a,
< 1. Inclusion of a constant gain element at the output (such as might be
needed for buffering anyway) enables these constraints to be overcome.
FIGURE 13.3.5. Realization of a First Order Transfer
Funa'on, Using Rwistors, Capacitors and an Operational
Amplifier.
526
ACTIVE STATE-SPACE SYNTHESIS
CHAP. 13
Degree 1 transfer functions can also be synthesized with the circuit shown
in Fig. 13.3.5. The associated transfer function is
A right-half-plane zero can be obtained with an extra amplifier.
Note that it is never possible to synthesize a degree 2 transfer fmction
possessing complex poles with resistor and capacitor elements only. This
fact is proved in many classical texts (see, e.g., [a]).
Problem Draw three block diagrams to illustrate three different syntheses of the
voltage-to-voltage third-order Buttemoth transfer functions T(s) =
ll(s3 232 2s 1) obtained by not factoring the denominator, and
by expressing T(s) both as a sum and a product.
Problem Draw circuits showing resistor and capacitor values corresponding to
13.3.2 each block diagram obtained in Problem 13.3.1.
Problem How may a transfer function matrix synthesis problem be brokenup into
13.3.3
elemental degree 1 and degree 2 synthesis problems7
Problem Argue why, from the sensitivity point of vjew, it is better to express
13.3.4
T(s) as a product than a sum and to synthesize accordingly.
13.3.1
+ + +
The general transfer function synthesis problem is, as we have
seen, reducible to the problem of first-order and second-order transfer-funo
tion syothesis. We have already dealt with first-order synthesis. Here we
study second-order synthesis, exposing a most important circuit, originating
with 191 and developed further in 110-121. In [ll] it was named the biquad.
Our discussion will lirst cover a general description of the circuit; then we
shaU discuss questions of sensitivity, following this by consideration of a
number of other practical points.
Our starting point is the general second-order transfer function
We shall assume throughout this section that T(s)has its poles in Re [J] < 0.
Further, we shall only discuss in detail the case when T(s) has its zeros in
Re [s] < 0. This latter restriction is however inessential.
Provided that c,a2 - c, is nonzero, T(s)has a minimal realization
ACTIVE STATE-SPACE SYNTHESIS
528
where A, and 2, are arbitrary positive numben. [Pioblem 13.4.1 requests
verification that (13.4.2) defines a minimal realization of T(s).]
A circuit synthesis suggested by (13.4.2) is shown in Fig. 13.4.1 for the case
c,a, > c, and c3al > c , . (Straightforwardvariations apply if these inequalities do not hold.) Element values are
R,C,, and C, arbitrary
R, =
1
A,(c3a, - c 3 c l
R,
= A,R
I c a - c,
R, = -2
=
R
2, c,a, - c,
(13.4.3)
R
R, = c,
The entry x , of the state vector x = [x, x,r will be observed at the output
of the operational amplifier A , , and x, at the output of the operational amplifier A,.
The amplifiers A,, A,, and A , constitute the main loop, with amplifier A3
serving simply as an inverting amplifier. Note that one of the first two amplifiers could be replaced by a noninverting integrator, but at the expense of
heightening any potential sensitivity and instability problem associated with
the nonideal nature of the amplifiers. Generally, the gain around the loop
should be equally distributed between the two integrating amplifiers, so that
R,C, = R,C,, or 1, = 1.
The amplifier A, serves simply to construct the output a . a weighted sum
of the input and the components of the state vector. With the signs assumed,
this amplifier is used in a single-ended configuration, though it may well
have to be used in a differential-input configuration (or in conjunction with
an inverting amplifier) for certain parameter values.
Sensitivity
The critical aspects of input-output performance of thebiquad
are the inverse damping factor Q and resonant frequency a, associated with
THE BIQUAD
SEC. 13.4
a
529
the denominator of the transfer function. In our case, Q =
and w,
= &. The quantities of interest are the sensitivity coefficients SF and
where x i s an arbitrary circuit parameter, and
e,
The sensitivity coefficients for x identified with any passive component in
the biquad circuit come out to be never greater than 1 in magnitude; some
are zero, others are $. These sorts of sensitivity coefficientsare of the same
order as apply when purely passive syntheses of a transfer function (including
inductor elements) are considered. Sensitivities with respect to operational
amplifier gains are at the most in magnitude approximately 2Q/K,, where
K, is the gain. It follows that a variation of 50 percent in an operational
amplifier gain-as may well occur in practice--will cause a variation
no greater than approximately 1 percent in Q or w, if Kt2 l00Q; the
conclusion is that a Q of several hundred can be obtained with high precision.
Particularly when integrated circuits are used, sensitivity problems associated with temperature variation can often be dealt with by arranging that
the effect of some components cancels that of others. References [lo] and [I 11
contain additional details.
Miscellaneous Practical Points

A number of points are made in [10-12] which should be studied in detail by anyone interested in constructing biquad circuits. Some of these points will be outlined here.

Standardization. Subject to restrictions such as c3a1 > c2, etc., an identical form of circuit is used for all second-order transfer functions; only the resistor values need be changed. Adjustment of key circuit parameters can be effected by adjusting one resistor per parameter (with no interaction between the adjustments). This means that the design of variable filters is simplified.

Cascading. Because the output is inherently low impedance (and the input can be made a moderately high impedance), cascading to build up syntheses of higher-order transfer functions is straightforward.

Absolute Stability. The natural frequencies remain in the left half-plane irrespective of passive component values. In this restricted sense, the circuit possesses absolute stability. However, stability difficulties can arise on account of nonideal behavior of the operational amplifiers.
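The absolute-stability claim amounts to the observation that a quadratic s^2 + a1 s + a2 with positive coefficients always has both roots in Re[s] < 0. A brief numerical confirmation (ours):

import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    a1, a2 = rng.uniform(0.01, 100.0, size=2)  # any positive values
    roots = np.roots([1.0, a1, a2])
    assert np.all(roots.real < 0)
print("all sampled denominators have left half-plane roots")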
Integrability. The full circuit can be integrated if desired. Trimming may be carried out solely with resistor adjustment, and technology allows such trimming to be effected through anodizing of thin-film resistors. It is also possible to largely overcome the problem of limitation of the total resistance in any integrated version of the circuit with an ingenious resistance-multiplication scheme [12].

Q Enhancement. The finite bandwidth of the operational amplifiers causes the actual Q to exceed the design Q, even to the point of oscillation. Reference [11] includes formulas for Q enhancement and indicates compensation techniques for minimizing the problem.

Noise. Satisfactory output signal-to-noise ratios are achievable, e.g., 80 dB, with, if necessary in the case of low signal level, downward adjustment of the gain-determining resistor of Fig. 13.4.1. Note that, in effect, such an adjustment is equivalent to scaling.

Distortion. It is claimed in [11] that better distortion figures can often be obtained with the biquad than with some passive circuits, particularly when the gain-determining resistor is adjusted.
Problem 13.4.1   Verify that the state-space realization defined as (13.4.2) is a realization of T(s), as given in (13.4.1).

Problem 13.4.2   Suggest changes to the design procedure for the case when c1a2 - c3 = 0.

Problem 13.4.3   Which resistors may most easily be varied in the biquad circuit to alter Q, ω0, and the voltage gain at resonance (see [11])?
13.5 OTHER APPROACHES TO ACTIVE RC SYNTHESIS
Before leaving the subject of active-circuit synthesis, we mention
briefly two additional topics. First, we shall comment on the use of gyrators,
negative resistors, negative impedance converters, and controlled sources in
active synthesis. Then we conclude the section by pointing out how a dynamic
synthesis problem, that is, a problem involving synthesis of a nonconstant
transfer-function matrix or even hybrid matrix, can be replaced in a straightforward way by a problem requiring synthesis of a constant matrix.
Synthesis with Gyrators
Though gyrators are constructible at microwave frequencies and can be operated with no external power source, this is not the case at audio or even RF frequencies. In order to build a gyrator, either a Hall-effect device can be used, or a circuit comprising transistors and resistors. This means, though, that any gyrator in a circuit is replaceable by active devices and resistors; and since, as we know, any inductor can be replaced by a gyrator terminated in a capacitor, a mechanism is available for replacing inductors by a set of active RC elements. Further, as we noted in Chapter 2, a transformer may be replaced by a gyrator cascade, so even transformers may be replaced by a set of active RC elements. Consequently, any passive synthesis of an impedance or transfer-function matrix can be replaced by an active RC synthesis, with the aid of active realizations of the gyrator.
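The inductor replacement invoked here is the familiar gyrator-capacitor equivalence: a gyrator of gyration resistance r terminated at its second port in a capacitor C presents the driving-point impedance of an inductor L = r^2 C. A minimal sympy check, assuming the gyrator convention v1 = -r i2, v2 = r i1 (sign conventions vary between authors):

import sympy as sp

s, r, C = sp.symbols('s r C', positive=True)
i1, i2, v1, v2 = sp.symbols('i1 i2 v1 v2')

eqs = [
    sp.Eq(v1, -r * i2),        # gyrator equations
    sp.Eq(v2, r * i1),
    sp.Eq(v2, -i2 / (s * C)),  # capacitor termination: the current into
]                              # port 2 is supplied by the capacitor

sol = sp.solve(eqs, [v1, v2, i2], dict=True)[0]
Zin = sp.simplify(sol[v1] / i1)
print(Zin)  # C*r**2*s, i.e., an inductor of value L = r**2 * C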
The resulting structure may be inefficient or awkward to actually construct, for it does not follow that a structure developed for passive synthesis should be satisfactory for active synthesis. Nevertheless, the method does have some appealing features. As pointed out by Orchard [13], active RC circuits derived this way can be expected to enjoy very good sensitivity properties when passive element variation is considered. Variations within active elements, including the active circuits providing a gyrator, may, however, cause a problem. For example, phase shift within the gyrators can cause instability.

Reference [1] gives one of the most extensive treatments of the use of gyrators in active-circuit design. Active circuits realizing gyrators will be found there, together with variations on passive-circuit synthesis procedures designed to accommodate the peculiarities of practical gyrators.
Negative Resistors, Negative Impedance Converters, and Controlled Sources
Historically, these three classes of elements were the first to be
considered and used in active RC synthesis. Negative resistors are probably
the least interesting and were the least used. Negative resistors do not exist
in practice, and so they must be obtained with active devices. Two very
straightforward ways of obtaining a negative resistor are through the use of
a negative impedance converter (dealt with below), and through the use of
a controlled source. In a sense then, active RC synthesis with negative resistors is subsumed by active RC synthesis using negative impedance converters
or controlled sources.
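As an indication of the controlled-source route (essentially Problem 13.5.3 below), suppose an ideal voltage amplifier of gain K has its input node tied to its output through a positive resistor R; the driving-point resistance seen at the input is then R/(1 - K), which is negative whenever K > 1. A sketch of the calculation:

import sympy as sp

v, R, K = sp.symbols('v R K', positive=True)

# The ideal amplifier draws no input current, so the source current
# all flows through R toward the amplifier output, which sits at K*v.
i = (v - K * v) / R
print(sp.simplify(v / i))  # R/(1 - K): negative for K > 1; K = 2 gives -R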
The negative impedance converter (NIC) is a two-port device with the property that if an impedance Z(s) is used to terminate one port, an impedance -Z(s) is observed at the other port. Very many of the earliest attempts at active synthesis relied on use of negative impedance converters, with one NIC being permitted per circuit (see, e.g., [14]). Most of the early circuits were particularly beset with sensitivity problems.
The controlled source as an active RC circuit element has enjoyed greater popularity than the negative resistor or the NIC. In ideal terms, it is a constant-gain voltage amplifier, with zero input admittance and zero output impedance. Departures from the ideal prove less embarrassing and are more easily dealt with than in the case of the negative resistor and the NIC; these factors, combined with ease of construction, perhaps account for its popularity since the early suggestion of [15]. Important refinements are still being made, which make the circuits more suitable for integration (see, e.g., [1], where unpublished work of W. J. Kerwin is discussed). The controlled source would appear to have greater sensitivity problems associated with its use than the operational amplifier, at least in most situations. The fact that a lower gain is used than in the operational amplifier is, however, a point in its favor.
Conversion of a Dynamic Problem to a Nondynamic Problem

In view of the above discussion, it should be clear that any technique that may be used for passive synthesis may also be used as a basis of active RC synthesis; if such a technique requires the use of gyrators, inductors, or transformers, the active equivalents of these components are available. In fact, the effort involved in such techniques can be greatly reduced, since passivity is never really needed at any point in the synthesis. We now illustrate this point in detail.

We shall consider the question of hybrid-matrix synthesis; this, as we know, includes immittance synthesis and transfer-function-matrix synthesis. In case we aim to synthesize a transfer-function matrix, we conceive of its being embedded within a hybrid matrix.
Let X(s) be an m x m hybrid matrix, not necessarily passive, possessing an n-dimensional state-space realization {F, G, H, J}. Let us conceive of X(s) being synthesized by a nondynamic (m + n)-port network N terminated in capacitors (see Fig. 13.5.1). For convenience in the succeeding calculations we shall assume the capacitors to all have unit value; however, this is not essential to the argument.

FIGURE 13.5.1. Idea Behind Synthesis of X(s).

As we know, X(s) will be observed as required if N is chosen to have the constant hybrid matrix M given in (13.5.1). In this hybrid description the last n ports of N are assumed to be current excited, while excitation at the first m ports coincides with that used in defining exciting variables for X(s).

Evidently, the problem of synthesizing the hybrid matrix X(s) is equivalent to the problem of synthesizing the constant hybrid matrix M. This conversion of a dynamic synthesis problem to a nondynamic one has been achieved, it should be noted, at the expense of having to compute matrices of a state-space realization, which is hardly a severe problem.
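Since (13.5.1) is illegible in this scan, the sketch below only demonstrates the mechanism under an assumed sign convention: if the current entering each capacitor-terminated port is -C dv/dt, then the constant hybrid matrix M = [[J, H], [-G, -F]] is one consistent choice, and eliminating the capacitor ports recovers X(s) = J + H(sI - F)^{-1}G. The block structure, not the exact signs, is the point.

import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
F = rng.standard_normal((n, n))
G = rng.standard_normal((n, m))
H = rng.standard_normal((m, n))
J = rng.standard_normal((m, m))

# Candidate constant hybrid matrix for the nondynamic (m+n)-port
# (sign convention assumed, not taken from the book):
M = np.block([[J, H], [-G, -F]])

s = 1.5 + 0.7j
M11, M12 = M[:m, :m], M[:m, m:]
M21, M22 = M[m:, :m], M[m:, m:]

# Unit capacitors at the last n ports impose y2 = -s*u2; eliminating
# u2 from  y1 = M11 u1 + M12 u2,  y2 = M21 u1 + M22 u2  gives:
X_term = M11 - M12 @ np.linalg.inv(s * np.eye(n) + M22) @ M21
X_target = J + H @ np.linalg.inv(s * np.eye(n) - F) @ G
print(np.allclose(X_term, X_target))  # True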
How may M be synthesized? There are a number of methods, but the simplest would appear to be a method based on the use of operational amplifiers as summers, together with voltage-to-current and current-to-voltage converters, as required. Problem 13.5.4 asks for details. As far as is known, this technique has yet to be used for practical synthesis, and it is not known how it would compare with the earlier techniques described.
Problem 13.5.1   In using gyrators comprised of active devices and resistors, it is particularly helpful to be able to ground two of the gyrator terminals. (One reason is that connections to a power supply can be achieved more straightforwardly.) Demonstrate the equivalence of Fig. 13.5.2, which is of practical benefit in converting classical ladder structures to active RC structures (see [16]). Demonstrate also the equivalence of Fig. 13.5.3 (see [17]); this equivalence can be useful when a capacitor is to be realized in an integrated circuit.

FIGURE 13.5.2. Inductor Equivalence Involving Grounded Capacitor and Gyrators.

FIGURE 13.5.3. Replacement of Ungrounded Capacitor by Grounded Elements.
Problem 13.5.2   Verify the circuit equivalence of Fig. 13.5.4, allowing replacement of coupled coils by a capacitor-gyrator circuit. Obtain relations between the element values [18].

FIGURE 13.5.4. Replacement of Coupled Coils by Capacitors and Gyrators.

Problem 13.5.3   Show how a negative resistance can be obtained using a controlled source, such as an ideal voltage gain amplifier, and a positive resistor.

Problem 13.5.4   Discuss details of the synthesis of the constant hybrid matrix M of (13.5.1).
REFERENCES

[1] R. W. NEWCOMB, Active Integrated Circuit Synthesis, Prentice-Hall, Englewood Cliffs, N.J., 1968.
[2] P. M. CHIRLIAN, Integrated and Active Network Analysis and Synthesis, Prentice-Hall, Englewood Cliffs, N.J., 1967.
[3] S. K. MITRA, Analysis and Synthesis of Linear Active Networks, Wiley, New York, 1969.
[4] K. L. SU, Active Network Synthesis, McGraw-Hill, New York, 1965.
[5] R. W. NEWCOMB and B. D. O. ANDERSON, "Transfer Function Sensitivity for Discrete and Integrated Circuits," Microelectronics and Reliability, Vol. 9, No. 4, July 1970, pp. 341-344.
[6] A. S. MORSE and L. P. HUELSMAN, "A Gyrator Realization Using Operational Amplifiers," IEEE Transactions on Circuit Theory, Vol. CT-11, No. 2, June 1964, pp. 277-278.
[7] A. W. KEEN, "Derivation of Some Basic Negative-Immittance Converter Circuits," Electronics Letters, Vol. 2, No. 3, March 1966, pp. 113-115.
[8] E. A. GUILLEMIN, Synthesis of Passive Networks, Wiley, New York, 1957.
[9] W. J. KERWIN, L. P. HUELSMAN, and R. W. NEWCOMB, "State-Variable Synthesis for Insensitive Integrated Circuit Transfer Functions," IEEE Journal of Solid-State Circuits, Vol. SC-2, No. 3, Sept. 1967, pp. 87-92.
[10] J. TOW, "Active RC Filters - A State-Space Realization," Proceedings of the IEEE, Vol. 56, No. 6, June 1968, pp. 1137-1139.
[11] L. C. THOMAS, "The Biquad: Part I - Some Practical Design Considerations," IEEE Transactions on Circuit Theory, Vol. CT-18, No. 3, May 1971, pp. 350-357.
[12] L. C. THOMAS, "The Biquad: Part II - A Multipurpose Active Filtering System," IEEE Transactions on Circuit Theory, Vol. CT-18, No. 3, May 1971, pp. 358-361.
[13] H. J. ORCHARD, "Inductorless Filters," Electronics Letters, Vol. 2, No. 6, June 1966, pp. 224-225.
[14] T. YANAGISAWA, "RC Active Networks Using Current Inversion Type Negative-Impedance Converters," IRE Transactions on Circuit Theory, Vol. CT-4, No. 3, Sept. 1957, pp. 140-144.
[15] R. P. SALLEN and E. L. KEY, "A Practical Method of Designing RC Active Filters," IRE Transactions on Circuit Theory, Vol. CT-2, No. 1, March 1955, pp. 74-85.
[16] A. G. J. HOLT and J. TAYLOR, "Method of Replacing Ungrounded Inductors by Grounded Gyrators," Electronics Letters, Vol. 1, No. 4, June 1965, p. 105.
[17] M. BHUSHAN and R. NEWCOMB, "Grounding of Capacitors in Integrated Circuits," Electronics Letters, Vol. 3, No. 4, April 1967, pp. 148-149.
[18] B. D. O. ANDERSON, W. NEW, and R. W. NEWCOMB, "Proposed Adjustable Tuned Circuits for Microelectronic Structures," Proceedings of the IEEE, Vol. 54, No. 3, March 1966, p. 411.
EPILOGUE:
What of the Future?
An epilogue can only be long enough to make one main point,
and our main point is this. Network analysis and synthesis via state-space methods are comparatively new notions, and few
would deny that they are capable of significant development.
We believe these developments will be exciting and will increase
the area of practical applicability of the ideas many times.
Subject Index
A
Active networks:
state-space equations, 200
Active synthesis:
effect of integrated circuit use, 512
instability, 512
notions of minimality, 512
parasitic effects, 512-513
sensitivity problem, 512
Adams' predictor-corrector algorithm, 76
Admittance matrix, 28
degree:
relation to scattering matrix
degree, 122
relation with other matrices, 31
symmetry property for reciprocal
network, 60
Admittance synthesis:
preliminary simplification, 348
Algorithm:
order, 77
stability, 77
for state-space equation solution,
74-81
Analysis:
distinction from synthesis, 3
B
Bayard synthesis, 427
Biquad, 526
practical issues, 529-530
sensitivity, 528
Bounded real lemma, 213,307
associated spectral factor, 310
equivalence to quadratic matrix inequality, 319
general solution of equations, 319
lossless case, 312
proof, 309,313
solution of equations, 314
via Riccati equation, 316
via spectral factorization, 315
statement, 308
use to define special coordinate
basis, 443
Bounded real matrix, 307
occurrence in transfer function
synthesis, 505
540
SUBJECT INDEX
Bounded real property, 12,42,50
lossless case, 48
for rational matrices, 46
of scattering matrix, 43
tests for, 49
C

Capacitor, 14
lossless property, 20
time-variable case, 20
when ungrounded:
equivalence to circuit with grounded elements, 533
Cascade-load connection, 35
Cauer synthesis, 363
Cayley-Hamilton Theorem:
use in evaluating matrix exponential, 70
Circle criterion, 42,260
Circuit element, 14
Classical network theory:
distinction from modern network theory, 4-6
Complete controllability, 82
preservation under co-ordinate basis change, 85
preservation under state-variable feedback, 85
tests for, 82
Complete observability, 89
preservation under co-ordinate basis change, 91
preservation under output feedback
tests for, 90
Controllability (see also Complete controllability), 82
Controlled source, 19
model of gyrator, 209
model of transformer, 209
Control system stability, 42
Covariance of stationary processes, 42
Current source, 18

D

Darlington synthesis, 440
scalar version of Bayard synthesis, 426
Degree:
of network:
relation to reactive element count, 199,329
of transfer function matrix, 116-127
definition, 117
McMillan definition, 126

E

Euler formula, 75

G

Gauss factorization, 427
Generalized positive real lemma, 259
Generalized positive real matrix, 259
Gyrator, 14
controlled source model, 209
as destroyer of reciprocity, 61
minimal number in synthesis, 385
use in inductor equivalence, 335
use in transfer function synthesis, 530

H

Hankel matrix, 98
relation with minimal realization, 99
use for Silverman-Ho algorithm, 113
Heun algorithm, 75
Ho algorithm, 104, 113
Hybrid matrix, 29
application of positive real lemma, 229
associated with every passive network, 171
degree, 199
relation to scattering matrix degree, 122
of reciprocal network, 61
use in state-space equation derivation via reactance extraction, 163
Hybrid matrix synthesis:
reciprocal case, 397

I

Ideal source, 18
Immittance matrix, 28
positive real property for passive
network, 52
Impedance matrix, 28
degree:
relation to scattering matrix
degree, 122
relation with other matrices, 31
symmetry property for reciprocal
network, 60
synthesis of constant matrix, 373
Impedance matrix synthesis:
Bayard, 427
lossless case, 376
nonminimal reactive, minimal resistive and reciprocal, 427
preliminary simplification, 348
via reactance extraction, 372
reciprocal case:
via reactance extraction, 408
reciprocal lossless case, 406
reciprocal synthesis for symmetric matrix, 393
reciprocal synthesis via resistance extraction, 418
via resistance extraction, 386
using minimal number of gyrators, 385
Independent source, 18
Inductor, 14
equivalence to grounded capacitor
and gyrators, 533
equivalence to gyrator and
capacitor, 335
lossless property, 20
Inverse linear optimal control problem,
42,260
Inverse transfer function matrix, 67
L
Ladder network:
state-space equations, 147
Linear circuit, 15
Linear matrix equation, 134
Linear optimal control inverse problem, 42,260
Linear systems:
impulse response description, 64
stability of, 127-138
state, 65
Linear systems (cont.):
state-space description, 65
transfer function matrix description, 64
Lossless bounded real lemma, 312
Lossless bounded real property, 12,48,50
tests for, 49
Lossless circuit element, 16
Lossless extraction:
preliminary synthesis step, 347-368
Lossless impedance synthesis, 376
Lossless network, 22
lossless bounded real property of
scattering matrix, 48
lossless positive real property, 56
Lossless positive real lemma, 221,229
solution of equations, 288
Lossless positive real matrix, 56
Lossless positive real property, 12
testing for, 57
Lossless property, 11
Lossless reciprocal synthesis:
of scattering matrices, 459
Lossless synthesis:
for symmetric impedance matrix, 406
using preliminary simplification procedures, 363
Losslessness:
connection between network and circuit elements, 27
Lumped circuit, 15
Lyapunov lemma, 130
extensions, 135-138
M
Markov matrix, 98
Matrix exponential, 67
calculation of, 70
Maximum phase spectral factor, 243,282,303,505
Minimal gyrator synthesis, 385
Minimal reactive synthesis, 329
Minimal realization (see also
Realization), 81
Minimal resistive synthesis, 345
Minimum phase spectral factor, 238
obtainable via Riccati equation, 271
SUBJECT INDEX
542
Modern network theory:
distinction from classical network
theory, 4-6
m-port network (see also Network
and Passive network), 12
Multiport ideal transformer, 16-18
Multiport network (see also Network),
12
Multiport transformer, 17
lossless property, 20
orthogonal, 32,41
Multistep algorithm, 76

N

Negative resistor, 201
Network (see Passive network)
augmented, for scattering matrix definition, 30
losslessness of, 22
passivity of, 21
state-space equations:
simple technique for derivation, 145
without state-space equations, 202
Norator, 21
Nullator, 21,201

O

Observability (see also Complete observability), 89
Operational amplifier, 513
for finite gain, 514
for integrating, 515
for summing, 515
Orthogonal transformer, 32,41

P

Parallel connection, 35
Passive circuit element, 15
Passive network, 21
bounded real property of scattering matrix, 44
degree of:
relation to reactive element count, 199,329
existence of hybrid matrix, 171
existence of scattering matrix, 38
general structure of state-space equations, 196
positive real property of immittance matrix, 52
properties of state-space equations, 198
Passivity, 11
connection between networks and circuit elements, 25-27
Popov criterion, 42,261
Positive real lemma, 213
application:
circle criterion, 260
generalized positive real lemma,
259
inverse linear optimal control
problem, 260
Popov criterion, 261
spectral factorization by algebra,
262
applied to hybrid matrix, 229
change of co-ordinate basis, 255
equivalence to quadratic matrix
inequality, 293
extensions, 223
extension to nonminimal realization,
223
generalized, 259
generating maximum phase spectral
factor, 243
lossless:
solution, 288
lossless case, 221,229
other proofs, 258-259
proof based on energy balance arguments, 227
proof based on network theory
results, 223
proof based on reactance extraction,
224
proof for lossless positive real
matrix, 257
singular case, 242
solution computation via quadratic
matrix equation, 270
solution computation via Riccati
equation, 270
solution of equations:
convexity property, 305
Positive real lemma (cont.):
solution of equivalent quadratic matrix inequality, 299-303
solution via spectral factorization
spectral factorization proof, 243
statement, 218
sufficiency proof, 219
use for co-ordinate basis change, 370
variational proof, 229
Positive real matrix:
associated Riccati equation, 236
decompositions, 216-218
generalized, 259
properties of, 216
Positive real property, 12,51
of immittance of passive network, 52
for lossless networks, 56
for rational matrices, 52
relation with bounded real property, 51
testing for, 57
time domain statement, 230
Predictor-corrector algorithm, 75
due to Adams, 76,81
Predictor formula, 75

Q

Quadratic matrix equation:
solution via eigenvalue computation, 279
solution via eigenvector computation, 277
solution via Newton-Raphson method, 283
solution via recursive difference equation, 281
Quadratic matrix inequality:
derived from positive real lemma, 293
equivalent to bounded real lemma, 319
solution of, 299-303

R

Rational bounded real property, 46,50
Rational positive real property, 52
Reactance extraction, 159,330,334
impedance matrix synthesis, 372
lossless scattering matrix synthesis, 449
reciprocal synthesis for impedance matrix, 408
scattering matrix synthesis, 444
reciprocal case, 454
synthesis of scattering matrix, 340
use in positive real lemma proof, 224
use in state-space equation derivation, 156,170
Reactance extraction synthesis, 335
Realization, 96
construction from transfer function matrix, 104-116
dimension of, 96
minimal, 81,96-104
construction from transfer function matrix, 104-116
construction via completely controllable realizations, 110
extraction from arbitrary realization, 94
relation to other minimal realizations, 101
for transfer function, 105
Reciprocal lossless synthesis:
for impedance matrix, 406
Reciprocal network, 58
hybrid matrix property, 61
symmetry of immittance matrix, 60
symmetry of scattering matrix, 61
Reciprocal synthesis:
for constant hybrid matrix, 397
for impedance matrix:
Bayard, 427
minimal resistive, 427
nonminimal resistive and reactive, 393
via reactance extraction, 408
via resistance extraction, 418
for scattering matrix:
via reactance extraction, 454
Reciprocity, 11,213
in state-space terms, 320-325
Resistance extraction, 330
for impedance synthesis, 386
544
SUBJECT INDEX
Resistance extraction (cont.):
reciprocal synthesis of impedance matrix, 418
scattering matrix synthesis, 344,463
Resistance extraction synthesis, 331
Resistor, 14
negative, 201
passive property, 20
Riccati equation, 233
associated with positive real matrix, 236
obtaining minimum phase spectral
factor, 271
relation to positive real lemma, 237
solution of, 273
in solution of bounded real lemma
equations, 316
in solution of positive real lemma
equations, 270
solution via linear equation, 274
Round-off error, 78
Runge-Kutta algorithm, 77, 81
S
Scattering matrix, 30
with arbitrary reference, 32
augmented network definition, 30
bounded real property, 43
for passive network, 44
degree:
relation to immittance matrix
degree, 122
existence for passive network, 38
of lossless network, 48
relation to augmented admittance
matrix, 32
relation with other matrices, 31
symmetry property for reciprocal
network, 61
for transformer, 31
Scattering matrix multiplication, 359
Scattering matrix synthesis:
lossless, 449
lossless reciprocal, 459
preliminary simplification, 358
via reactance extraction, 339,444
reciprocal reactance extraction, 454
via resistance extraction, 463
Sensitivity reduction in control
systems, 42
Series connection, 33
Silverman-Ho algorithm, 104, 113
Single-step algorithm, 76
Singular synthesis problem:
conversion to nonsingular problem,
348
Smith canonical form, 123
Smith-McMillan canonical form:
relation to degree, 124
Spectral factor, 221
associated with bounded real
lemma, 310
computation via recursive difference
equation, 281
degree of, 249
derived by Riccati equation, 238
maximum phase, 243,282,303,305
minimum phase, 238
obtainable via Riccati equation,
271
nonminimal degree, 427
Spectral factorization, 219-221
by algebra, 262
Gauss factorization, 427
for matrices, 247
use in proof of positive real lemma,
243
use in solution of bounded real lemma equations, 315
Stability:
of algorithm, 77
of control systems, 42
linear systems, 127-138
Lyapunov lemma, 130
State:
for linear system, 65
State-space equations, 65
for active networks, 200
approximate solution of, 74
derivation via reactance extraction,
156,170
difficulties in using topology, 145
inference of stability, 128
for ladder network, 147
for network analysis, 143
difficulties in setting up, 144
occurrence of input derivative, 151
for passive networks:
general structure, 196
properties, 198
State-space equations (cont.):
relation to transfer function matrix,
66
for sinusoidal response, 80
solution of, 68
of symmetric transfer function
matrix, 320-325
use of Kirchhoffs Laws for
derivation, 146
Synthesis:
distinction from analysis, 3
relation of scattering and impedance matrix, 330
T
Tellegen's Theorem, 21-27
proof, 23
statement, 22
Time-invariant circuit element, 15
Transfer function matrix (see also
Transfer function synthesis),
64
degree, 116-127
inference of stability, 128
inverse of, 67
relation to state-space equations, 66
symmetry of, 103
state-space interpretation, 320-325
Transfer function synthesis, 474-507
with controlled source, 531
first order transfer function, 525
via first and second order transfer
functions, 524
Transfer function synthesis (cont.):
with gyrators, 530
with input loading, 501
with input and output loading, 503
relevance of bounded real
conditions, 505
with ladder network, 478
with negative impedance converter,
531
with negative resistor, 531
with nonreciprocal network, 479
with no reflection, 482
with output loading, 498
with reciprocal network, 490
using biquad, 526
using summers and integrators, 519
Transformer, 14
controlled source model, 209
orthogonal, 32
Transistor:
state-space equations for, 205
Truncation error, 77
U
Uncontrollable system, 86
V
Voltage source, 18
Z
Zero-initial-state response, 69
Zero-input response, 69
Author Index
A

Anderson, B. D. O., 27,29,31,36,38,42,52,62,171,211,218,224,232

B

Bashkow, T. R., 145,168,210
Bayard, M., 427,441
Belevitch, V., 16,62,385,404,463,473
Bers, A., 145,168,210
Bhushan, M., 534,535
Branin, F. H., 74,78,79,138,145,210
Brockett, R. W., 42,62,109,139,259,260,261,265,266,371,404
Bryant, P. B., 145,210

C

Carlin, H. J., 6,7,442,473
Chien, R. T. W., 39,62
Chirlian, P. M., 511,512,534
Chua, L. O., 74,138

D

Davies, M. C., 247,248,254,265
Desoer, C. A., 11,12,62,64,138
Dewilde, P., 478,508
Doob, J. L., 42,62
Duffin, R. J., 117,139

F

Fadeeva, V. N., 70,73,138
Faurre, P., 244,249,259,265
Fowler, E. R., 149,211,478,479,508
Frame, J. S., 79,139
Francis, N. D., 79,138
Friedland, B., 64,138

G

Ganapathy, S., 79,138
Gantmacher, F. R., 49,62,66,67,70,73,114,123,133,134,138,251,254,265,286,306,323,324,325,354,368,410,414,441,456,473,477,508
Giordano, A. B., 6,7

H

Hazony, D., 117,139
Hitz, K. L., 277,281,306
Holt, A. G. J., 533,535
Huelsman, L. P., 519,526,534

K

Keen, A. W., 519,534
Kerwin, W. J., 526,534
Key, E. L., 532,535
Kim, W. H., 39,62
Koga, T., 394,404
Kuh, E. S., 11,12,62,145,210

L

Lasalle, J., 132,139
Layton, D. M., 224,265
Lee, H. B., 259,260,265
Lefschetz, S., 132,139
Levin, J. J., 274,306
Liou, M. L., 79,81,138

M

MacFarlane, A. G. J., 276,306
McMillan, B., 117,123,124,139
Marshall, T. G., 149,211,478,508
Masani, L., 247,265
Massena, W. A., 145,210
Mastascusa, E. J., 79,138
Mitra, S. K., 511,512,513,534
Moore, J. B., 42,52,62,232,235,243,259,260,261,262,264,265,266,269,306,311,325
Morse, A. S., 519,534

N

Navot, I., 149,211,478,508
Needler, M. A., 79,139
New, W., 534,535
Newcomb, R. W., 6,7,11,12,27,29,31,36,38,42,45,47,51,52,62

O

Ogata, K., 64,138
Oono, Y., 385,404,464,473
Orchard, H. J., 531,535

P

Plant, J. B., 79,138
Popov, V. M., 218,259,261,264,266
Porter, L. E., 276,278,306
Price, C. F., 145,168,210
Purslow, E. J., 201,211

R

Reid, W. T., 274,306
Riddle, A. C., 247,265
Rohrer, R. A., 11,12,62,74,78,138,145,209,210

S

Sage, A. P., 232,235,265
Sallen, R. P., 532,535
Schwarz, R. J., 64,138
Silverberg, N., 79,139
Silverman, L. M., 242,265,316,325,478,508
Su, K. L., 511,534
Subba Rao, A., 79,138

T

Taylor, J., 533,535
Tellegen, B. D. H., 117,139
Thomas, J. B., 247,248,265
Thomas, L. C., 526,529,530,535
Tissi, P., 320,325,330,340,368
Tow, J., 526,529,534

V

Vongpanitlerd, S., 308,325,340,345,368,371,403,404,405,416,418,427,441,444,473

W

Weinberg, L. A., 6,7
Wiener, N., 247,265
Wilson, R. L., 145,210
Wohlers, M. R., 6,7,45,62

Y

Yakubovic, V. A., 218,258,264
Yanagisawa, T., 531,535
Yarlagadda, R., 149,210,405,415,418,441,478,508
Yasuura, K., 464,473
Youla, D. C., 247,248,254,265,270,272,284,306,314,315,317,320,325,330,340,368

Z

Zadeh, L. A., 64,138
Zames, G., 260,261,265
Zuidweg, J. K., 29,62,171,211