IEEE Circuits and Systems Magazine
ISSN 1531-636X
Volume 1, Number 1, First Quarter 2001
[Cover artwork: circuit schematic]
Designing Low-Power Circuits: Practical Recipes (page 6)
High-Speed Data Converters for Communication Systems (page 26)
Conferences and Workshops
8th IEEE International Conference on
Electronics, Circuits and Systems
ICECS’01
September 2–5, 2001
The Westin Dragonara Resort, Malta
http://www.eng.um.edu.mt/microelectronics/icecs2001
ICECS is a major international conference which includes regular, special and poster
sessions on topics covering analogue circuits and signal processing, general circuits and
systems, digital signal processing, VLSI, multimedia and communication, computational
methods and optimization, neural systems, control systems, industrial and biomedical
applications, and electronic education.
General Chair
Dr. Joseph Micallef
Department of Microelectronics
University of Malta
Msida MSD06, Malta
Email: jjmica@eng.um.edu.mt
Technical Program Chair
Prof. Franco Maloberti
Department of Electrical Engineering
Texas A&M University
College Station, Texas 77843
Email: franco@ele.unipv.it
ICECS 2001 Secretariat
Department of Microelectronics, Faculty of Engineering
University of Malta, Msida MSD 06, Malta, Europe
E-mail: icecs01@eng.um.edu.mt
Tel: (+356) 346760; Fax: (+356) 343577
The 2001 IEEE Workshop on
SiGNAL PROCESSING SYSTEMS (SiPS)
Design and Implementation
Antwerp, Belgium
September 26–28, 2001
http://www.imec.be/sips/
General Chair:
Joos Vandewalle
Dept. of Electrical Engineering, ESAT
Univ. of Leuven (K.U.Leuven)
Technical Program Co-chairs:
Francky Catthoor, IMEC, Leuven, Belgium; catthoor@imec.be
Marc Moonen, Univ. of Leuven (K.U.Leuven), Leuven, Belgium; marc.moonen@esat.kuleuven.ac.be
International Conference on
Augmented, Virtual Environments
and Three-Dimensional Imaging
May 30–June 1, 2001
Ornos, Mykonos, Greece
http://www.iti.gr/icav3d/
The International Conference on Augmented, Virtual Environments and Three-Dimensional Imaging is organized by the European Union IST INTERFACE project in collaboration with the European Association for Speech, Signal and Image Processing
(EURASIP). A special session on mixed reality technologies will be organised by the
IST ART.LIVE project. The conference will be held in the picturesque Aegean island of
Mykonos in Greece, at the most delightful time of the year and will consist of sessions
with both invited and contributed papers. The conference proceedings will be published
and distributed to the workshop participants. A selection of the best papers in the conference will be published in a special issue of the Image Communication journal.
Chairmen:
Michael G. Strintzis–Informatics and Telematics Institute-CERTH, Greece
Fabio Lavagetto–University of Genoa, Genoa, Italy
Ferran Marques–UPC, Barcelona, Spain
Arrangements:
Venetia Giagourta
Tel.: +30.31.996349; Fax: +30.31.996398
E-mail: ICAV3D@iti.gr
First Asia-Pacific Workshop on
Chaos Control and Synchronization
Shanghai, June 28-29, 2001
Plenary Speech: Leon O. Chua (University of California at Berkeley, USA)
Workshop Administrative Committee:
G. Ron Chen, Zhi Hong Guan, Zheng Zhi Han, Xinghuo Yu, Jiong Yuan
Contact:
Dr. Guanrong (Ron) Chen, Fellow of the IEEE
Chair Professor and Director
Centre for Chaos Control and Synchronization
Department of Electronic Engineering
City University of Hong Kong
Tat Chee Avenue, Kowloon
Hong Kong SAR, China
Phone: (852) 2788 7922; Fax: (852) 2788 7791
E-mail: gchen@ee.cityu.edu.hk
http://www.ee.cityu.edu.hk/~cccs/apw-ccs.htm
14th Annual IEEE International ASIC/SOC Conference
“SOC in a Networked World”
September 12–15, 2001
http://asic.union.edu
Crystal City Hyatt Hotel
Washington, DC
Driven by the rapid growth of the Internet, communication technologies, pervasive computing, and consumer electronics, Systems-on-Chip (SoC) have started to become a key issue in
today’s electronic industry. The transition from the traditional
Application-Specific Integrated Circuit (ASIC) to SoC has led
to new challenges in design methods, automation, manufacturing, technology, and test.
Conference General Chair
P.R. Mukund
Rochester Institute of Technology
prmeee@rit.edu
Technical Program Chair
John Chickanosky
IBM Microelectronics
chickano@us.ibm.com
For paper submission and updated conference information,
please visit our web site at
http://asic.union.edu
or contact the ASIC/SOC Conference office at 301–527–0900 x104
ECCTD’01: European Conference on Circuit Theory and Design, 2001
“Circuit Paradigm in the 21st Century”
August 28–31, 2001
Espoo, Finland
http://ecctd01.hut.fi
ECCTD’01 Conference will be held at Helsinki University of Technology, in Espoo, Finland (15 minutes by bus from central Helsinki). In addition to the conference technical
program, several social activities are being planned, to allow attendees ample opportunity to enjoy Finland.
Veikko Porra, Conference chair, Helsinki University of Technology (HUT)
Olli Simula, Conference co-chair, HUT
FOR MORE INFORMATION PLEASE CONTACT
Conference Secretariat:
Eventra Ltd., eventra@co.inet.fi
Pietarinkatu 11 B 40
FIN-00140 Helsinki, Finland
Tel. +358 9 6926 949
Fax. +358 9 6926 970
Secretary General:
Timo Veijola, ecctd01@hut.fi
Helsinki University of Technology
P.O. Box 3000, FIN-02015 HUT, Finland
Tel. +358 9 451 2293
Fax. +358 9 451 4818
. . . continued on Page 51
IEEE Circuits and Systems Magazine
Volume 1, Number 1, First Quarter 2001
Designing Low-Power Circuits:
6 Practical Recipes
by Luca Benini, Giovanni De Micheli, and Enrico Macii
The growing market of mobile, battery-powered electronic systems,
such as cellular phones and personal digital assistants, demands
the design of microelectronic circuits with low power dissipation.
High-Speed Data Converters
26 for Communication Systems
by Franco Maloberti
. . . As a result, market challenges favor research on high speed data
converters. In turn, the results achieved lead to new architectural
solutions which create new needs. . . .
IEEE Circuits and Systems Magazine (ISSN 1531–636X) is published quarterly by the Institute of Electrical and Electronics Engineers, Inc. Headquarters: 3 Park Avenue, 17th Floor, New York, NY, 10016–5997. Responsibility for the contents rests upon the authors and not upon the IEEE, the Society, or its members. IEEE Service Center (for orders, subscriptions, address changes): 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855–1331. Telephone: +1 732 981 0060, +1 800
678 4333. Individual copies: IEEE members $10.00 (first copy only), nonmembers $20.00 per copy; $7.00 per member per
year (included in Society fee) for each member of the IEEE Circuits and Systems Society. Subscription rates available
upon request. Copyright and Reprint Permissions: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of the U.S. Copyright law for private use of patrons: 1) those post-1977 articles that
carry a code at the bottom of the first page, provided the per-copy fee indicated in the code is paid through the Copyright
Clearance Center, 222 Rosewood Drive, Danvers, MA 01923; and 2) pre-1978 articles without fee. For other copying,
reprint, or republication permission, write to: Copyrights and Permissions Department, IEEE Service Center, 445 Hoes
Lane, Piscataway, NJ 08855–1331. Copyright © 2001 by the Institute of Electrical and Electronics Engineers, Inc. All rights
reserved. Periodicals postage paid at New York, NY, and at additional mailing offices. Postmaster: Send address changes
to IEEE Circuits and Systems Magazine, IEEE Operations Center, 445 Hoes Lane, Piscataway, NJ, 08855.
Printed in U.S.A.
IEEE Circuits and Systems Magazine
Editor-in-Chief
Michael K. Sain
Electrical Engineering Department
University of Notre Dame
Notre Dame, IN, USA 46556–5637
Phone: (219) 631– 6538
Fax: (219) 631–4393
E-mail: jordan@medugorje.ee.nd.edu
Publication Coordinator
Eric L. Kuehner
E-mail: jordan@medugorje.ee.nd.edu
Columns and Departments
3 Presidential Message
by Hari C. Reddy
5 From the Editor
Coming next issue: Filter Banks....
by Michael K. Sain
37 Technical Committees
DSP within Circuits and Systems
by Paulo S. R. Diniz
40 People on the Move
Kang New Dean at Santa Cruz
Kailath NAS Electee
New Editors Begin Work
42 Executive and Board
Board and Officer Nominations
by George S. Moschytz
Constitution and Bylaw Changes
by K. Thulasiraman
43 Meetings
First SAWCAS Report
by Juan E. Cousseau, Pablo S. Mandolesi, and Sergio L. Netto
ICECS 2000 Report
by Mohamad Sawan and Abdallah Sfeir
APCCAS 2000 Report
by Yong-Shi Wu
47 Awards
Message from Committee Chair
by Bing Sheu
2001 CASS Fellows
52 Calls for Papers and Participation
See also inside front cover.
Editorial Advisory Board
Rui J. P. de Figueiredo–Chair, University of California, Irvine
Guanrong Chen, City University of Hong Kong
Wai-Kai Chen, University of Illinois, Chicago
Leon Chua, University of California, Berkeley
Giovanni De Micheli, Stanford University
George S. Moschytz, Swiss Federal Institute of Technology
Mona Zaghloul, George Washington University
Editorial Board
Orla Feely, University College Dublin
S.Y. (Ron) Hui, City University of Hong Kong
Ravi Ramahandran, Rowan University
Mehmet Ali Tan, Applied Micro Circuits Corporation
Ljiljana Trajkovic, Simon Fraser University
IEEE Publishing Services
Robert Smrek, Production Director
IEEE Service Center, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855–1331, USA
Phone: (732) 562–3944
Frequency of Publication
Quarterly
Magazine Deadlines
Final materials for the CAS Magazine must be received by the Editor on the following dates:
Issue: Due Date
First Quarter: February 1
Second Quarter: May 1
Third Quarter: August 1
Fourth Quarter: November 1
Scope: A publication of the IEEE Circuits and Systems Society, IEEE Circuits and Systems Magazine publishes articles presenting novel
results, special insights, or survey and tutorial treatments on all topics having to do with the theory, analysis, design (computer aided design),
and practical implementation of circuits, and the application of circuit theoretic techniques to systems and to signal processing. The coverage of this field includes the spectrum of activities from, and including, basic scientific theory to industrial applications. Submission of
Manuscripts: Manuscripts should be in English and may be submitted electronically in pdf format to the publications coordinator at
jordan@medugorje.ee.nd.edu. They should be double-spaced, 12 point font, and not exceed 20 pages including figures. An IEEE copyright
form should be included with the submission. Style Considerations: 1) articles should be readable by the entire CAS membership;
2) average articles will be about ten published pages in length; 3) articles should include efforts to communicate by graphs, diagrams, and
pictures—many authors have begun to make effective use of color, as may be seen in back issues of the Newsletter, available at www.nd.edu/~stjoseph/newscas/; 4) equations should be used sparingly.
Presidential Message
Hari C. Reddy
President, IEEE Circuits and Systems Society
Launching of the New
IEEE Circuits and Systems Magazine
As CAS Society president for this year I am very
pleased to greet our Society members all over
the globe through the very first issue of our new CAS
Magazine. My thanks go to all those who worked
very hard and with great diligence over the past few
years in getting this magazine approved by the IEEE
and launched in the present attractive form. My sincere hope is that our magazine will be accepted very
soon as one of the best technical magazines within
the IEEE and beyond. I hope it will serve uniformly
all our Society’s technical disciplines and help in the
professional development of our members. I always
wished that CAS Society would have a magazine
attractive enough to carry in our office briefcases as
rewarding reading material. I am sure that under
the dynamic and dedicated leadership of the founding editor-in-chief Professor Mike Sain, we will
accomplish this. I wish Mike and his talented editors and editorial board the best of luck and on behalf of the Society express our “Many Thanks”.
Challenges and Promise for
Circuits and Systems Society in 2001
According to purists, the new year 2001 is also
the beginning of the third millennium. It is an exceptional privilege, pleasure and honor for me to
serve this year as president of the IEEE Circuits and
Systems Society. I wish to start this message by expressing my warmest greetings and thanks to each
and every member volunteer of the CAS Society for
your continuing contributions in advancing our Circuits and Systems Society.
The Circuits and Systems Society is passing
through an exciting period in its 50+ year history.
The Society’s member talents are continuously providing the foundation for technological innovations
in electrical, electronics, computer and information
engineering. The Society presently enjoys a sound
financial position to initiate
new ventures that would
have a lasting impact on
CAS research. The primary goal this year for me
is to provide the leadership
and encouragement to our
volunteers so that the
IEEE-CAS Society remains the most prestigious
organization in the world
for research and technology developments in the
circuits and systems area. The second goal is to promote our Society as a truly global professional organization uniquely serving its members in the promotion of technical inventions, educational innovations, and professional development of its members.
Let me identify the following to accomplish this:
1. The Society should be highly responsive to
the challenges of emerging technologies such as
nanoelectronics, systems on a chip, MEMS, circuits
and systems for mobile communications, computing and related information technologies. The challenges coming from both industrial and academic
sectors must be addressed.
2. While focusing on new technologies, we
should continue to provide highest recognition to the
researchers working on basic fundamental CAS
theories. From the very inception, laying necessary
mathematical and physical foundations has been a
source of pride and brought great respect to our
Society. Consequently, during the past fifty years,
our Society was able to serve as a great incubator
of the fundamental ideas that subsequently impacted
in a very positive fashion the present day electrical
engineering. This gives us immense confidence in
facing the challenges of new emerging technologies
of the new century. So, the Society must create, promote and maintain the necessary environment for
research and development that underpins the solutions to these problems.
3. The IEEE-CAS Society as an international
organization must expedite the worldwide dissemination of its basic research results as well as results
pertaining to technology development that effectively
and in a practical way address these solutions.
4. Our Society must create the necessary infrastructure for the professional development of our
members. This could be in the form of well-planned
continuing education course-modules, effective utilization of internet resources, book publications,
short courses, and specialized workshops. We also
need to provide our publications on the WWW and
try to integrate our CASS website with IEEE Xplore.
5. It is necessary to mention the importance of
supporting the activities at a local chapter and regional level. If managed properly the chapters can
initiate many activities for their local area and support the needs of the local members by initiating
workshops, small conferences, and so forth. Some
of these may ultimately “grow” to regular and international events. The chapter activities will also
increase our membership and can be a source of future CAS leaders. Our Society has a very dynamic
distinguished lecturer program and chapters should
use this opportunity that is almost free.
To achieve the above objectives we need to excel in all our Society’s divisional and regional activities. The most important of all is the maintenance
of the quality of our transactions that are relevant
to both academic and industrial sectors. Faster and
more thorough review and publication of submitted papers is very critical. I am happy about the recent
steps taken to address these problems. This will continue to be a top priority this year. We have strong
technical divisions and with the development of
workshops in emerging technologies, we are also
promoting collaborative efforts with other IEEE
societies. For the professional development of our
membership, it is very important that we continue
to focus on innovative educational programs. By
following all these steps, we will definitely attract
new younger membership to our Society both from
industry and academia.
One of the greatest strengths in our Society is
the way we conduct our business. We follow a transparent and democratic process. The successive Society administrative teams have been highly receptive to new ideas from the Board of Governors as
well as the general membership. This feature is a rarity and is a blessing that needs to be preserved.
Concluding Comments
Since its inception, the Circuits and Systems
Society has been at the forefront of many of the significant advances in electrical engineering that have
made a welcome difference. As an example, remember the contribution that our researchers made to the
development of CAD of electronic circuits—schematic capture, SPICE, and much more. This is taken
for granted now, but was initiated by CAS members.
This is due to the dedicated scientific leadership
provided by many of our distinguished members
over the past fifty plus years. All this enabled the
CAS Society to be one of the most prestigious societies within the IEEE from the very beginning. Our
challenge is to maintain this and also make it a mainstream Society for new emerging technical areas of
the 21st century. The CAS publications, conferences, workshops, technical committees, short
courses and so forth should be geared to meet this
challenge. Promoting these requires demanding
leadership and I look forward to working very
closely during this year with the members of the
Board of Governors and a wide spectrum of important leaders in our Society. The great resource to be
utilized is the abundance of outstanding member
talents and ingenuity spread over all continents. I
sincerely appreciate receiving comments, suggestions,
and ideas from our membership at any time (California State University at Long Beach, Long Beach, CA
90840; Email: hreddy@csulb.edu, h.reddy@ieee.org;
Tel: +562 985 5106; Fax: +562 985 8887). You will
also find on the inside back cover of this issue the
2001 organizational roster for the Society consisting of volunteers from all parts of the world.
I conclude with the ideal of making the IEEE-CAS Society of tomorrow better than today in fundamental theoretical discoveries, technical innovations and professional development of our talented
global membership.
From the Editor
Michael K. Sain
Editor-in-Chief, IEEE Circuits and Systems Magazine
Welcome Aboard....
Welcome to the IEEE Circuits and Systems
Magazine! The officers of the Society, and
the members of the Board of Governors, hope that
all the members of the Society will enjoy reading,
and will benefit from, the contents of this new magazine.
This publishing venture has many, many goals.
In fact, just about everyone has some goals for the
Circuits and Systems Magazine. I will ask pardon
in advance if I omit anyone’s special goal, but nonetheless I do wish to enumerate a few of them—those
of which I am aware through meetings and discussions over the last three years.
First, on the one hand, the CASS is a diverse
society, with substantial numbers of members,
which we might call “subsocieties”, pursuing very
well defined “subfields of interests”. We find evidence of these subsocieties in the multiplicity of
transactions which are published either by the CASS
alone, or in cooperation with other societies. More
evidence is seen in the great range of meetings,
workshops, and conferences sponsored by CASS,
and occurring around the clock and around the
world. These venues provide a means for the
members with well defined special interests to
interact with their colleagues and to exchange ideas
on their particular frontiers. For the member who
wishes to learn of the CASS as a whole, and to
be brought up to date about developments in all
the subfields of interest, there is of course the
ISCAS, long-time flagship of the fleet of CASS
meetings, and the rapidly expanding set of regional get-togethers with a similar motivation.
But there was not within the Society a single publication which could serve the same intent; and
one goal of the CAS Magazine is to provide such
an outlet.
Second, on the other hand, is the remarkable
commonality shared by all of us, due in part to the
ubiquitous role played by electrical circuits in all of
modern engineering. For a nice history of part of this
viewpoint, please see the piece prepared by Paulo
Diniz in our technical committees department for the
present issue. We have a proud record of participating in the launching of new technical areas within
the IEEE. No doubt, in the future, we will help to
generate some more. In some sense, each of the
subsocieties is uncovering, developing, and reporting material which can have an influence
upon us all; and another goal of the CAS Magazine is to make that sort of information available
to each and every member of CASS. We are of
course a far-flung community, and we have a finite page budget, but as time goes on we plan to
steadily pursue and to overtake this goal. We need
to consider at least three facets as we take on this
task: the transactions, the conferences, and the
technical committees. Great are the opportunities
to contribute toward this goal, and I hope that
some of you will be inclined to join in reaching
out to other members with information of general
interest.
Third, the activities of the Executive Committee, the Board of Governors, and the various Society operating committees from time to time require
communication to the whole membership, perhaps
to report milestones or decisions, perhaps to explain
new opportunities, and perhaps to celebrate achievements. This is the social side of our technical community, which can range from being a necessity to
being a joy!
Finally, I am happy to say that CASS has made
the decision to present this new publication to all
its members as a service, with no charge.
We here in the office, Eric Kuehner and myself,
hope that you all enjoy the CAS Magazine. I can
certainly say that I am honored to have been named
the founding editor-in-chief. We will be doing all
that we can to make your reading as informative and
pleasant as possible. Enjoy!
Designing Low-Power Circuits:
Practical Recipes
by Luca Benini *
Giovanni De Micheli
Enrico Macii
Abstract—The growing market of mobile, battery-powered electronic systems (e.g., cellular phones, personal digital assistants, etc.) demands the design of microelectronic circuits with low power dissipation. More generally, as density, size, and complexity of the chips continue to increase, the difficulty in providing adequate cooling might either add significant cost or limit the functionality of the computing systems which make use of those integrated circuits.
In the past ten years, several techniques, methodologies and tools for designing low-power circuits have been presented in the scientific literature. However, only a few of them have found their way in current design flows.
The purpose of this paper is to summarize, mainly by way of examples, what in our experience are the most trustworthy approaches to low-power design. In other words, our contribution should not be intended as an exhaustive survey of the existing literature on low-power design; rather, we would like to provide insights a designer can rely upon when power consumption is a critical constraint.
We will focus solely on digital circuits, and we will restrict our attention to CMOS devices, this technology being the most widely adopted in current VLSI systems.
Introduction
Power dissipation has become a
critical design metric for an increasingly large number of VLSI circuits.
The exploding market of portable electronic appliances fuels the demand for
complex integrated systems that can be
powered by lightweight batteries with
long times between re-charges (for instance, the plot in Fig. 1 shows the
evolution of the world-wide market for
mobile phones). Additionally, system
cost must be extremely low to achieve
high market penetration. Both battery
lifetime and system cost are heavily
impacted by power dissipation. For
these reasons, the last ten years have
witnessed a soaring interest in low-power design.
* Contact Person: Luca Benini, Università di Bologna, DEIS, Viale Risorgimento 2, Bologna, ITALY 40136; Phone: +39-051-209.3782; Fax: +39-051-209.3073; E-mail: lbenini@deis.unibo.it
The main purpose of this paper is
to provide a few insights into the world
of low-power design. We do not intend
to review the vast literature on the
topic (the interested reader is referred
to the many available surveys, e.g., [1–
3]). Our objective is to give the readers a few basic concepts to help understanding the “nature of the beast”, as
well as to provide “silicon-proven recipes” for minimizing power consumption in large-scale digital integrated circuits.
Figure 1. Global market for cellular phones.
The power consumed by a circuit
is defined as p(t) = i(t)v(t), where i(t)
is the instantaneous current provided
by the power supply, and v(t) is the
instantaneous supply voltage. Power
minimization targets maximum instantaneous power or average power. The
latter impacts battery lifetime and heat-dissipation system cost; the former constrains power grid and power supply circuit design. We will focus on
average power in the remainder of the
paper, even if maximum power is also
a serious concern. A more detailed
analysis of the various contributions to
overall power consumption in CMOS
circuits (the dominant VLSI technology)
is provided in the following section.
It is important to stress from the
outset that power minimization is
never the only objective in real-life
designs. Performance is always a critical metric that cannot be neglected.
Unfortunately, in most cases, power
can be reduced at the price of some
performance degradation. For this reason, several metrics for joint power-performance have been proposed in
the past. In many designs, the power-delay product (i.e., energy) is an acceptable metric. Energy minimization
rules out design choices that heavily
compromise performance to reduce
power consumption. When performance has priority over power consumption, the energy-delay product
[4] can be adopted to tightly control
performance degradation. Alternatively, we can take a constrained optimization approach. In this case, performance degradation is acceptable up
to a given bound. Thus, power minimization requires optimal exploitation of
the slack on performance constraints.
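As a concrete illustration of how these metrics rank design points differently, the short Python sketch below compares two hypothetical operating points using power, energy (power times delay), and the energy-delay product; all numbers are invented for the example and do not refer to any design mentioned in this article.

# Illustrative comparison of power, energy and energy-delay metrics for
# two hypothetical design points (all numbers are made up).

def metrics(power_mw, delay_ns):
    """Return (power [mW], energy [pJ], energy-delay product [pJ*ns])."""
    energy_pj = power_mw * delay_ns   # 1 mW * 1 ns = 1 pJ
    edp = energy_pj * delay_ns        # energy-delay product
    return power_mw, energy_pj, edp

designs = {
    "baseline":       (100.0, 10.0),  # 100 mW, 10 ns critical path
    "voltage-scaled": (40.0, 16.0),   # lower power, but slower
}

for name, (p, d) in designs.items():
    power, energy, edp = metrics(p, d)
    print(f"{name:14s} power={power:6.1f} mW  "
          f"energy={energy:7.1f} pJ  EDP={edp:8.1f} pJ*ns")

# The voltage-scaled point wins on power and energy, but the energy-delay
# product penalizes its longer delay, reflecting the tighter control on
# performance degradation discussed in the text.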
Besides power vs. performance,
another key trade-off in VLSI design
is power vs. flexibility. Several authors have observed that applicationspecific designs are orders of magnitude more power efficient than general-purpose systems programmed to
perform the same computation [1].
On the other hand, flexibility (programmability) is often an indispensable requirement, and designers must
strive to achieve maximum power
efficiency without compromising
flexibility.
The two fundamental trade-offs
outlined above motivate our selection
of effective low-power design techniques and illustrative cases. We selected successful low power design
examples (real-life products) from
four classes of circuits, spanning the
flexibility vs. power-efficiency tradeoff. To maintain uniformity, we chose
designs targeting the same end-market, namely multimedia and Internet-enabled portable appliances, where
power efficiency is not outweighed
by performance requirements. Our illustrative examples are (in order of
decreasing flexibility):
1. Transmeta’s Crusoe processor
family [5]. This design is representative of the class of general-purpose
x86-compatible processors for high-end portable appliances, and it features aggressive power optimizations
as key market differentiator.
2. Intel’s StrongARM family [6].
This design is representative of the
class of low-power integrated coreand-peripherals for personal digital
assistants and palmtop computers.
3. Texas Instruments TMS320C5x
family [7]. This design is a typical
very-low power digital signal processor for baseband processing in wireless communication devices (i.e.,
digital cellular phones).
4. Toshiba’s MPEG4 Codec [8].
This is a typical application-specific
system-on-chip for multimedia support,
targeted for PDAs and digital cameras.
Needless to say, our selection neither implies any endorsement of the
commercial products derived from
the above mentioned designs, nor implies any form of comparison or
benchmarking against competing
products.
The remainder of this paper is organized as follows. In Basic Principles, we analyze in some detail the
sources of power consumption in
CMOS technology, and we introduce
the basic principles of power optimization. In the section Technology and
Circuit Level Optimizations we de-
scribe a few successful power optimization methods at the technology and
circuit level. The section Logic and
Architecture Level Optimizations covers logic and architecture-level optimizations, while Software and System
Level Optimizations deals with software and system-level optimizations.
Whenever possible, we will describe
the optimization techniques with reference to the example designs.
Basic Principles
CMOS is, by far, the most common technology used for manufacturing digital ICs. There are 3 major
sources of power dissipation in a
CMOS circuit [9]:
P = PSwitching + PShort-Circuit + PLeakage
PSwitching, called switching power, is due
to charging and discharging capacitors
driven by the circuit. PShort-Circuit, called
short-circuit power, is caused by the
short circuit currents that arise when
pairs of PMOS/NMOS transistors are
conducting simultaneously. Finally,
PLeakage, called leakage power, originates from substrate injection and subthreshold effects. For older technologies (0.8 µm and above), PSwitching was
predominant. For deep-submicron processes, PLeakage becomes more important.
Design for low-power implies the
ability to reduce all three components
of power consumption in CMOS circuits during the development of a low
power electronic product. Optimizations can be achieved by facing the
power problem from different perspectives: design and technology. Enhanced design capabilities mostly impact switching and short-circuit power;
technology improvements, on the
other hand, contribute to reductions of
all three components.
Switching power for a CMOS gate
working in a synchronous environment
is modeled by the following equation:
PSwitching = ½ · CL · VDD² · fClock · ESW
where CL is the output load of the gate,
VDD is the supply voltage, fClock is the
clock frequency and ESW is the switching activity of the gate, defined as the
probability of the gate’s output to make
a logic transition during one clock
cycle. Reductions of PSwitching are achievable by combining minimization of the
parameters in the equation above.
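As a quick numerical illustration of the equation above, the sketch below evaluates PSwitching for a single gate at a few supply voltages; the load, clock frequency and switching activity are assumed values chosen only to show the quadratic dependence on VDD.

# Evaluate PSwitching = 1/2 * CL * VDD^2 * fClock * ESW for assumed values.

def switching_power(c_load_f, vdd_v, f_clock_hz, e_sw):
    """Average switching power of one gate, in watts."""
    return 0.5 * c_load_f * vdd_v ** 2 * f_clock_hz * e_sw

C_L   = 50e-15   # 50 fF output load (hypothetical)
F_CLK = 200e6    # 200 MHz clock
E_SW  = 0.2      # 20% probability of an output transition per cycle

for vdd in (3.3, 2.5, 1.5):
    p = switching_power(C_L, vdd, F_CLK, E_SW)
    print(f"VDD = {vdd:3.1f} V -> PSwitching = {p * 1e6:6.2f} uW")

# Halving VDD cuts switching power by roughly 4x, which is why supply
# voltage scaling has historically been the first knob designers turn.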
Historically, supply voltage scaling
has been the most adopted approach to
power optimization, since it normally
yields considerable savings thanks to
the quadratic dependence of PSwitching
on VDD. The major short-coming of this
solution, however, is that lowering the
supply voltage affects circuit speed. As
a consequence, both design and technological solutions must be applied in
order to compensate the decrease in
circuit performance introduced by reduced voltage. In other words, speed
optimization is applied first, followed
by supply voltage scaling, which
brings the design back to its original
timing, but with a lower power requirement.
A similar problem, i.e., performance decrease, is encountered when
power optimization is obtained
through frequency scaling. Techniques that rely on reductions of the
clock frequency to lower power consumption are thus usable under the
constraint that some performance
slack does exist. Although this may
seldom occur for designs considered
in their entirety, it happens quite often that some specific units in a larger
architecture do not require peak performance for some clock/machine
cycles. Selective frequency scaling
(as well as voltage scaling) on such
units may thus be applied, at no penalty in the overall system speed.
Optimization approaches that
have a lower impact on performance,
yet allowing significant power savings, are those targeting the minimization of the switched capacitance
(i.e., the product of the capacitive
load with the switching activity).
Static solutions (i.e., applicable at
design time) handle switched capacitance minimization through area optimization (that corresponds to a decrease in the capacitive load) and
switching activity reduction via exploitation of different kinds of signal
correlations (temporal, spatial,
spatio-temporal). Dynamic techniques, on the other hand, aim at
eliminating power wastes that may be
originated by the the application of
certain system workloads (i.e., the
data being processed).
Static and dynamic optimizations
can be achieved at different levels of
design abstraction. Actually, addressing the power problem from the very
early stages of design development
offers enhanced opportunities to obtain significant reductions of the
power budget and to avoid costly redesign steps. Power conscious design
flows must then be adopted; these
require, at each level of the design hierarchy, the exploration of different
alternatives, as well as the availability of power estimation tools that
could provide accurate feed-back on
the quality of each design choice.
In the sequel, we will follow a
bottom-up path to present power optimization solutions that have found
their way in the development of realistic designs; on the other hand, issues
related to power modeling and estimation will not be covered; the interested reader may refer to survey articles available in the literature (see,
for example, [10–13]).
Technology and Circuit Level
Optimizations
VLSI technology scaling has
evolved at an amazingly fast pace for
the last thirty years. Minimum device
size has kept shrinking by a factor k
= 0.7 per technology generation. The
basic scaling theory, known as constant field scaling [14], mandates the
synergistic scaling of geometric features and silicon doping levels to
maintain constant field across the gate
oxide of the MOS transistor. If devices are constant-field scaled, power
dissipation scales as k² and power density (i.e., power dissipated per unit area) remains constant, while speed increases as k.
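A minimal sketch of this idealized constant-field trend, using the k = 0.7 shrink factor quoted in the text; the starting values are normalized and purely illustrative.

# Idealized constant-field scaling: per-device power scales as k^2 and
# power density stays constant (k = 0.7 per generation, as in the text).

k = 0.7
generations = ["gen 0", "gen 1", "gen 2", "gen 3", "gen 4"]

power = 1.0    # normalized per-device power at the starting generation
density = 1.0  # normalized power density (power per unit area)

for gen in generations:
    print(f"{gen}: relative device power = {power:5.3f}, "
          f"power density = {density:5.3f}")
    power *= k ** 2   # device area shrinks by k^2 while density is constant
    # density stays 1.0 under ideal constant-field scaling

# Real parts (Fig. 2) deviate from this ideal because die sizes grew and
# supply voltages were not reduced as constant-field scaling prescribes.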
Figure 2. Power density (squares) and power (diamonds) vs. technology generation.
In a constant-field scaling regime,
silicon technology evolution is probably the most effective way to address
power issues in digital VLSI design.
Unfortunately, as shown in Fig. 2,
power consumption in real-life integrated circuits does not always follow
this trend. Figure 2 shows average
power consumption and power density
trends for Intel Microprocessors, as a
function of technology generations
[14, 15]. Observe that both average
power and power density increase as
minimum feature size shrinks.
This phenomenon has two concomitant causes. First, die size has
been steadily increasing with technology, causing an increase in total average power. Second, voltage supply has
not been reduced according to the directives of constant-field scaling. Historically, supply voltage has scaled
much more slowly than device size
because: (i) supply voltage levels have
been standardized (5 V, then 3.3 V), (ii)
faster transistor operation can be obtained by allowing the electric field to
rise in the device (i.e., “overdriving”
the transistor). This approach is
known as constant voltage scaling,
and it has been adopted in older silicon technologies, up to the 0.8 µm
generation.
Probably, the most widely known
(and successful) power-reduction technique, known as power-driven voltage
scaling [3], stems from the observation that constant-voltage scaling is
highly inefficient from the power
viewpoint, and that we can scale down
the voltage supply to trade off performance for power. Depending on the
relative weight of performance with
respect to power consumption constraints, different voltage levels can be
adopted. It is important to observe,
however, that transistor speed does not
depend on supply voltage VDD alone,
but on the gate overdrive, namely the
difference between voltage supply and
device threshold voltage (VDD – VT).
Hence, several authors have studied
the problem of jointly optimizing VDD
and VT to obtain minimum energy, or
minimum energy-delay product [1, 3].
Accurate modeling of MOS transistor currents is paramount for achieving acceptable scaled VDD and VT values. The simplistic quadratic model of
CMOS ON-current IDS = S' . (VGS – VT)2
leads to overly optimistic switching
speed estimates for submicrometric
transistors. In short-channel transistors, the velocity saturation of the electrons traveling between drain and
source dictates a different current
equation: IDS = S' . (VGS – VT)m, with
1 ≤ m < 2 (e.g., m = 1.3). Furthermore,
another important characteristic of
CMOS transistors is sub-threshold
conduction. When VGS < VT, the current is not zero, but it follows an exponential law: IDS = R' · e^(VGS/V0), V0 being the technology-dependent subthreshold slope.
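The current regimes just described can be collected in a small sketch; S', R', m and V0 are technology-dependent constants, and the placeholder values below are assumptions for illustration, not data for any real process.

import math

# Simplified MOS ON-current models from the text. S', R', m and V0 are
# technology-dependent constants; the values below are placeholders.

S_PRIME = 1e-4   # transconductance-like factor (hypothetical)
R_PRIME = 1e-9   # subthreshold pre-factor in amperes (hypothetical)
M       = 1.3    # velocity-saturation exponent, 1 <= m < 2
V_T     = 0.4    # threshold voltage in volts
V_0     = 0.08   # subthreshold slope parameter in volts (hypothetical)

def ids(vgs):
    """Drain current vs. gate-source voltage across the two regimes."""
    if vgs < V_T:
        # Subthreshold conduction: exponential in VGS
        return R_PRIME * math.exp(vgs / V_0)
    # Above threshold: velocity-saturated short-channel model
    # (the long-channel quadratic model would use exponent 2 instead of M)
    return S_PRIME * (vgs - V_T) ** M

for vgs in (0.0, 0.2, 0.4, 0.8, 1.2):
    print(f"VGS = {vgs:3.1f} V -> IDS = {ids(vgs):9.3e} A")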
Velocity saturation favors supply
down-scaling, because it implies diminishing returns in performance
when supply voltage is too high. On
the other hand, sub-threshold conduction limits down-scaling, because of
increased static current leaking
through nominally OFF transistors.
Intuitively, optimization of threshold
and supply voltage requires balancing ON-current and OFF-current,
while at the same time maintaining
acceptable performance. Optimizing
VDD and VT for minimum energy delay product leads to surprisingly low
values of both. VDD should be only
slightly larger than 2VT, with VT a few
hundred millivolts. This approach is
known as ULP, or ultra-low-power
CMOS [16].
ULP CMOS is not widely used in
practice for two main reasons. First,
threshold control is not perfect in
real-life technology. Many transistors
may have sub-threshold currents that
are orders of magnitude larger than
expected if their threshold is slightly
smaller than nominal (remember that
sub-threshold current is exponential
in VGS). Second, sub-threshold current is exponentially dependent on
temperature, thereby imposing tight
thermal control of ULP CMOS,
which is not cost-effective. Nevertheless, aggressive voltage scaling is
commonplace in low-power VLSI
circuits: voltage supply and thresholds are not scaled to their ultimate
limit, but they are significantly re-
duced with respect to constant-voltage scaling. See Example 1.
Figure 3. Supply voltage (diamonds) and threshold voltage (squares) vs. technology generation.
Power-driven voltage scaling was
a “low-hanging fruit” in older technologies. Unfortunately, starting from
the 0.8 µm generation, the situation
has changed. As shown in Fig. 3, supply voltage has started to decrease
with shrinking device size even for
high-performance transistors [15].
There are several technological reasons for this trend, which are outside
the scope of this paper. From the
point of view of power reduction,
technology-driven voltage scaling
has two important consequences. Aggressively scaled transistors with
minimum channel length are becoming increasingly leaky in the OFF
state (getting closer to ULP), and
there is not much room left for further voltage scaling.
Leakage power is already a major concern in current technologies,
because it impacts battery lifetime
even if the circuit is completely idle.
Quiescent power specifications tend
to be very tight. In fact, CMOS technology has traditionally been extremely power-efficient when transistors are not switching, and system designers expect low leakage from
CMOS chips. To meet leakage power
constraints, multiple-threshold and
variable threshold circuits have been
proposed [3]. In multiple-threshold
CMOS, the process provides two different thresholds. Low-threshold transistors are fast and leaky, and they are
employed on speed-critical sub-circuits. High-threshold transistors are
slower but exhibit low sub-threshold
leakage, and they are employed in noncritical units/paths of the chip.
Unfortunately, multiple-threshold
techniques tend to lose effectiveness as
more transistors become timing-critical.
Example 1. The StrongARM processor was first designed
in a three-metal, 0.35 µm CMOS process, which had been
originally developed for high-performance processors (the
DEC Alpha family). The first design decision was to reduce
supply voltage from 3.45 V to 1.5 V, with threshold voltage
VTN = |VTP| = 0.35 V, thus obtaining a 5.3x power reduction.
The performance loss caused by voltage scaling was acceptable, because StrongARM has much more relaxed performance specifications than Alpha.
As a second example, the research prototype of the
TMS320C5x DSP adopted a supply voltage VDD = 1 V in a
0.35 µm technology. The aggressive VDD value was chosen
by optimizing the energy-delay product.
Example 2. The TMS320C5x DSP prototype adopted a
dual-threshold CMOS process, in order to provide acceptable
performance at the aggressively down-scaled supply voltage
of VDD = 1 V. The nominal threshold voltages were 0.4 V and
0.2 V for slow and fast transistors, respectively. Leakage current for the high-VT transistors is below 1 nA/µm. For the low-VT transistors, leakage current is below 1 µA/µm (a three-orders-of-magnitude difference!). The drive current of low-VT
transistors is typically twice that of the high-VT devices.
The MPEG4 Video Codec prototype adopted the variablethreshold voltage scheme to reduce power dissipation. Substrate biasing is exploited to dynamically adjust the threshold: VT is controlled to 0.2 V in active mode and to 0.55 V
when the chip is in stand-by mode.
Variable-threshold circuits overcome this shortcoming by dynamically
controlling the threshold voltage of
transistors through substrate biasing.
When a variable-threshold circuit becomes quiescent, the substrate of
NMOS transistors is negatively biased,
and their threshold increases because
of the well known body-bias effect. A
similar approach can be taken for
PMOS transistors (which require positive body bias). Variable-threshold circuits can, in principle, solve the quiescent leakage problem, but they re-
quire stand-by control circuits that
modulate substrate voltage. Needless
to say, accurate and fast body-bias
control is quite challenging, and requires carefully designed closed-loop
control [3]. See Example 2.
In analogy with threshold voltage, supply voltage can also be controlled to reduce power, albeit in the
limited ranges allowed in highly
scaled technologies. Multiple-voltage
and variable voltage techniques have
been developed to this purpose [3]. In
multiple-voltage circuits two or more
power supply voltages are distributed
on chip. Similarly to the multiplethreshold scheme, timing-critical
transistors can be powered at a high
voltage, while most transistors are
connected to the low voltage supply.
Multiple voltages are also frequently
used to provide standard voltage levels (e.g., 3.3 V) to input-output circuits, while powering on-chip internal logic at a much lower voltage to
save power. The main challenges in
the multiple-voltage approach are in
the design of multiple power distribution grids and of power-efficient
level-shifters to interface low-voltages with high-voltage sections.
Example 3. Supply voltage is closed-loop controlled in both the
MPEG4 Codec core and the Crusoe Processor. Supply voltage control in
the MPEG4 helps in compensating process fluctuations and operating environment changes (e.g., temperature changes). Variable-voltage operation
in Crusoe is more emphasized: it automatically lowers the supply voltage
when the processor is under-loaded. From the performance viewpoint, the
processor gradually slows down when its full performance is not needed,
but its supply voltage and clock speed are rapidly raised when the workload
increases.
In both MPEG4-core and Crusoe, frequency and power supply are synergistically adjusted thanks to a clever feed-back control loop: a replica of
the critical path is employed to estimate circuit slow-down in response to
voltage down-scaling, and clock frequency is reduced using a PLL that locks
on the new frequency.
Variable-voltage optimizations
allow modulating the power supply
dynamically during system operation.
In principle, this is a very powerful
technique, because it gives the possibility to trade off power for speed
at run time, and to finely tune performance and power to non-stationary
workloads. In practice, the implementation of this solution requires
considerable design ingenuity. First,
voltage changes require non-negligible time, because of the large time
constants of power supply circuits.
Second, the clock speed must be varied consistently with the varying
speed of the core logic, when supply
voltage is changed. Closed-loop feedback control schemes have been
implemented to support variable supply voltage operation. See Example 3.
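A highly simplified sketch of such a closed-loop scheme: a controller picks the lowest supply/frequency operating point (from a small hypothetical table) that still covers the current workload. The table, the one-MIPS-per-MHz assumption and the control policy are illustrative assumptions, not the actual Crusoe or MPEG4 controllers.

# Toy variable-voltage controller: pick the lowest (VDD, fClock) operating
# point that still covers the current workload. All values are hypothetical.

OPERATING_POINTS = [   # (VDD in volts, fClock in MHz), slowest first
    (0.9, 200),
    (1.1, 400),
    (1.3, 600),
    (1.5, 700),
]

def select_point(required_mips):
    """Return the slowest point whose frequency covers the load,
    crudely assuming one MIPS per MHz."""
    for vdd, freq in OPERATING_POINTS:
        if freq >= required_mips:
            return vdd, freq
    return OPERATING_POINTS[-1]   # saturate at the fastest point

def relative_power(vdd, freq):
    """Switching power relative to the fastest point (P ~ VDD^2 * f)."""
    v_max, f_max = OPERATING_POINTS[-1]
    return (vdd ** 2 * freq) / (v_max ** 2 * f_max)

for load in (150, 380, 650):   # workload samples in MIPS
    vdd, freq = select_point(load)
    print(f"load {load:3d} MIPS -> VDD = {vdd:.1f} V, f = {freq} MHz, "
          f"power ~ {relative_power(vdd, freq):.2f} of max")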
The techniques based on voltage
scaling described above require
significant process and system support, which imply additional costs
that can be justified only for largevolume applications. Less aggressive
circuit-level approaches, that do not
require ad-hoc processes can also be
successful. Among the proposals
available in the literature, library cell
design and sizing for low power have
gained wide-spread acceptance. From
the power viewpoint, probably the
most critical cells in a digital library
are the sequential primitives, namely,
latches and flip-flops. First, they are
extremely numerous in today’s
deeply pipelined circuits; and second,
they are connected to the most active
network in the chip, i.e., the clock.
Clock drivers are almost invariably
the largest contributors to the power
budget of a chip, primarily because of
the huge capacitive load of the clock
distribution network. Flip-flop (and
latch) design for low power focuses
Figure 4. Low-power FF used in the StrongARM design.
on minimizing clock load and reducing internal power when the clock
signal is toggled. Significant power
reductions have been achieved by
carefully designing and sizing the
flip-flops [1].
Clock power optimization is effectively pursued at the circuit level also
by optimizing the power distribution
tree, the clock driver circuits and the
clock generation circuitry (on-chip
oscillators and phase-locked loops)
[3]. Higher abstraction-level approaches to clock power reduction are
surveyed in the next section. See Example 4.
Transistor sizing is also exploited
to minimize power consumption in
combinational logic cells. Rich libraries with many available transistor sizes
are very useful in low-power design,
because they help synthesis tools in
achieving optimum sizing for a wide
range of gate loads. Power savings can
be obtained by adopting non-standard
logic implementation styles such as
pass-transistor logic, which can reduce
the number of transistors (and, consequently the capacitive load), for implementing logic functions which are
commonly found in arithmetic units
(e.g., exclusive-or, multiplexers).
Example 4. The
StrongARM processor adopted the
edge-triggered flip-flop of Figure 4 to
reduce clock load
with respect to the
flow-through latches
used in the Alpha
processor. The flip-flop is based on a
differential structure, similar to a
sense amplifier. Notice that the internal
structure of the FF is
quite complex in
comparison with the
simple flow-through
latches in Alpha.
However, the FF has
reduced clock load
(only three transistors). This key advantage gave a 1.3 x
overall power reduction over the
latch-based design.
Example 5. Static power minimization was a serious concern for all our example designs. In the
MPEG4 Codec, which adopted a dual supply voltage scheme, significant static power can be dissipated when high-supply gates are driven by a low-supply gate, because the driver cannot completely
turn off the pull-up transistors in the fanout gates. For this reason, special level shifter cells were designed that exploited a regenerative cross-coupled structure similar to a sense amplifier to restore full
swing in the high-supply section, and minimize static power.
Leakage power was the main concern in the StrongARM design, which could not benefit from multiple threshold or variable threshold control. To reduce leakage power in gates outside the critical path,
non-minimum length transistors were employed in low-leakage, low-performance cells.
Example 6. The
multiply-accumulate
(MAC) unit of the
StrongARM processor is based on a
Wallace-tree multiplier coupled with a
carry-lookahead
adder. The Wallace-tree architecture was
chosen because it is
very fast, but also because it has low dynamic power consumption in the
carry-save adder tree.
The improvement
comes from a sizable
reduction in spurious
switching, thanks to
path delay balancing
in the Wallace-tree. A
23% power reduction
(as well as a 25%
speed-up) is achieved
by the Wallace-tree
architecture with respect to the array
multiplier.
While most traditional power optimization techniques for logic cells focus
on minimizing switching power, circuit design for leakage power reduction is also gaining importance [17].
See Example 5.
Logic and Architecture Level
Optimizations
Logic-level power optimization
has been extensively researched in the
last few years [11, 18]. Given the complexity of modern digital devices,
hand-crafted logic-level optimization
is extremely expensive in terms of design time and effort. Hence, it is cost-effective only for structured logic in
large-volume components, like microprocessors (e.g., functional units in the
data-path). Fortunately, several optimizations for low power have been automated and are now available in commercial logic synthesis tools [19], enabling logic-level power optimization
even for unstructured logic and for
low-volume VLSI circuits.
During logic optimization, technology parameters such as supply voltage are fixed, and the degrees of freedom are in selecting the functionality
and sizing the gates implementing a
given logic specification. As for technology and circuit-level techniques,
power is never the only cost metric of
interest. In most cases, performance is
tightly constrained as well. A common setting is constrained power optimization, where a logic network can
be transformed to minimize power
only if critical path length is not increased. Under this hypothesis, an effective technique is based on path
equalization.
Path equalization ensures that signal propagation from inputs to outputs of a logic network follows paths
of similar length. When paths are
equalized, most gates have aligned
transitions at their inputs, thereby
minimizing spurious switching activity (which is created by misaligned
input transitions). This technique is
very helpful in arithmetic circuits,
such as adders or multipliers. See
Example 6.
Glue logic and controllers have
much more irregular structure than
arithmetic units, and their gate-level
implementations are characterized by
a wide distribution of path delays.
These circuits can be optimized for
power by resizing. Resizing focuses
on fast combinational paths. Gates on
fast paths are down-sized, thereby decreasing their input capacitances,
while at the same time slowing down
signal propagation. By slowing down
fast paths, propagation delays are
equalized, and power is reduced by
joint spurious switching and capacitance reduction. Resizing does not
always imply down-sizing. Power
can be reduced also by enlarging (or
buffering) heavily loaded gates, to
increase their output slew rates. Fast
transitions minimize short-circuit
power of the gates in the fanout of the
gate which has been sized up, but its
input capacitance is increased. In
most cases, resizing is a complex optimization problem involving a tradeoff between output switching power
and internal short-circuit power on
several gates at the same time.
Other logic-level power minimization techniques are re-factoring, remapping, phase assignment and pin
swapping. All these techniques can be
classified as local transformations.
They are applied on gate netlists, and
focus on nets with large switched capacitance. Most of these techniques
replace a gate, or a small group of
gates, around the target net, in an effort to reduce capacitance and switching activity. Similarly to resizing, local transformations must carefully
balance short circuit and output
power consumption. See Example 7.
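As one concrete instance of such a local move, the sketch below applies naive pin swapping: the highest-activity nets are bound to the gate input pins with the smallest capacitance, shrinking the switched-capacitance sum. Pin capacitances and net activities are invented for the example.

# Naive pin swapping: bind high-activity nets to low-capacitance pins so
# that the switched capacitance (sum of activity * pin capacitance) drops.
# Pin capacitances (fF) and net switching activities are hypothetical.

pin_caps = [4.0, 3.0, 2.5, 1.5]   # 4-input gate, one capacitance per pin
net_activity = {"a": 0.05, "b": 0.40, "c": 0.10, "d": 0.02}

def switched_cap(assignment):
    """assignment: list of net names, one per pin, in pin order."""
    return sum(net_activity[n] * c for n, c in zip(assignment, pin_caps))

original = ["a", "b", "c", "d"]   # arbitrary initial binding

# Pair nets sorted by activity (descending) with pins sorted by capacitance
# (ascending); iterative pairwise swapping reaches the same result.
nets_by_activity = sorted(net_activity, key=net_activity.get, reverse=True)
pins_by_cap = sorted(range(len(pin_caps)), key=lambda i: pin_caps[i])
swapped = [None] * len(pin_caps)
for net, pin in zip(nets_by_activity, pins_by_cap):
    swapped[pin] = net

print(f"original binding {original}: switched cap = {switched_cap(original):.2f}")
print(f"swapped binding  {swapped}: switched cap = {switched_cap(swapped):.2f}")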
Logic-level power minimization
is relatively well studied and understood. Unfortunately, due to the local
Figure 5. Local transformations: (a) re-mapping, (b) phase assignment,
(c) pin swapping.
nature of most logic-level optimizations, a large number of transformations has to be applied to achieve sizable power savings. This is a time consuming and uncertain process, where
uncertainty is caused by the limited
accuracy of power estimation. In many
cases, the savings produced by a local
move are below the “noise floor” of the
power estimation engine. As a consequence, logic-level optimization does
Example 7. Figure 5 shows three examples of local transformations.
In Fig. 5 (a) a re-mapping transformation is shown, where a high-activity
node (marked with x) is removed thanks to a new mapping onto an and-or gate. In Fig. 5 (b), phase assignment is exploited to eliminate one of
the two high-activity nets marked with x. Finally, pin swapping is applied
in Fig. 5 (c) to connect a high-activity net with the input pin of the 4-input
nand with minimum input capacitance.
Figure 6. Example of gated clock architecture.
not result in massive power reductions.
Savings are in the 10 to 20% range, on
average. More sizable savings can be
obtained by optimizing power at a higher
abstraction level, as detailed next.
Complex digital circuits usually
contain units (or parts thereof) that are
not performing useful computations at
every clock cycle. Think, for example,
of arithmetic units or register files
within a microprocessor or, more simply, of registers of an ordinary sequential circuit. The idea, known for a long
time in the community of IC designers, is to disable the logic which is not
in use during some particular clock
cycles, with the objective of limiting
power consumption. In fact, stopping
certain units from making useless transitions causes a decrease in the overall switched capacitance of the system,
thus reducing the switching component of the power dissipated. Optimization techniques based on the principle above belong to the broad class
of dynamic power management
(DPM) methods. As briefly explained
in the section Basic Principles, they
achieve power reductions by exploiting specific run-time behaviors of the
design to which they are applied.
The natural domain of applicability of DPM is system-level design;
therefore, it will be discussed in greater
detail in the next section. Nevertheless,
this paradigm has also been successfully adopted in the context of architectural optimization.
Clock gating [20] provides a way
to selectively stop the clock, and thus
force the original circuit to make no
transition, whenever the computation
to be carried out by a hardware unit
at the next clock cycle is useless. In
other words, the clock signal is disabled in accordance with the idle conditions of the unit.
As an example of use of the
clock-gating strategy, consider the
traditional block diagram of a sequential circuit, shown on the left of Fig.
6. It consists of a combinational logic
block and an array of state registers
which are fed by the next-state logic
and which provide some feed-back
information to the combinational
block itself through the present-state
input signals. The corresponding
gated-clock architecture is shown on
the right of the figure.
The circuit is assumed to have a
single clock, and the registers are assumed to be edge-triggered flip-flops.
The combinational block Fa is controlled by the primary inputs, the
present-state inputs, and the primary
outputs of the circuit, and it implements the activation function of the
clock gating mechanism. Its purpose
is to selectively stop the local clock
of the circuit when no state or output
transition takes place. The block
named L is a latch, transparent when
the global clock signal CLK is inactive. Its presence is essential for a correct operation of the system, since it
takes care of filtering glitches that
may occur at the output of block Fa.
It should be noted that the logic for
the activation function is on the critical path of the circuit; therefore, timing violations may occur if the synthesis of Fa is not carried out properly.
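A minimal behavioral sketch of the latch-based scheme of Fig. 6 is given below, assuming a sampled two-phase view of the clock; it only illustrates why the transparent-low latch L keeps glitches on Fa from reaching the gated clock GCLK, and is not a synthesis template.

```python
# Minimal behavioral sketch of latch-based clock gating (illustrative only;
# signal names CLK, Fa, GCLK follow Fig. 6, everything else is an assumption).

def gated_clock(clk_samples, fa_samples):
    """Return the gated clock GCLK given sampled CLK and activation-function Fa.

    The latch L is transparent while CLK is low, so the enable seen by the
    AND gate can only change during the low phase; glitches on Fa while CLK
    is high cannot propagate to GCLK.
    """
    gclk = []
    enable = 0                      # latched activation-function output
    for clk, fa in zip(clk_samples, fa_samples):
        if clk == 0:                # latch transparent: track Fa
            enable = fa
        gclk.append(clk & enable)   # AND gate produces the gated clock
    return gclk

# Example: the clock pulse is suppressed during the cycle in which Fa is 0.
clk = [0, 1, 0, 1, 0, 1, 0, 1]
fa  = [1, 1, 0, 0, 1, 1, 1, 1]
print(gated_clock(clk, fa))         # -> [0, 1, 0, 0, 0, 1, 0, 1]
```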
The clock management logic is
synthesized from the Boolean function representing the idle conditions
of the circuit. It may well be the case
that considering all such conditions
results in additional circuitry that is
too large and power consuming. It
may then be necessary to synthesize
a simplified function, which dissipates the minimum possible power,
and stops the clock with maximum
efficiency. Because of its effectiveness, clock-gating has been applied
extensively in real designs, as shown
by the examples below, and it has
lately found its way into industry-strength
CAD tools (e.g., Power Compiler by
Synopsys). See Examples 8 and 9.
Example 8. The TMS320C5x DSP processor exploits clock gating to save power during active operation. In particular, a two-fold power management scheme, local and global, is implemented. The clock signal feeding the latches placed on the inputs of functional units is enabled only when useful data are available at the units' inputs, and thus meaningful processing can take place. The gating signals are generated automatically by local control logic using information coming from the instruction decoder. Global clock gating is also available in the processor, and is controlled by the user through dedicated power-down instructions, IDLE1, IDLE2, and IDLE3, which target power management of increasing strength and effectiveness. Instruction IDLE1 only stops the CPU clock, while it leaves peripherals and system clock active. Instruction IDLE2 also deactivates all the on-chip peripherals. Finally, instruction IDLE3 powers down the whole processor.

Power savings obtained by gating the clock distribution network of some hardware resources come at the
price of a global decrease in performance. In fact, resuming the operation
of an inactive resource introduces a latency penalty that negatively impacts
system speed. In other words, with
clock gating (or with any similar DPM
technique), performance and throughput of an architecture are traded for
power. See Example 10.
Clock gating is not the only
scheme for implementing logic shutdown; solutions ranging from register
disabling and usage of dual-state flip-flops to insertion of guarding logic and
gate freezing may be equally effective,
although no industry-strength assessment of these design options has been
done so far.
Example 10. The latency for the CPU of the TMS320C5x DSP processor to return to active operation from the IDLE3 mode is around 50 µs, due to the need for the on-chip PLL circuit to lock with the external clock generator.
Example 9. The MPEG4 Codec exploits software-controlled clock gating in some of its internal units, including the DMA controller and the RISC
core. Clock gating for a unit is enabled by setting the corresponding bit in
a “sleep” register of RISC through a dedicated instruction. The clock signal for the chosen hardware resource is OR-ed with the corresponding
output of the sleep register to stop clock activity.
Example 11. Several schemes [21–23] have been proposed to store compressed instructions in main
memory and decompress them on-the-fly before execution (or when they are stored in the instruction
cache). All these techniques trade off the efficiency of the compression algorithm with the speed and
power consumption of the hardware de-compressor. Probably the best known instruction compression
approach is the “Thumb” instruction set of the ARM microprocessor family [24]. ARM cores can be
programmed using a reduced set of 16-bit instructions (as an alternative to the standard 32-bit RISC instructions) that reduce the required instruction memory occupation and bandwidth by a factor of 2.
Software and System Level
Optimizations
Electronic systems and subsystems
consist of hardware platforms with
several software layers. Many system
features depend on the hardware/software interaction, e.g., programmability and flexibility, performance and
energy consumption.
Software does not consume energy
per se, but it is the execution and storage of software that requires energy
consumption by the underlying hardware. Software execution corresponds
to performing operations on hardware,
as well as accessing and storing data.
Thus, software execution involves
power dissipation for computation,
storage, and communication. Moreover, storage of computer programs in
semiconductor memories requires energy (refresh of DRAMs, static power
for SRAMs).
The energy budget for storing programs is typically small (with the choice
of appropriate components) and predictable at design time. Thus, we will concentrate on energy consumption of software during its execution. Nevertheless,
it is important to remember that reducing the size of programs, which is a usual objective in compilation, correlates with reducing their energy storage costs. Additional reduction of code
size can be achieved by means of compression techniques. See Example 11.
The energy cost of executing a
program depends on its machine code
and on the hardware architecture parameters. The machine code is derived from the source code through compilation. Typically, the energy cost of
the machine code is affected by the
back-end of software compilation,
that controls the type, number and
order of operations, and by the means
of storing data, e.g., locality (registers
vs. memory arrays), addressing, order. Nevertheless, some architectureindependent optimizations can be
useful in general to reduce energy
consumption, e.g., selective loop unrolling and software pipelining.
Software instructions can be characterized by the number of cycles
needed to execute them and by the
energy required per cycle. The energy
consumed by an instruction depends
weakly on the state of the processor
(i.e., by the previously executed instruction). On the other hand, the energy varies significantly when the instruction requires storage in registers
or in memory (caches).
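The following toy Python sketch illustrates this kind of instruction-level characterization; the opcode table and all cycle and energy numbers are invented placeholders, not measured values.

```python
# Toy instruction-level energy estimate: energy = sum over executed instructions
# of cycles * energy-per-cycle.  All numbers below are made-up placeholders.
COST_TABLE = {                     # opcode: (cycles, nJ per cycle) -- assumed values
    "add":   (1, 0.8),
    "mul":   (3, 1.0),
    "load":  (2, 1.6),             # memory accesses cost more per cycle
    "store": (2, 1.5),
}

def program_energy(trace):
    """Return (total_cycles, total_nJ) for an executed instruction trace."""
    cycles = energy = 0
    for opcode in trace:
        c, e = COST_TABLE[opcode]
        cycles += c
        energy += c * e
    return cycles, energy

print(program_energy(["load", "add", "mul", "store"]))   # -> (8, 10.0)
```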
The traditional goal of a compiler
is to speed up the execution of the
generated code, by reducing the code
size (which correlates with the latency of execution time) and minimizing spills to memory. Interestingly enough, executing machine
code of minimum size would consume the minimum energy, if we neglect the interaction with memory
and we assume a uniform energy cost
of each instruction. Energy-efficient
compilation strives at achieving machine code that requires less energy
as compared to a performance-driven
traditional compiler, by leveraging
the non-uniformity in instruction energy cost, and the different energy
costs for storage in registers and in
main memory due to addressing and
address decoding. Nevertheless, results are sometimes contradictory.
Whereas for some architectures energy-efficient compilation gives a
competitive advantage as compared
to traditional compilation, for some
others the most compact code is also
the most economical in terms of energy, thus obviating the need of
specific low-power compilers.
It is interesting to note that the
style of the software source program
(for any given function) affects the
energy cost. Energy-efficient software can be achieved by enforcing
specific writing styles, or by allowing
source-code transformations to reduce the power consumption of a program. Such transformations can be automated, and can be seen as the front-end of compilation. See Example 12.

Example 12. The IBM XL compilers transform source code (from different languages) into an internal form called W-code. The Toronto Portable Optimizer (TPO) [25] performs W-code to W-code transformations. Some transformations are directed at reducing power consumption. To this goal, switching activity is minimized by exploiting relations among variables that take similar values. Power-aware instruction scheduling can increase the opportunity for clock gating, by grouping instructions with similar resource usage.

Example 13. Simultaneous multi-threading (SMT) [25] has been proposed as an approach for the exploitation of instruction-level parallelism. In SMT, the processor core shares its execution-time resources among several simultaneously executing threads (programs). Thus, at each cycle, instructions can be fetched from one or more independent threads, and passed to the issue and execution boxes. Since the resources can be filled by instructions from parallel threads, their usage increases and energy waste decreases. Compaq has announced that its future 21x64 designs will embrace the SMT paradigm.
Power-aware operating systems
(OSs) trade generality for energy
efficiency. In the case of embedded
electronic systems, OSs are streamlined to support just the required applications. On the other hand, such an
approach may not be applicable to OSs
for personal computers, where the user
wants to retain the ability of executing
a wide variety of applications.
Energy efficiency in an operating
system can be achieved by designing
an energy aware task scheduler. Usually, a scheduler determines the set of
start times for each task, with the goal
of optimizing a cost function related to
the completion time of all tasks, while satisfying real-time constraints, if applicable. Since tasks are associated
with resources having specific energy
models, the scheduler can exploit this
information to reduce run-time power
consumption. See Example 13.
Operating systems achieve major
energy savings by implementing dynamic power management (DPM) of
the system resources. DPM dynamically reconfigures an electronic system
to provide the requested services and
performance levels with a minimum
number of active components or a
minimum load on such components [26].

Figure 7. Power state machine for the StrongARM SA-1100 processor: run (P = 400 mW), idle (P = 50 mW, wait for interrupt), and sleep (P = 0.16 mW, wait for wakeup event); transition times range from about 10 µs to 160 ms.

Dynamic power management
encompasses a set of techniques that
achieve energy-efficient computation
by selectively shutting down or slowing down system components when
they are idle (or partially unexploited).
DPM can be implemented in different
forms including, but not limited to,
clock gating, clock throttling, supplyvoltage shut-down, and dynamicallyvarying power supplies, as described
earlier in this paper.
We demonstrate here power management using two examples from existing successful processors. The
former example describes the shut down
modes of the StrongARM processor,
while the latter describes the slow
down operation within the Transmeta
Crusoe chip. See Example 14.
An operating system that controls
a power manageable component, such
as the SA-1100, needs to issue commands to force the power state transitions, in response to the varying
workload. The control algorithm that
issues the commands is termed policy.
An important problem is the computation of the policy for given workload statistics and component parameters. Several avenues to policy
computation are described in another
tutorial paper [28]. We consider next
another example, where power management is achieved by slowing
down a component. See Example 15.
We mentioned before the important problem of computing policies,
with guaranteed optimality properties, to control one or more components with possibly different management schemes. A related interesting
problem is the design of components
that can be effectively power managed. See Example 16.
Several system-level design
trade-offs can be explored to reduce
energy consumption. Some of these
design choices belong to the domain
of hardware/software co-design, and
leverage the migration of hardware
functions to software or vice versa.
For example, the Advanced Configuration and Power Interface (ACPI)
standard, initiated by Intel, Microsoft
and Toshiba, provides a portable hw/
sw interface that makes it easy to
implement DPM policies for personal
computers in software. As another
example, the decision of implementing specific functions (like MPEG
decoding) on specifically-dedicated
hardware, rather than on a programmable processor, can significantly
affect energy consumption. As a final
interesting example, we would like to
point out that code morphing can be
a very powerful tool in reducing energy dissipation. See Example 17.
Example 14. The StrongARM SA-1100 processor [27] is an example of a
power manageable component. It has three modes of operation: run, idle,
and sleep. The run mode is the normal operating mode of the SA-1100: every on-chip resource is functional. The chip enters the run mode after successful power-up and reset. The idle mode allows a software application to
stop the CPU when not in use, while continuing to monitor on-chip or off-chip
interrupt requests. In the idle mode, the CPU can be brought back to the run
mode quickly when an interrupt occurs. The sleep mode offers the greatest
power savings and consequently the lowest level of available functionality. In
the transition from run or idle to sleep, the SA-1100 performs an orderly shutdown
of on-chip activity. In a transition from sleep to any other state, the chip steps
through a rather complex wake-up sequence before it can resume normal activity.
The operation of the StrongARM SA-1100 can be captured by a power state
machine model, as shown in Fig. 7. States are marked with power dissipation
and performance values, edges are marked with transition times. The power
consumed during transitions is approximately equal to that in the run mode.
Notice that both idle and sleep have null performance, but the time for exiting sleep is much longer than that for exiting idle (160 ms versus 10 µs).
On the other hand, the power consumed by the chip in sleep mode (0.16 mW)
is much smaller than that in idle (50 mW).
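A rough Python sketch of how such a power state machine can be used to estimate average power is shown below; the state powers are taken from Fig. 7, while the workload residencies, the wake-up charging model and the transition rate are illustrative assumptions.

```python
# Rough average-power estimate for a power-manageable component like the
# SA-1100 of Fig. 7 (workload fractions and the simple model are assumptions).
STATE_POWER_MW = {"run": 400.0, "idle": 50.0, "sleep": 0.16}   # from Fig. 7

def average_power_mw(residency, transitions_per_s=0.0,
                     t_wakeup_s=160e-3, p_transition_mw=400.0):
    """residency: fraction of time spent in each state (should sum to 1).

    Transitions are charged at roughly run-mode power, as noted in Example 14.
    """
    p = sum(STATE_POWER_MW[s] * f for s, f in residency.items())
    p += transitions_per_s * t_wakeup_s * p_transition_mw
    return p

# A workload that is mostly asleep, waking from sleep once per minute:
print(average_power_mw({"run": 0.05, "idle": 0.15, "sleep": 0.80},
                       transitions_per_s=1 / 60))   # about 29 mW on average
```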
Example 15. The Transmeta Crusoe TM5400 chip uses the LongRun power management scheme, which allows the OS to slow down the processor when the workload can be serviced at a slower pace. Since a slow-down in clock frequency permits a reduction in supply voltage, the energy to compute a
given task decreases quadratically. From a practical standpoint, the OS observes
the task queue and determines the appropriate frequency and voltage levels,
which are transmitted to the phase-locked loop (PLL) and DC-DC converter.
As a result, Transmeta claims that the Crusoe chip can play a soft DVD at the
same power level used by a conventional x86 architecture in sleep state.
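The quadratic saving mentioned in Example 15 can be sketched in a few lines of Python; the supply-voltage values and the effective capacitance are arbitrary assumptions used only to show the Vdd^2 dependence.

```python
# Dynamic energy of a task: E = C_eff * Vdd^2 per cycle, times the cycle count.
# The voltage/frequency pairs and C_eff are arbitrary illustrative values.
def task_energy_nj(c_eff_nf, vdd, n_cycles):
    return c_eff_nf * vdd ** 2 * n_cycles

N_CYCLES = 1_000_000          # work required by the task
C_EFF = 1.0                   # effective switched capacitance per cycle (nF, assumed)

e_fast = task_energy_nj(C_EFF, vdd=1.6, n_cycles=N_CYCLES)   # full speed, then idle
e_slow = task_energy_nj(C_EFF, vdd=1.1, n_cycles=N_CYCLES)   # slower clock, lower Vdd
print(e_fast, e_slow, round(e_slow / e_fast, 2))              # ratio ~ (1.1/1.6)^2 ~ 0.47
```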
Example 16. The 160-MHz implementation of the StrongARM described
in [6] supports 16 possible operating frequencies generated by a PLL. Moreover, the PLL was designed within a power budget of 2 mW, so that it could be
kept running in the idle state. Thus, the need for a low-power PLL is dictated
by the power budget in the idle state. Note that the ability of keeping the PLL
running in the idle state is key to achieving a fast transition to the run state.
Example 17. The Transmeta Crusoe chip executes x86-compatible binaries on a proprietary architecture, which is designed for low-power operation.
The operating system performs run-time binary translation to this effect. The
code morphing is key in reducing the energy cost associated with any given
program.
Conclusions

Electronic design aims at striking a balance between performance and power efficiency. Designing low-power applications is a multi-faceted problem, because of the plurality of embodiments that a system specification may have and the variety of degrees of freedom that designers have
to cope with power reduction.
In this brief tutorial, we showed
different design options and the corresponding advantages and disadvantages. We tried to relate general-purpose low-power design solutions to a
few successful chips that use them to
various extents. Even though we described only a few samples of design
techniques and implementations, we
think that our samples are representative of the state of the art of current
technologies and can suggest future
developments and improvements.
References
[1] J. Rabaey and M. Pedram, Low Power
Design Methodologies. Kluwer, 1996.
[2] J. Mermet and W. Nebel, Low Power
Design in Deep Submicron Electronics.
Kluwer, 1997.
[3] A. Chandrakasan and R. Brodersen, LowPower CMOS Design. IEEE Press, 1998.
[4] T. Burd and R. Brodersen, “Processor
Design for Portable Systems”, Journal
of VLSI Signal Processing Systems, vol.
13, no. 2–3, pp. 203–221, August 1996.
[5] D. Ditzel, “Transmeta’s Crusoe: Cool
Chips for Mobile Computing”, Hot
Chips Symposium, August 2000.
[6] J. Montanaro, et al., “A 160-MHz, 32b, 0.5-W CMOS RISC Microprocessor”, IEEE Journal of Solid-State Circuits, vol. 31, no. 11, pp. 1703–1714,
November 1996.
[7] V. Lee, et al., “A 1-V Programmable DSP
for Wireless Communications”, IEEE
Journal of Solid-State Circuits, vol. 32,
no. 11, pp. 1766–1776, November 1997.
[8] M. Takahashi, et al., “A 60-mW MPEG4
Video Codec Using Clustered Voltage
Scaling with Variable Supply-Voltage
Scheme”, IEEE Journal of Solid-State
Circuits, vol. 33, no. 11, pp. 1772–1780,
November 1998.
[9] A. P. Chandrakasan, S. Sheng, and R. W.
Brodersen, “Low-Power CMOS Digital
Design”, IEEE Journal of Solid-State
Circuits, vol. 27, no. 4, pp. 473–484,
April 1992.
[10] F. Najm, “A Survey of Power Estimation Techniques in VLSI Circuits”, IEEE
Transactions on VLSI Systems, vol. 2,
no. 4, pp. 446–455, December 1994.
[11] M. Pedram, “Power Estimation and Optimization at the Logic Level”, International Journal of High-Speed Electronics and Systems, vol. 5, no. 2, pp. 179–
202, 1994.
[12] P. Landman, “High-Level Power Estimation”, ISLPED-96: ACM/IEEE International Symposium on Low Power
Electronics and Design, pp. 29–35,
Monterey, California, August 1996.
[13] E. Macii, M. Pedram, F. Somenzi,
“High-Level Power Modeling, Estimation, and Optimization”, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 17,
no. 11, pp. 1061–1079, November 1998.
[14] S. Borkar, “Design Challenges of Technology Scaling”, IEEE Micro, vol. 19,
no. 4, pp. 23–29, July-August 1999.
[15] S. Thompson, P. Packan, and M. Bohr,
“MOS Scaling: Transistor Challenges
for the 21st Century”, Intel Technology
Journal, Q3, 1998.
[16] Z. Chen, J. Shott, and J. Plummer,
“CMOS Technology Scaling for Low
Voltage Low Power Applications”,
ISLPE-98: IEEE International Symposium on Low Power Electronics, pp. 56–
57, San Diego, CA, October 1994.
[17] Y. Ye, S. Borkar, and V. De, “A New
Technique for Standby Leakage Reduction in High-Performance Circuits”,
1998 Symposium on VLSI Circuits, pp.
40–41, Honolulu, Hawaii, June 1998.
[18] M. Pedram, “Power Minimization in IC
Design: Principles and Applications”,
ACM Transactions on Design Automation of Electronic Systems, vol. 1, no. 1,
pp. 3–56, January 1996.
[19] B. Chen and I. Nedelchev, “Power Compiler: A Gate Level Power Optimization
and Synthesis System”, ICCD’97: IEEE
International Conference on Computer
Design, pp. 74–79, Austin, Texas, October 1997.
[20] L. Benini, P. Siegel, and G. De Micheli,
“Automatic Synthesis of Gated Clocks for
Power Reduction in Sequential Circuits”,
IEEE Design and Test of Computers, vol.
11, no. 4, pp. 32–40, December 1994.
[21] Y. Yoshida, B.-Y. Song, H. Okuhata, T.
Onoye, and I. Shirakawa, “An Object
Code Compression Approach to Embedded Processors”, ISLPED-98: ACM/
IEEE International Symposium on Low
Power Electronics and Design, pp. 265–
268, Monterey, California, August 1997.
[22] L. Benini, A. Macii, E. Macii, and M.
Poncino, “Selective Instruction Compression for Memory Energy Reduction in
Embedded Systems”, ISLPED-99: ACM/
IEEE 1999 International Symposium on Low
Power Electronics and Design, pp. 206–211,
San Diego, California, August 1999.
[23] H. Lekatsas and W. Wolf, “Code Compression for Low Power Embedded Systems”, DAC-37: ACM/IEEE Design Automation Conference, pp. 294–299, Los
Angeles, California, June 2000.
[24] S. Segars, K. Clarke, and L. Goudge,
“Embedded Control Problems, Thumb
and the ARM7TDMI”, IEEE Micro, vol.
15, no. 5, pp. 22–30, October 1995.
[25] D. Brooks, et al., “Power-Aware Microarchitecture: Design and Modeling Challenges for Next-Generation Microprocessors”, IEEE Micro, vol. 20, No. 6, pp. 26–
44, November 2000.
[26] L. Benini and G. De Micheli, Dynamic
Power Management: Design Techniques
and CAD Tools. Kluwer, 1997.
[27] Intel, SA-1100 Microprocessor Technical
Reference Manual. 1998.
[28] L. Benini, A. Bogliolo, and G. De Micheli,
“A Survey of Design Techniques for System-Level Dynamic Power Management”,
IEEE Transactions on VLSI Systems, vol.
8, no. 3, pp. 299–316, June 2000.
Luca Benini received the B.S. degree summa cum laude in electrical engineering from the University of Bologna, Italy, in 1991, and the
M.S. and Ph.D. degrees in electrical engineering from Stanford University in 1994 and 1997, respectively. Since 1998, he has been assistant
professor in the Department of Electronics and Computer Science in the University of Bologna, Italy. He also holds visiting professor positions
at Stanford University and the Hewlett-Packard Laboratories, Palo Alto, California.
Dr. Benini’s research interests are in all aspects of computer-aided design of digital circuits, with special emphasis on low-power applications, and in the design of portable systems. On these topics, he has authored more than 100 papers in international journals and conferences, a
book and several book chapters. He is a member of the technical program committee in several international conferences, including the Design
and Test in Europe Conference and International Symposium on Low Power Design.
Giovanni De Micheli is professor of electrical engineering, and by courtesy, of computer science at Stanford University. His research
interests include several aspects of design technologies for integrated circuits and systems, with particular emphasis on synthesis, system-level
design, hardware/software co-design and low-power design. He is author of Synthesis and Optimization of Digital Circuits, McGraw-Hill, 1994,
co-author and/or co-editor of four other books and of over 200 technical articles.
Dr. De Micheli is a fellow of ACM and IEEE. He received the Golden Jubilee Medal for outstanding contributions to the IEEE CAS Society in 2000. He received the 1987 IEEE Transactions on CAD/ICAS Best Paper Award and two best paper awards at the Design Automation
Conference, in 1983 and in 1993.
He is editor-in-chief of the IEEE Transactions on CAD/ICAS. He was vice president (for publications) of the IEEE CAS Society in 1999–
2000. Dr. De Micheli was program chair and general chair of the Design Automation Conference (DAC) in 1996–1997 and 2000 respectively.
He was program and general chair of the International Conference on Computer Design (ICCD) in 1988 and 1989 respectively.
Enrico Macii holds the Dr. Eng. degree in electrical engineering from Politecnico di Torino, Italy, the Dr. Sc. degree in computer science
from Università di Torino, and the Ph. D. degree in computer engineering from Politecnico di Torino. From 1991 to 1994 he was adjunct faculty
at the University of Colorado at Boulder. Currently, he is associate professor of computer engineering at Politecnico di Torino.
His research interests include several aspects of the computer-aided design of integrated circuits and systems, with particular emphasis on
methods and tools for low-power digital design. He has authored over 200 journal and conference articles in the areas above, including a paper
that received the best paper award at the 1996 IEEE EuroDAC conference.
Enrico Macii is associate editor of the IEEE Transactions on CAD/ICAS (since 1997) and associate editor of the ACM Transactions on
Design Automation of Electronic Systems (since 2000). He was technical program co-chair of the 1999 IEEE Alessandro Volta Memorial Workshop on Low Power Design, and technical program co-chair of the 2000 ACM/IEEE International Symposium on Low Power Electronics and
Design (ISLPED). Currently, he is general chair of ISLPED’01.
High-Speed Data Converters for Communication Systems

by Franco Maloberti
Introduction
The tremendous progress in microelectronics significantly benefits the
communication area. We are enjoying an
unprecedented growth in mobile communication: new services are being constantly introduced
thanks to the availability of new integrated circuits
and systems. This progress results from two key components: DSP and the data converter. Using sub-submicron technologies it is possible to integrate millions of transistors on a single chip. The speed of digital circuits is increasing up to many hundreds of MHz or even GHz. Hence,
new DSP architectures allow complex algorithms to be implemented at very high computation speeds. However, analog-digital
interfaces must be able to match the resolution and the speed of the
DSP. Therefore, the second key component completing the basic design
set is the high speed and high resolution data converter [1].
The present trend pushing the border of digital conversion toward
the transmit and receive terminal leads to ever increasing demands
on specifications, namely speed and resolution. We will see that
often more than 14 bits and several hundreds of MHz are required.
In addition, many applications impose continuous reductions
in power consumption. As a result, market challenges favor
research on high speed data converters. In turn, the results
achieved lead to new architectural solutions which create new needs. This paper analyses this process and discusses recent circuit solutions suitable for meeting key
system specifications [2].
Communication Systems
It is possible to use three main transmission media for analog
communication and for data transfer: wireless, twisted-pair cable
and coaxial cable. Optic fibers are also employed, but their
high cost makes their use suitable for backbone networks
or for central office and optical network unit (ONU) connections.

Figure 1. Ideal software radio.

For data communication, it is
possible to utilize either narrow-band
connections (up to 56 kb/s) or broadband
connections (like the T-1 service operating at 1.544 Mb/s). Around the options
of media and bandwidth, communication engineers can build many possible
architectures. We shall see that in many
of them, data converters have a significant role.
Wireless Telephony
Figure 2. (a) Rx section of a conventional architecture and (b) its possible Software Radio modification.
For narrow-band wireless we have
different standards around the world. In
Europe we have the DCS and GSM operating in two different frequency bands:
900 and 1800 MHz (GSM900 and
GSM1800). In the USA we have various mobile radio standards in the 1800
MHz frequency band, among them the
IS-95 and the IS-136. Moreover, studies
on globalization of personal communications begun in 1986 are going to define new standards for the so called 3G
(third generation) mobile telecommunication systems. The standards will follow an evolutionary path so as to protect
the capital investment done for the second generation standards. Thus, the
present scenario is such that, despite the
considerable effort spent on harmonization, it will be hard to establish a unique
world-wide standard for future mobile
radio systems. Consequently, managing
multiple standards will be ever more
necessary, creating difficulties for both
manufacturers and operators.
Significant help can come from the
so-called “software radio” approach [3].
Figure 1 shows the “ideal” software radio solution: a high speed data converter digitizes the RF signal directly
at the antenna with all radio functions
being performed in the digital domain
possibly using specialized reconfigurable hardware. At present (and even in the foreseeable future) it is impossible to
imagine a data converter operating at
1 or 2 GHz and delivering 16-18 bits.
Therefore, because of obvious difficulties in the implementation of the
“ideal” solution the data converter is
moved after some analog preprocessing. This is done typically after the low
noise amplifier and a first mixing (Fig.
2b, showing the Rx section). Unlike a
conventional Rx architecture (Fig. 2a)
the data converter in the software radio (Fig. 2b) runs at the IF frequency
and a digital processor performs the IF
filtering, the IF mixing, the base-band
filtering and the demodulation.
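As a hedged illustration of what performing the IF mixing and filtering in the digital domain means, the Python sketch below applies a digital local oscillator, a crude low-pass filter and decimation to a sampled IF tone; the sample rate, IF and filter choice are arbitrary assumptions, not values prescribed by any standard.

```python
import numpy as np

# Minimal sketch of the digital IF processing in Fig. 2(b): the ADC output is
# mixed to base-band, low-pass filtered and decimated.  fs, f_if and the
# moving-average filter are arbitrary illustrative choices.
fs, f_if = 80e6, 20e6
n = np.arange(4096)
adc_out = np.cos(2 * np.pi * f_if / fs * n)            # stand-in for the sampled IF signal

lo = np.exp(-2j * np.pi * f_if / fs * n)               # digital local oscillator
baseband = adc_out * lo                                 # IF mixing in the digital domain
lpf = np.convolve(baseband, np.ones(32) / 32, "same")   # crude base-band filter
channel = lpf[::16]                                     # decimate to the channel rate
print(channel[:4])                                      # desired channel now sits near dc
```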
More generally, software radio
envisages a base station platform with
reconfigurable hardware and software.
This architecture can be adapted to different standards or services simply by
using the appropriate software. Software radio offers many advantages: it
is flexible, allows new services to be
added quickly thus shortening the
time-to-market, and offers better management of logistics, maintenance and
personnel training. Moreover, software
radio permits migration from one standard to the successive generation along
an evolutionary path.
Regarding data converters, observe
that, because the channel tuning is performed in the digital domain, the linearity of the converter must be such as
to avoid any spurious interference.
Fortunately the bandwidth licensed to
operators is limited, typically in the
range 20-40 MHz. Therefore, appropriate choice of the LO frequency and
the sampling frequency of data converters allows some system flexibility
that can place images and spurious signals out of band. Thus, data converter
specifications depend on the architecture and standard adopted. For modern
architectures they span from 12 to 16
bit with spurious performances around
100 dB. The sampling frequency
ranges from 60 to 100 MHz.
Figure 3 shows the Tx section of a
typical software radio architecture. It
uses, as an example, an 80 MHz clock,
thus permitting a 40 MHz bandwidth.
Suitable digital-up conversions (DUC)
allocate the transmission channels in
the selected frequency slot. All the
digital channels are converted into analog by a single DAC. The result, after
suitable filtering and modulation,
drives the power amplifier that, in turn,
drives the antenna. Figure 3 indicates
some possible design issues: a “sinc”
attenuation shapes the spectrum after
the DAC; the continuous-time low
pass filter in the IF section should carefully remove the frequency components beyond 40 MHz, and the RF band-pass filter should take care of the image rejection. Thus, if the channels occupy almost the 40 MHz available, the filter design becomes quite problematic. The insert in the figure shows a possible solution: an interpolator increases the clock frequency by a factor of 4 so that a band-pass digital filter can select the first image of the transmission spectrum. Therefore, after the DAC the signal suffers only a limited sinc attenuation, it is 40 MHz apart from dc, and its image falls between 240 and 280 MHz. These features permit a significant relaxation of the filters' specs. However, the clock frequency of the DAC must be 320 MHz. Thus, more demanding performances on the data converter help us in optimizing the system architectures. Modern CMOS technologies permit achievement of 12-bit DACs with spurious performances around 95 dB and sampling frequency as large as 400 MHz [4].

Figure 3. Tx section of a possible software radio architecture.

Figure 4. (a) Conventional antenna transmission. (b) Smart antenna transmission.
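The image placement quoted above for the Fig. 3 example can be checked with a few lines of Python; the numbers (80 MHz clock, 4x interpolation, 0-40 MHz channel block) come from the text, everything else is simple arithmetic.

```python
# Quick arithmetic for the image placement discussed around Fig. 3 (numbers from the text).
f_clk_low = 80e6          # original DSP/DAC clock
f_clk_dac = 320e6         # clock after 4x interpolation
band_top = 40e6           # channels occupy 0-40 MHz at the low rate

# The first image of the 0-40 MHz block sits just below the original clock frequency
# and is the band selected by the band-pass filter:
first_image = (f_clk_low - band_top, f_clk_low)                          # 40-80 MHz
# Its own replica at the 320 MHz DAC rate folds around f_clk_dac:
dac_image = (f_clk_dac - first_image[1], f_clk_dac - first_image[0])     # 240-280 MHz
print(first_image, dac_image)
```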
Smart Antennas
In cellular communication, important features are coverage, capacity
and service quality. In rural areas ensuring a proper coverage is expensive;
in urban areas the capacity is often
close to average demand. Moreover,
interference from neighboring cells worsens service quality. Smart antennas
provide an answer to these problems
[5]. A smart antenna is a directional
antenna used in the base station to
track mobile terminals (Fig. 4). The
basic structure uses a switched-beam
or an adaptive array architecture. Two
or more antenna elements are spatially
arranged and the signals are properly
processed to produce a directional radiation pattern.
Figure 5. Architecture of an Rx smart antenna: high-speed data converters permit beam-forming digital processing.
In its simplest form, a smart antenna uses a fixed phase beam network
that is switched to the elements of the
array so as to produce a beam close to
the desired direction. In the phased
array the phases of signals are adjusted
to change the array pattern, thus tracking a given mobile or cancelling a
given interference signal.
Here, different strategies for rural
or urban areas are necessary. Therefore, adaptive and reconfigurable architectures are the best solution. Figure 5 shows the block diagram of a
possible “digital” implementation. The
architecture uses a weight-adjustment
algorithm to obtain IF beam-forming.
Of course the given architecture is only
one possibility; we could design a solution fully in the analog domain. Nevertheless, the digital approach, thanks
to the use of the data converter in the
IF section, permits complex beamforming with a digital processor.
Therefore, the availability of data converters which answer system requirements leads us to an adaptive system
providing superior performance.
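A toy narrowband delay-and-sum beamformer gives a feel for the digital beam-forming enabled by the converters of Fig. 5; the array geometry, look direction and test signal below are assumptions made only for illustration.

```python
import numpy as np

# Toy narrowband delay-and-sum beamformer: each ADC/DDC output is multiplied by a
# complex weight and summed, as in Fig. 5.  Geometry and signal are assumptions.
n_elem, d_over_lambda = 4, 0.5          # 4-element array, half-wavelength spacing
theta = np.deg2rad(20)                  # desired look direction

k = np.arange(n_elem)
weights = np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))   # steering vector a(theta)

def beamform(element_samples):
    """element_samples: (n_elem, n_samples) complex base-band snapshots."""
    return weights.conj() @ element_samples       # weight and sum across the array

# A unit plane wave from the look direction adds coherently (gain = n_elem).
snapshots = weights[:, None] * np.ones((1, 8))
print(np.abs(beamform(snapshots))[0])             # ~4.0
```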
The data converter specifications
depend on the IF frequency used. Even
for these applications the resolution
depends on the width of the bandwidth
granted to a given provider. In the case
of 10 MHz we can satisfy the GSM
specifications (receive side) with an 81
dB dynamic range (13.5 bit) and a 97
dB SFDR. Moreover, the clock jitter
must be kept below 2 ps. The above
figures assume an IF frequency of 70
MHz. More relaxed figures result
when considering other standards like
the DCS1800.
Broadband Services
Broadband communication offers
many examples of architectures that
exploit data converters and DSP.
Through the already installed copper
twisted-pair phone lines it is possible
to provide broadband digital services.
We have various possibilities, the
ADSL (Asymmetric Digital Subscriber Line), HDSL (High Speed), the
RADSL (Rate Adaptive) and the
VDSL (Very High Speed). The first
class runs at a data rate of up to 8 Mb/s
in the downstream direction (from the
central office to the subscriber) and up
to 1 Mb/s in the upstream direction.
The distance from the central office to
the subscriber, however, must be a
maximum of 5 km. The VDSL service
is more advanced; it can provide a data rate as high as 52 Mb/s.
The architecture provides a fiber optical connection from
the central office to the neighborhood
of a group of customers (Optical Network Unit, ONU). The connection
from the ONU to the subscriber is
through a short link (not more than
1200 m) of twisted-pair copper wire [6].
The above features are made possible by complex modulation schemes
and the relatively high bandwidth of
twisted-pair wires (much higher than
the 3 kHz filtered voice channel). An
example of VDSL architecture is
shown in Fig. 6 [7]. Note that it includes an A/D and a D/A converter very close to the twisted-pair connection.

Figure 6. Architecture of a VDSL transceiver (Tx/Rx filters, packet format logic, FEC encoder/decoder, I/Q modulator/demodulator, DAC and ADC).

For the ADSL architecture the
sampling rate in the two links is 8–10
MHz and the required resolution is 12-14 bits. For VDSL architectures the
resolution required is a bit lower but
the sampling rate must be programmable from 1 MHz to 52 MHz.
High Speed Data Converters
All the data converters in the architectures discussed so far require high
speed. Table 1 summarizes the key
specifications required. Observe that
the conversion rate ranges from 10
MHz to 100 MHz. This feature com-
bined with the resolution and the
SFDR makes the converter feasible in
submicron CMOS technology for the
ADSL and the VDSL applications. By contrast, the higher resolution and the demanding SFDR posed by the GSM and the smart antenna implementations impose significant bandwidth and linearity requirements. Thus, GSM and smart antenna
typically demand submicron BiCMOS
(silicon or silicon-germanium) technologies. Presently, the fT of state-of-the-art submicron CMOS (0.12 µm)
exceeds 20 GHz while advanced
BiCMOS or SiGe permit bandwidths
on the order of 60 GHz. Therefore,
speed is an issue that can be satisfactorily addressed by exploiting the
available technology. In contrast, accuracy depends on the components’
precision and on practical limits related to circuit implementation. Therefore, the processes used, as well as
deep-submicron line widths, should
also provide precise resistors and capacitors. For this purpose, special layout techniques and a careful control of the process are beneficial. These actions are only effective, however, when not more than 10-12 bits are required. For higher resolution, the best way to get rid of inaccuracy is to use calibration.

Moreover, to avoid SNR degradation, the analog critical nodes must be conveniently protected from spurious signals and noise. This demand imposes additional steps on the process: it is necessary to utilize, for example, multiple diffused wells, trench separation, or to migrate toward SOI technologies.

Table 1. Key data converter specifications for the architectures discussed.

Feature            GSM            Smart Antenna    ADSL       VDSL
Resolution         14 bit         13.5 bit         12 bit     12 bit
Conversion Rate    60-100 MHz     40-60 MHz        10 MHz     52 MHz
Clock Jitter       4 ps           2 ps             -          -
SFDR               100 dB         95 dB            60 dB      65 dB
Technology         BiCMOS/SiGe    BiCMOS/SiGe      CMOS       CMOS
High-Speed A/D Architectures
The simplest way to achieve high
speed is to use one step or two step
flash architectures. The techniques
have been used extensively in the past;
however, various limitations do not
allow us to step outside the 10 bit limit
[8]. Another approach is the folding
and interpolation technique. Folding
consists in nonlinear input processing
that produces an output characteristic
like a triangular wave. The number of
foldings performed determines the
MSBs (3 bits is a normal figure). Subsequent data conversion determines
additional LSBs. Folding is often associated with interpolation: a suitable
network processes two folded outputs
and produces transition levels intermediate to the one given by the folding
transformation. We can thus extract
additional bits at a reduced cost. An
example of folding and interpolation
architecture is given in [9]. The circuit
achieves 10 bits and 40 MS/s using a
7 GHz 0.6 µm CMOS technology.
The two step flash and the folding
technique don’t permit us to achieve
very high accuracy. By contrast a pipeline architecture (when it incorporates
digital calibration) can reach 12–14
bits so as to suit speed requirements
and low power consumption demands.
For this reason, designers increasingly utilize pipeline-based converters
for communication needs. Figure 7
shows a typical pipeline scheme. A
number of cells compose the structure.
Each cell provides a given number of
bits at the output (say, N bits) and a
residual voltage. The next cell processes the residual voltage, performs
digital conversion and gives another
residual voltage. The output of the entire system comes from the bits generated by each stage. Because the dynamic range of the residual voltage is smaller than the input by a factor 2^N, many architectures foresee an amplification of the residual voltage by the same factor to keep the dynamic range constant along the pipeline. The residual output of each cell is

V_i,res = [ V_i,in − (b_n,0 + 2·b_n,1 + … + 2^(N−1)·b_n,N−1) / 2^N · V_Ref ] · 2^N        (1)

Figure 7. Pipeline architecture.

where b_n,i are the bits generated by the
n-th cell. Possible errors come from an
inaccurate amplifier and from offsets
in the DAC or in the ADC. Offsets are
normally corrected digitally. Each
stage of the pipeline provides the digital information with some redundancy
(for a 1 bit per stage architecture, 0.5
additional bit). Redundancy avoids
loss of information even in the presence of offsets and permits cancellation of inaccuracies [10], [11].
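An idealized Python model of the pipeline of Fig. 7 (no redundancy, no offsets, stage arithmetic following Eq. (1)) may help visualize how the per-stage bits and the amplified residues build the output code; the resolution and stage count are arbitrary choices.

```python
# Illustrative sketch of an ideal pipeline ADC; the per-stage residue follows
# Eq. (1).  Stage count, bits per stage and reference are assumptions.
def pipeline_adc(vin, vref=1.0, bits_per_stage=1, stages=10):
    """Convert vin in [0, vref) to a digital code, one sub-conversion per stage."""
    code = 0
    residue = vin
    for _ in range(stages):
        # coarse sub-ADC decision of this stage
        d = min(int(residue / vref * 2 ** bits_per_stage), 2 ** bits_per_stage - 1)
        code = (code << bits_per_stage) | d
        # residue = (input - DAC output), amplified by 2^N as in Eq. (1)
        residue = (residue - d * vref / 2 ** bits_per_stage) * 2 ** bits_per_stage
    return code

print(pipeline_adc(0.3))   # -> 307, i.e. ~0.3 * 1024 for ten ideal 1-bit stages
```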
Designers widely use the sigma
delta technique for base-band applications. Recent technology advancements expand the usable bandwidth
and extend the applicability of sigma
delta converters to medium and high
frequency applications (as required by
communication architectures). Sigma
delta modulators exploit oversampling, which means using a sampling
frequency much higher than twice the
signal band. Oversampling and a
proper shaping of the quantization error spectrum permit us to significantly
attenuate the power of the quantization
noise in the band of interest [12]. Figure 8 depicts this basic concept.

Figure 8. Oversampling and noise shaping.

Figure 9. Generic configuration of a sigma-delta modulator.

If we increase the sampling frequency of a
conventional converter by a factor 2
the power of the quantization noise
∆²/12 (∆ is the quantization step, Vref/2^N) is spread over a double frequency interval; therefore, the amount
of noise power that falls in the band of
interest halves. Noise shaping does
better: as shown in Fig. 8 the noise in
the band of interest is quite low even
if we have much more noise outside.
Therefore, a digital filter that rejects
the out-of-band noise permits us to
achieve a pretty good SNR. Remembering that the number of bits, n, and the SNR are related by

SNR = 6.02 · n + 1.76 dB        (2)
an increase of the SNR leads to more
bits of accuracy.
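A few lines of Python make the bookkeeping of Eq. (2) and of plain oversampling explicit; an ideal quantizer is assumed and no noise shaping is included in this sketch.

```python
import math

# Back-of-the-envelope numbers implied by Eq. (2) and the oversampling argument
# in the text (ideal quantizer assumed; no noise shaping in this sketch).
def snr_db(n_bits):
    return 6.02 * n_bits + 1.76

def bits_from_snr(snr):
    return (snr - 1.76) / 6.02

print(round(snr_db(14), 1))            # ~86 dB for an ideal 14-bit converter
print(round(bits_from_snr(81), 1))     # an 81 dB dynamic range ~ 13.2 bits

# Plain oversampling: each doubling of fs halves the in-band quantization
# noise power, i.e. +3 dB of SNR, or about half a bit per octave.
osr = 16
print(round(10 * math.log10(osr), 1))  # 12 dB extra -> roughly 2 additional bits
```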
Sigma delta modulators produce
noise shaping by including the quantization into a feedback loop. Figure 9
shows the generic configuration of a
sigma delta. With a linear model an
additive term n(z) represents the quantization error. Thus, in the circuit we
have two inputs: the input signal and
the quantization error. The design target is to leave unchanged the signal
while shaping the quantization error.
By inspection of the circuit we derive
the signal transfer function and the
quantization-error transfer function
Y(z) = S(z)·u(z) + Q(z)·n(z)                                          (3)

S(z) = H1(z) / [1 + H1(z)·H2(z)];    Q(z) = 1 / [1 + H1(z)·H2(z)]     (4)
A proper choice of the transfer
functions H1 and H2 (we normally set
H2 to 1) achieves the target. A huge
number of papers deal with the definition and the design of modulator architecture. Designers of converters for
communication systems will take advantage of that huge previous work [12].
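As a minimal sketch of the loop of Fig. 9, the Python code below implements a first-order modulator with a 1-bit quantizer (H1 an integrator, H2 = 1, consistent with the choice mentioned above); it is only meant to show that the in-band value is preserved while the quantization error is pushed to high frequencies.

```python
import statistics

def sigma_delta(x):
    """First-order modulator: integrate the error, 1-bit quantize, feed back."""
    integ, y_prev, out = 0.0, 0.0, []
    for sample in x:
        integ += sample - y_prev               # loop filter H1: discrete integrator
        y = 1.0 if integ >= 0 else -1.0        # 1-bit quantizer (adds the error n)
        out.append(y)
        y_prev = y                             # feedback DAC (H2 = 1)
    return out

bits = sigma_delta([0.3] * 10_000)             # constant input inside (-1, 1)
print(statistics.mean(bits))                    # close to 0.3: the signal is preserved,
                                                # the quantization noise is shaped away
```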
High Speed DAC Architectures
The speed of analog systems is limited mainly by the bandwidth of the amplifiers used. Thus, all the DACs
operating at very high clock rate don’t
use op-amps or amplifiers. It is possible to achieve digital-to-analog conversion without active elements by
using the current steering approach
[13]. Equal (or binary ratioed) current
sources are properly switched toward
the matching resistance or a dummy
resistance. Figure 10 shows n-channel
cascode current sources switched by
simple differential pairs. More complex architectures can be envisaged.
Nevertheless, the technique is quite
simple; possible design problems
come from current-source matching inaccuracy and the consequent need to use special techniques to achieve
mismatch compensation. Moreover, a
non-simultaneous switching of current
sources produces glitches in the output
voltage. This deteriorates the converter
performance. Many papers discuss
how to face the above mentioned issues. Readers can refer, for example,
to [14],[15]. The area consumption of
a current steering architecture can be
quite small: it is possible to achieve a
10-bit converter in only 0.6 mm2 [16].
Moreover, it is possible with modern
technologies to combine the digital-to-analog conversion and digital processing as outlined in Fig. 3 [17].
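A behavioral Python sketch of a binary-weighted current-steering DAC with random source mismatch is shown below; the unit current, mismatch sigma and load resistance are illustrative assumptions, and real converters add the switching and glitch effects discussed above.

```python
import random

# Sketch of a binary-weighted current-steering DAC with random source mismatch
# (the unit current, mismatch sigma and load resistance are assumptions).
def current_steering_dac(code, n_bits=10, i_unit=1e-6, sigma_mismatch=0.01, r_load=1e3):
    """Return the output voltage for an n_bits input code.

    Each bit k steers a nominally 2**k unit current source either to the
    load resistor or to a dummy branch; mismatch perturbs each source.
    """
    i_out = 0.0
    for k in range(n_bits):
        i_source = (2 ** k) * i_unit * (1 + random.gauss(0, sigma_mismatch))
        if (code >> k) & 1:              # switch this source toward the output
            i_out += i_source
    return i_out * r_load

print(current_steering_dac(0b1000000000))   # ideally 512 uA * 1 kOhm = 0.512 V
```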
Conclusions
We have seen that the tremendous
pace in telecommunication developments affects (and is produced by)
improvements in data converter performance. We have a combination of
technological advances and novel architectures. Typically, benefits come
from an extensive use of digital correction and calibration. Therefore, we can
talk about a new approach for achieving high performance: using “digitally
assisted” data converters.
The market already offers data
converters running at 80 MS/s with 14
bit resolutions and a SFDR of 100 dB.
The technology used is, in some cases,
specifically optimized for the application. Nevertheless, progress in CMOS
processes will soon make available
0.15-µm technologies with migration
to the 0.12-µm mark by the year 2001.
It is expected that 0.09-µm effective
channel length will be achieved by
2001 and 0.07-µm by 2004. If these
features are combined with a suitable
defense from spur interference, 18-bit,
200 MS/s data converters capable of
answering a wide spectrum of telecommunication needs will be on the
market by the first decade of the next
century.
Figure 10. Principle of operation of the current-steering DAC.
Franco Maloberti
Texas A&M University, Dept. of Electrical Eng.
318B WERC, College Station, TX 77843
Phone: 845-7160; Fax: 845-7161
E-mail: franco@ee.tamu.edu

References
[1] B. Brannon, D. Efstathiou, T. Gratzek, “A
Look at Software Radios: Are They Fact
or Fiction?”, Electronic Design, pp. 117–
122, December 1998.
[2] F. Maloberti, “High Speed Data Converters and New Telecommunication Needs”,
IEEE 1999, Tucson, Arizona, pp. 147–
150, April 11–13, 1999.
[3] B. Schweber, “Converters Restructure
Communication Architectures”, EDN, pp.
51–64, August 1995.
[4] Fujitsu Microelectronics Europe,
MB86061 Datasheet, January 2000.
[5] C. B. Dietrich, W. L. Stutzman, “Smart
Antennas Enhance Cellular/PCS Performance”, Microwave and RF, pp.76–86,
April 1997.
[6] H. Samueli, “Broadband Communication
IC: Enabling High-Bandwidth Connectivity in the Home and Office”, IEEE-International Solid State Circuit Conference,
pp.26–30, 1999.
[7] R. H. Joshi, et al., “A 52 Mb/s Universal
DSL Transceiver IC”, IEEE-International
Solid State Circuit Conference, pp.250–
251, 1999.
[8] B. Brandt, J. Lutsky, “A 75 mW 10b
20MS/s CMOS Subranging ADC with 59
dB SNDR”, IEEE-International Solid
State Circuit Conference, pp.322–323,
1999.
[9] G. Hoogzaad, R. Roovers, “A 65mW 10b
40MS/s BiCMOS Nyquist ADC in
0.8mm2”, IEEE-International Solid State
Circuit Conference, pp.320–321, 1999.
[10] A. M. Abo, P. R. Gray, “A 1.5-V, 10-bit,
14.3-MS/s CMOS Pipeline Analog-toDigital Converter”, IEEE Journal of Solid
State Circuits, vol. 34, pp.599–606, 1999.
[11] I. E. Opris, L. Lewicki, B. Wong, “A
Single-Ended 12 b 20 M Sample/s SelfCalibrating Pipeline A/D Converter”,
IEEE Journal of Solid State Circuits, vol.
33, pp.1898–1903, 1998.
[12] J. C. Candy, G. C. Temes, “Oversampling
Delta-Sigma Data Converters”, IEEE
Press, 1992.
[13] A. Cremonesi, F. Maloberti, “A 100 MHz
CMOS DAC for Video Graphic Systems”, IEEE Journal of Solid State Circuits, vol. 24, pp. 635–639, 1989.
[14] A. Marques, J. Bastos, A. Van der Bosh,
J. Vandenbussche, M. Steyaert, W.
Sansen, “A 12b Accuracy 300 MSample/s Update Rate CMOS DAC”, IEEE-International Solid State Circuit Conference,
pp.216–217, 1998.
[15] A. R. Bugeja, B. S. Song, P.L. Rakers, S.F.
Gillig, “A 14-b, 100MS/s CMOS DAC
Designed for Spectral Performance”,
IEEE Journal of Solid State Circuits, vol.
34, pp. 1719–1732, 1999.
[16] C. H. Lin, K. Bult, “A 10b 500 MSample/s CMOS DAC in 0.6 mm2”, IEEE Journal of Solid State Circuits, vol. 33, pp.
1948–1958, 1998.
[17] D. Domanin, U. Gatti, P. Malcovati, F.
Maloberti, “A Multipath Polyphase Digital-to-Analog Converter for Software
Radio Transmission Systems”, IEEEISCAS, pp. II 361–II 364, 2000.
Franco Maloberti received the Laurea Degree in physics (Summa cum Laude) from the University of Parma, Italy, in 1968 and the
Doctorate Honoris Causa in electronics from the Instituto Nacional de Astrofisica, Optica y Electronica (Inaoe), Puebla, Mexico, in 1996.
He is currently professor of electrical engineering at Texas A&M University, College Station, Texas, where he holds the TI/J. Kilby
Chair. He joined the University of L’Aquila in 1968, and the University of Pavia from 1967 to 2000. Presently he is on leave from the University of Pavia. In 1993 he was visiting professor at ETH-PEL, Zurich working on electronic interfaces for sensor systems. His professional
expertise is in the design, analysis, and characterization of integrated circuits and analog digital applications, mainly in the areas of switched
capacitor circuits, data converters, interfaces for telecommunication and sensor systems, and CAD for analog and mixed A-D design.
Professor Maloberti has written more than 230 published papers, two books, and holds 15 patents. He was 1992 recipient of the XII
Pedriali Prize for his technical and scientific contributions to Italian industrial production. He was co-recipient of the 1996 Institution of Electrical Engineers (U.K.) Fleming Premium. He received the 1999 IEEE CAS Society Meritorious Service Award, the 1999 CAS Society Golden
Jubilee Medal and the IEEE Millenium Medal. He was associate editor for the IEEE Transactions on Circuit and System—II. He is former
IEEE CAS vice president of Region 8. He is the president-elect of the IEEE Sensor Council. He is a Fellow of IEEE and a member of AEI.
Technical Committees
DSP within
Circuits and Systems
by Paulo S. R. Diniz*
2000 Chair, CAS Technical Committee on Digital Signal Processing
E-mail: diniz@lps.ufrj.br
*On behalf of the DSP Technical Committee.
Abstract—It has been widely recognized that a large number of activities related to the digital signal processing
(DSP) field have started in the Circuits and Systems (CAS)
Society. In fact, the CAS Society originated many important new Societies within the IEEE, including the sister
Signal Processing Society. This article discusses some
milestones in the digital signal processing field and their
links with the activities of the Circuits and Systems (CAS)
Society. Some critical analysis of the current and future
trends in digital signal processing and the corresponding
role of the CAS Society in their development are presented.
The article also discusses engineering education in the
DSP area, which has become challenging due to its potential application in a variety of fields.
Technical Coverage of the CAS Society
From the point of view of some experts [1], the field
of electrical engineering (EE) consists of two major
streams, namely, applied physics and applied mathematics. We can cite device modeling and semiconductor manufacturing as examples of activities in the stream of applied
physics. In the stream of applied mathematics we can cite
circuit analysis and design, and, for certain, digital signal
processing. Both streams have historically found room in
the CAS Society.
Since its conception in 1952, the CAS Society has been a forum for publication and discussion of topics covering a
broad spectrum of subjects related to electrical engineering. In the fifties we witnessed the publication of many
articles dealing with network synthesis, information theory,
communications, electronic circuits, and feedback circuits,
among others. Over the years the CAS Society has evolved
to include many different topics and their applications,
which these days range from classical fields such as circuit theory, feedback circuits and systems, system stability, distributed circuits and systems, nonlinear circuits and
systems, analog filters (continuous-time passive and active), to the more recently developed fields of digital filters, analog integrated filters (continuous-time, and discrete-time or switched), analog and digital VLSI circuits,
computer-aided design, accurate modeling of devices, lowpower circuits, adaptive systems, multidimensional circuits
and systems, video technology, and neural networks among others [2]. The open mind and tolerance of the CAS Society have allowed the birth of many new Societies within the IEEE,
including the Signal Processing and Solid State Societies,
and the co-sponsorship of many important technical publications such as the Transactions on Neural Networks,
VLSI, and Fuzzy Logic just to name a few. In addition, the
CAS Society sponsors and co-sponsors a large number of
conferences, symposia and workshops around the world,
among which the International Symposium on Circuits and
Systems (ISCAS) represents a synthesis of them all.
Technological Changes
With the advent of very large scale integration (VLSI)
of circuits, millions of transistors could fit in a single silicon chip, allowing compact and reliable implementation
of sophisticated systems. The recent development of microelectronics technology has had a great impact on our
modern life, making available electronic equipment such
as mobile phones, computers, and video and audio systems
at a reasonable cost. This development occurred in parallel with advances in the technology of transmission, processing, recording, reproduction, and general treatment of
signals through analog and digital electronics, as well as
other means such as acoustical, mechanical, optical, and
so forth.
In particular, the field of Digital Signal Processing
(DSP) has developed very fast in the last decades and has
been incorporated into a large number of products, being
also the subject of many engineering curricula not restricted to electrical engineering. It is very common to see
DSP as part of the mechanical engineering core to deal with
vibration and acoustics studies, in biomedical engineering
where much equipment incorporates sophisticated signal
processing algorithms, in industrial engineering where
many forecast analyses are based on DSP tools, and elsewhere.
If we consider that digital signal processors (DSPs) are
now widely available at low cost, and powerful computer
packages incorporating tool boxes for DSP are within
reach, teaching basic DSP tools is not a major challenge.
This fact led to the widespread use of digital signal processing in a number of different areas.
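As an illustration of how accessible such toolboxes have made basic DSP, the minimal sketch below designs and applies a simple low-pass FIR filter. It assumes Python with NumPy and SciPy standing in for the "toolbox"; the sampling rate, filter length, and cutoff are arbitrary values chosen for the example, not figures taken from this article.

```python
# Minimal sketch: design a low-pass FIR filter and apply it to a noisy tone.
# (Assumes NumPy and SciPy are installed; all numeric values are illustrative.)
import numpy as np
from scipy import signal

fs = 8000                                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)                # one second of samples
x = np.sin(2 * np.pi * 440 * t)                # a 440 Hz tone
x = x + 0.5 * np.random.randn(len(t))          # plus additive noise

taps = signal.firwin(64, cutoff=1000, fs=fs)   # 64-tap FIR, 1 kHz cutoff
y = signal.lfilter(taps, 1.0, x)               # filtered output

print("first taps:", taps[:4])
print("first outputs:", y[:4])
```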
Milestones and Current Trends
The concept of digital signal processing started in the
late forties and developed further in the fifties with the
special objective of providing simulation tools for analog
systems. These tools were implemented on general purpose
computers, and consisted of basic interpolation, differentiation, prediction and other numerical analysis tasks.
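For readers who want a concrete picture of those early numerical tasks, here is a purely illustrative sketch, written in present-day Python rather than the tools of the period, of discrete-time differentiation and interpolation applied to a sampled waveform; the signal and rates are invented for the example.

```python
# Illustrative sketch of basic numerical tasks on a sampled signal.
# (All values are invented for the example.)
import numpy as np

fs = 100.0                            # assumed sampling rate in Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 2.0 * t)       # samples of a 2 Hz "analog" waveform

dx = np.diff(x) * fs                  # first-difference approximation of dx/dt
t_fine = np.arange(0.0, t[-1], 1.0 / (4.0 * fs))
x_interp = np.interp(t_fine, t, x)    # linear interpolation onto a 4x finer grid

print("derivative estimate:", dx[:3])
print("interpolated samples:", x_interp[:3])
```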
In the sixties, the availability of better computers made
the use of digital signal processing feasible for certain real-time applications. It was during this decade that the names
digital filters and digital signal processing became popular and many new techniques and applications became
available. Methods for digital filter design, the fast Fourier transform, and speech coding and synthesis are examples
of important breakthroughs experienced in the sixties.
During the seventies the field of digital signal processing matured, becoming a very dynamic field as new applications and design tools were proposed at a very fast pace.
Major contributions to Finite and Infinite Impulse Response (FIR and IIR) filter approximation, realization and
implementation were proposed. In addition we witnessed
significant developments such as: introduction of other
types of discrete transforms, digital architectures specialized for DSP implementation, digital processing of speech signals, and early experiments in image processing, just to mention a few. It was also during this decade that the first comprehensive textbooks dealing with digital signal processing and its applications appeared in the market.
The advent of VLSI brought about a revolution in the electronics industry, widening the applications where electronic circuits could be used as the means to implement sophisticated systems. This made available in the early eighties the first generation of digital signal processors (DSPs). These processors were designed to perform efficiently some common tasks encountered in DSP algorithms, such as inner products and FFTs. The DSPs, the faster general-purpose computers, and the programmable logic array (PLA), along with the application-specific integrated circuit (ASIC), form the family of hardware available to the DSP system designer. Certainly the DSP is the most popular machine used to implement DSP systems, due to its powerful computational capability and decreasing cost. Also, most DSPs available on the market come with comprehensive development tools, making the designer's life much easier by reducing the time from a product's conception to its full implementation. The DSPs boosted the development of new concepts and applications of the DSP field in the eighties and nineties. We can cite: time-frequency analysis including wavelets and filter banks, real-time implementation of adaptive systems, acoustic echo cancellation, image and video compression, audio signal compression and restoration, implementation of nonlinear systems such as neural networks, and many others.
More recently, mixed-mode analog-digital circuits, low-power circuits, and the currently available DSP technology are used in a large number of applications that range from disk drives for computers, CD players, modern TVs, telephony systems, and data networks, to watches, home appliances, robots, and biomedical instruments. The conception and design of such systems requires a deep knowledge of circuits and systems. Without this knowledge it is doubtful that the electrical and computer engineers of the future will have enough skills to face new challenges.
[Cartoon: "The Adventures of the 'Umble Ohm," by Shlomo Karni. Caption: "02–02–2006: The newest, most powerful chip is unveiled. Commercial logo: '— ligent outside'. Runs at 999 GHz, and also makes coffee."]
Future Trends
If one recalls that the world is analog, even if we are able to design perfect digital computers, at some point the information produced by
the computers has to interface with the external world. Therefore, very good digital-to-analog and analog-to-digital converters are essential. Someone has to design, specify and understand how they work. Internally the digital machines carry
analog signals since the ideal pulses representing zeros and
ones are only abstractions. As a consequence, there is always a need for interplay between the analog and digital domains. However, digital signal processing provides us with
the alternative that, whenever possible and necessary, we
can solve “part” of our real world design problem in the
discrete-time domain. DSP adds flexibility and reliability
to part of the system, but is by no means the sole solution.
In our view mixed-mode analog continuous-time circuits
along with digital signal processing circuits and systems
will play a growing role in forthcoming systems, if their
interrelationships are properly explored. It should be noted
that the implementation of discrete-time systems with analog circuits is also possible by using for example switched
capacitor, switched current or charge coupled technologies.
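As a brief reminder of why a switched capacitor can stand in for a resistor in such discrete-time analog circuits, the standard textbook relation is sketched below (this derivation is a general fact, not something stated in the article): a capacitor C toggled between two nodes at clock rate f_c moves a packet of charge every period, and the resulting average current behaves like that through an equivalent resistor.

```latex
% Charge transferred per clock period and the equivalent resistance
% of a capacitor C switched at rate f_c between nodes at V_1 and V_2:
\[
  q = C\,(V_1 - V_2), \qquad
  I_{\text{avg}} = q\,f_c = C f_c\,(V_1 - V_2), \qquad
  R_{\text{eq}} = \frac{V_1 - V_2}{I_{\text{avg}}} = \frac{1}{C f_c}.
\]
```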
In the next generation of electronic systems, many
more systems will certainly incorporate digital signal processors. We will witness the continuing trend of having
available more powerful, lower-cost, and lower-power digital signal processors. There will also be a growing number of systems on chips.
Among those systems we can cite the mobile phones
with data communication and internet service facilities,
high-definition television, CD quality digital radio, local
area wireless computer networks, medical image processing, and many others. In all these systems the current trend of digitizing the information as soon as possible will continue; therefore, digital signal processing will
play a major role in the future electronic systems.
Many of these systems will include microsensors (for
example to acquire images), mixed-mode (analog and digital signal processing) circuits, and will in many cases utilize concurrent and distributed processing. Also self-learning or self-designing circuits and systems will be more frequently applied in future systems.
To design these future systems, companies should
have a team of engineers with depth in a few topics, so that
they can recognize the limits of knowledge in
those topics and see how they can be extended and used in new applications. For the overall team of
electrical and computer engineers, a deep knowledge of
circuit analysis from Kirchhoff’s laws to computer aided
analysis of nonlinear circuits, linear control, analog and
digital communications, electric machines, semiconductor
devices, electronic circuit design, signal processing, computer programming, computer architecture, and electromagnetics on top of basic sciences and mathematics is an
essential core to face any challenge in designing these systems. The CAS Society is certainly one of the few forums
where one can find engineers whose expertise covers such
a vast range of areas.
Therefore, if we also take into account that reducing the time
between the conception of scientific knowledge
and its use in technological innovation is key for a company to compete in the international market, companies badly need skillful electrical and computer engineers.
Educational Challenges
The current trend is that electrical engineers are mere
users of circuits, and superficially understanding them is
enough for their career. As Guillemin said in 1957 [3], the
interest is in learning “how things go”, and a superficial
knowledge of “why things are that way”. We share his opinion that a tool must be well understood to be most useful,
otherwise things can go very wrong.
There are a number of reasons that could be presented
to explain the current lack of depth and thoroughness in dealing with some electrical and computer engineering fields. Some of them are:
• These days we are very familiar with computers and
the fast result they provide to a number of interesting
problems. Sometimes all it takes is to strike the mouse
and the keyboard a few times. As a result, it is a challenging job for engineers to keep themselves motivated
and interested in acquiring deep information about
some subjects, especially if this knowledge has little
immediate practical use.
• The current trend is learning a number of topics in
a shallow manner, whereas detailed learning is postponed
to a later stage.
• Thoroughness is commonly avoided or postponed
because it requires competence,
patience, and experience on the part of the learner.
The CAS Society promotes a number of activities
where students, practicing engineers and researchers can
share their wide ranging knowledge in a cooperative manner. This is certainly a major goal of the Digital Signal Processing Technical Committee of the CAS Society, a committee in charge of dealing with such a multidisciplinary
subject which finds applications in a large number of areas.
Conclusions
In this article we discussed the historical role of the
Circuits and Systems Society in the development of the
digital signal processing field.
Considering the previous history and future challenges,
we feel that the CAS Society, through its numerous technical activities, will continue to provide seeds for future
breakthroughs in theory and applications of digital signal
processing techniques. It is also clear that the DSP field is
not close to saturation and that there is plenty of room for
further developments.
References
[1] Y.-F. Huang, Private Communication. University of Notre Dame,
1999.
[2] W. K. Chen, The Circuits and Filters Handbook. IEEE Press and
CRC Press: Salem, MA, 1995.
[3] E. A. Guillemin, Synthesis of Passive Networks. John Wiley &
Sons: New York, NY, 1957.
Paulo S. R. Diniz received the B.Sc.
degree from the Federal University of Rio
de Janeiro (UFRJ) in 1979, the M.Sc. degree from COPPE/UFRJ in 1981, and the
Ph.D. from Concordia University,
Montreal, P.Q., Canada, in 1984, all in
electrical engineering. Since 1979 he has
been with UFRJ, where he is presently
professor. He served as undergraduate
course coordinator and as chairman of the
Graduate Department. He is one of the
three senior researchers and coordinators
of the National Excellence Center in Signal Processing, and since 1995 has been a member of the advisory committee of CNPq.
People on the Move
Sung-Mo (Steve) Kang
New Engineering Dean at
UC, Santa Cruz
The University of California,
Santa Cruz, has named
Sung-Mo (Steve) Kang as the
new dean of the Baskin School of
Engineering, starting in January
2001. Kang previously served as
professor and department head of
electrical and computer engineering at the University of Illinois,
Urbana-Champaign.
Dr. Kang received the Ph.D.
degree in electrical engineering
from UC Berkeley in 1975. Until 1985, he was with AT&T Bell Laboratories and also served
as a faculty member of Rutgers University. In 1985, Kang
joined the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign (UIUC),
where he later served as department head.
Looking forward to his new role at UCSC, Kang noted
the importance of maintaining existing strengths while expanding the engineering programs.
“I am challenged and most excited for the great opportunity to work with faculty and staff to make UC Santa Cruz
engineering programs of highest quality and at the same time
expansive,” Kang said.
At Urbana-Champaign, Kang was affiliated not only with
the Department of Electrical and Computer Engineering, which
he headed, but also with the Department of Computer Science,
the Beckman Institute for Advanced Science and Technology,
and the Coordinated Science Laboratory in the College of
Engineering.
Kang’s research interests include computer chip design (specifically, methods of very large-scale integration, or VLSI) and
optimization of chip design for performance, reliability, and
manufacturability; modeling and simulation of semiconductor
devices and circuits; high-speed optoelectronic circuits; and fully
optical network systems. He holds six patents, has published more
than 300 technical papers, and has coauthored eight books.
Kang is a fellow of the Institute of Electrical and Electronics Engineers (IEEE) and of the American Association for the
Advancement of Science (AAAS). He is a foreign member of
the National Academy of Engineering of Korea and the recipient of numerous national and international awards and honors,
including the Semiconductor Research Corporation Technical
Excellence Award in 1999. Kang was a visiting professor at the
Swiss Federal Institute of Technology at Lausanne in 1989 and
visiting Humboldt Professor at the University of Karlsruhe in
1997 and at the Technical University of Munich in 1998. He has
served as a member of the Board of Governors, secretary and
treasurer, administrative vice president, and 1991 president of
the IEEE Circuits and Systems Society. He was the founding
editor-in-chief of the IEEE Transactions on VLSI Systems and
has served on the editorial boards of several other journals.
Thomas Kailath among
60 New US NAS Electees
Professor Thomas Kailath
was among the 60 new
members elected to the US National Academy of Sciences in
2000. Election to the NAS recognizes “distinguished and continuing achievements in original research”. He has been a member
of the National Academy of Engineering since 1984 and of the
American Academy of Arts and
Sciences since 1993. Professor
Kailath is well known to the Circuits and Systems Society for his book on Linear Systems,
Prentice Hall, 1980, and several papers in our transactions on
stability theory, adaptive filtering, orthogonal digital filters and
VLSI implementations. Most recently he delivered a Plenary
Lecture at ISCAS 2000 in Geneva, speaking on: “Breaking the
0.1µm Barrier in Optical Microlithography via Signal Processing”. In Geneva, Kailath also received a Golden Jubilee Medal
of the CAS Society and an IEEE Third Millennium Medal. He
had been awarded the CAS Education Medal in 1993, followed
by the IEEE Education Medal in 1995.
Dr. Kailath recently co-authored the volume: Linear Estimation, Prentice-Hall, 2000 (with A. Sayed and B. Hassibi) and
coedited with Ali Sayed a SIAM monograph on Fast Algorithms
for Structured Matrices. Kailath is a Distinguished Editor of the
Journal of Linear Algebra and its Applications and of the Journal on Integral Equations and Operator Theory. He received the
Claude E. Shannon Award of the Information Theory Society,
which he served as President in 1975, and an honorary doctorate from the University of Carlos III, Madrid, Spain (with earlier ones from Linkoping University, Sweden, and Strathclyde
University, Scotland).
New Editors Begin Work
January 2001 for CAS Society
Eby Friedman, Editor-in-Chief
IEEE Transactions on VLSI Systems
Eby G. Friedman received the B.S. degree from Lafayette
College in 1979, and the M.S. and Ph.D. degrees from the University of California, Irvine, in 1981 and 1989, respectively, all
in electrical engineering.
From 1979 to 1991, Dr.
Friedman was with Hughes Aircraft Company, rising to the position of manager of the Signal
Processing Design and Test Department, responsible for the design and test of high performance
digital and analog IC’s. He has
been with the Department of
Electrical and Computer Engineering at the University of
Rochester since 1991, where he
is professor, director of the High Performance VLSI/IC Design
and Analysis Laboratory, and director of the Center for Electronic
Imaging Systems. His current research and teaching interests are
in high performance synchronous digital and mixed-signal microelectronic design and analysis with application to high speed
portable processors and low power wireless communications.
Dr. Friedman is the author of more than 135 papers and book
chapters and the author or editor of four books in the fields of
high speed and low power CMOS design techniques, interconnect and substrate noise, pipelining and retiming, and the theory
and application of synchronous clock distribution networks. He
is regional editor of the Journal of Circuits, Systems, and Computers, on the editorial board of Analog Integrated Circuits and
Signal Processing and IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, chair of the steering committee for the IEEE Transactions on Very Large Scale
Integration (VLSI) Systems, CASS Distinguished Lecturer, member of the IEEE CAS Society Board of Governors, CAS liaison
to the IEEE Solid-State Circuits Society (SSCS), and member
of the technical program committee of a number of conferences.
Chung-Yu Wu, CAS Editor
IEEE Circuits and Devices Magazine
Chung-Yu Wu was born in
Chiayi, Taiwan, Republic of
China, in 1950. He received the
M.S. and Ph.D. degrees from the
Department of Electronics Engineering, National Chiao-Tung
University, Taiwan, in 1976 and
1980, respectively.
From 1980 to 1984 he was
associate professor in the National Chiao-Tung University.
During 1984–1986, he was visiting associate professor in the Department of Electrical Engineering, Portland State University,
Oregon. Since 1987, he has been professor at National Chiao-Tung University. From 1991 to 1995 he served, on rotation, as
director of the Division of Engineering and Applied Science in
the National Science Council. Currently, he is the Centennial
Honorary Chair Professor at the National Chiao-Tung Univer-
sity. He was awarded the Outstanding Research Award by the
National Science Council in 1989, 1995, and 1997, the Outstanding Engineering Professor by the Chinese Engineer Association
in 1996, and the Tung-Yuan Science and Technology Award in
1997. He has published more than 200 technical papers on several topics, including analog neural integrated circuits, mixed-mode integrated circuits, ESD protection circuits, special semiconductor devices, and process technologies. He also has 18 patents, including 9 US patents. His current research interests focus
on low-voltage, low-power mixed-mode integrated circuit design, hardware implementation of visual and auditory neural systems, and RF integrated circuits design.
Michael Sain, Founding Editor-in-Chief
IEEE Circuits and Systems Magazine
Michael K. Sain received the
B.S. and M.S. in Research, from
Saint Louis University, and the
Ph.D. from the University of Illinois, Urbana, in 1965. He then
joined the University of Notre
Dame, where he has held since
1982 the Frank M. Freimann
Chair in Electrical Engineering,
Notre Dame’s first endowed chair
established in 1971. He has been
visiting scientist in Canada at the
University of Toronto, and University distinguished visiting professor at Ohio State University.
Most recently, Mike served the CASS as editor of the IEEE
Circuits and Systems Society Newsletter, which he founded and
designed. Under his direction, the CAS Newsletter comprised 44
issues over 11 years, passing from 16 pages in black-and-white
to 48 pages in full color, and adding up to 952 pages overall. In
previous years, on the executive side of things, Mike served the
CAS Society for two years as vice-president for Technical Activities and for two years as vice-president for Administration.
Twice nominated for president of the Society, he is honored to
have ranked second to John Choma and to Rui de Figueiredo.
An IEEE Fellow, a recipient of the IEEE Centennial Medal,
and recently a CAS Golden Jubilee Awardee, Mike has served
as consultant to General Motors Corporation, Garrett Corporation, Deere and Company, and Allied-Signal/Bendix Aerospace.
Mike is the author or co-author of more than three hundred
reviewed technical publications, and has been the director or codirector on some fifty funded research projects. On the industrial interface, with Joseph L. Peczkowski, Mike developed practical design methods, based upon nonlinear inverses, for operation of modern gas turbine aircraft engines. These ideas have also
had a modern automotive implementation. Presently, Mike is involved with digital installations to protect buildings, bridges, and
off-shore installations from earthquakes, winds, and waves, especially with dampers based upon magnetorheological technology, and is co-director of the Earthquake Engineering Laboratory at the University of Notre Dame.
Executive and Board
CAS NOMINATION FORM (Revised 2001)
Check the CAS Society homepage at http://www.ieee-cas.org
for information on how to submit nominations electronically.
CAS Office:
Member of the Board
of Governors
President-Elect
Vice President,
Technical Activities
Vice President,
Conferences
Vice President,
Region 8
Vice President,
Region 9
The Candidate has agreed
to serve if elected
Nomination by Petition:
30 valid names and addresses
(mail, phone, fax, e-mail), with
signatures, are attached!
Candidate:
Name
Address
Phone
Fax
Nominator:
Name
Address
Phone
Fax
CAS Society Call for Nominations
Please e-mail, mail or fax this form to:
George S. Moschytz, CAS Nominations Committee
c/o Barbara Wehner, Administrative Office
15 W. Marne Avenue, P.O. Box 265
Beverly Shores, IN 46301–0265
Tel.: (219) 871–0210
Fax: (219) 871–0211
E-mail: b.wehner@ieee.org
Board of Governors
Each year, five members of the CAS Society are elected to the Board of Governors for
3-year terms. The Board of Governors shall
represent the members of the Society and approve the Society’s annual budget, amendments to the Constitution and Bylaws, and authorize the expenditure of Society funds.
Members of the Board of Governors should
not miss annual meetings (at ISCAS in May/
June and ICCAD in November) more than two
times consecutively. Nominations by petition
from the Society membership must include at
least 30 signatures of Society members, excluding students and affiliates. Upon receipt
of nominations, the Nominations Committee
will submit at least eleven candidates for Society-wide election of the five Board members.
If you wish to nominate a member to the
Board of Governors, then, after obtaining the
consent of the nominee to serve if elected,
please check the CAS Society website at
http://www.ieee-cas.org for instructions to
submit a nomination electronically, or fax the
name of the nominee, address, telephone, fax
number, and e-mail address, if available, to the
CAS Nominations Committee Chair using the
form to the left. Board of Governors nominations must be received by June 1, 2001.
Society Officers
Each year, members of the Board of Governors elect President-Elect and Vice Presidents for the Circuits and Systems Society. The CAS Bylaws have provision for nominations from Society members by written petition with at least 30 members’
signatures, excluding students and affiliates.
In order to stagger the terms for the officers, this year’s election includes two-year terms for the offices of Vice President of
Conferences, Technical Activities, Region 8, and Region 9. The election last year began two-year terms for the offices of Vice
President of Administration, Publications, Regions 1–7, and Region 10 which are not, therefore, included on this year’s ballot.
If you wish to nominate a member to any of the open CAS offices, then, after obtaining the consent of the nominee to serve
if elected, please check the CAS Society website at http://www.ieee-cas.org for how to submit a nomination electronically, or
fax the name of the nominee, address, telephone, fax number, and e-mail address, if available, to the CAS Nominations Committee Chair using the form above. Officer nominations must be received by September 1, 2001.
Electronic submissions are strongly encouraged.
Constitution and Bylaw Changes
K. Thulasiraman, Chair, CAS Society Constitution & Bylaws (C&BL) Committee, announces that the Board of Governors has recently approved changes to the Society’s Constitution and Bylaws. Interested readers may check the following
URLs: http://www.ieee-cas.org/html/constitution.html and http://www.ieee-cas.org/html/bylaws.html.
Meetings
First IEEE SAWCAS Report
Rio de Janeiro, Brazil, November 20–22, 2000
Bahía Blanca, Argentina, November 22–24, 2000
General Overview
The First South-American Workshop on Circuits and Systems (First IEEE SAWCAS) was organized by the Argentine
and Rio de Janeiro (Brazil) Chapters of the IEEE Circuits and
Systems Society. The organizing committee consisted of Juan
E. Cousseau, vice president of Region 9, Circuits and Systems
Society; Pablo S. Mandolesi, chapter chair, Argentina; and
Sergio L. Netto, chapter chair, Rio de Janeiro, Brazil. The initiative consisted of several plenary lectures, at tutorial
level, delivered by specialists in the areas of circuits, systems
and communications. These tutorials were addressed to graduate and undergraduate students, and to professionals working in communications, circuits, and systems applications.
As an evolution of previous experiences of CAS Tour I
(Buenos Aires—Bahía Blanca—Rio de Janeiro) and II
(Florianópolis and Bahía Blanca), 1998 and 1999 respectively,
the First IEEE SAWCAS was held on consecutive days at the
Federal University of Rio de Janeiro and at the Universidad
Nacional del Sur, Bahía Blanca, Argentina.
Indeed, for the first time, the initiative included some technical sessions in the areas of circuits, systems and communications. The technical sessions were held in both meeting
places, Rio de Janeiro and Bahía Blanca. This form of technical session presentation was selected in order to provide a more
interactive atmosphere
between authors and attendees in both places.
The invited lecturers and their respective
conferences were: Prof.
Yih-Fang Huang, Dept.
of Electrical Engineering, University of Notre
Dame, USA—Title:
Adaptive Set-Membership Filtering - Equalization and MAI Mitigation; Prof. Carlos H.
Muravchik, Engineering Faculty, Universidad Nacional de La
Plata, Argentina—Title:
Global Positioning Systems Aspects; Ana E. F.
de Silva and Paulo H. C. V. de Castro, Globo TV Broadcasting Network Inc., Brazil—Title: Brazilian Tests on Digital Terrestrial Television Systems; Prof. Magdy Bayoumi, Univer-
sity of Southwestern Louisiana, Center for Advanced Computer Studies, Lafayette, USA—Title: VLSI Design Techniques for DSP; and Professor Tapio Saramäki, Tampere University of Technology, Signal Processing Laboratory, Tampere,
Finland—Title: Design of Digital Filters and Filter Banks by
Optimization: Applications.
Following this general overview, there are included the
details of the event at each place, as reported by the respective CASS chapter chairs, Prof. Pablo Mandolesi (Argentina)
and Prof. Sergio Lima Netto (Rio de Janeiro). Further details
can be found in the Proceedings of the workshop.
Argentina Chapter Chair Report
The IEEE SAWCAS 2000 Argentine part was held at the
Center for Basic and Applied Research, Bahía Blanca, Argentina. As an outgrowth of previous CAS Society-supported
meetings in the Region, the SAWCAS had the attention of a
very broad audience (on the order of 80 participants every day),
mostly people searching for state-of-the-art knowledge and direct discussion of the areas covered by the meeting. Especially interesting was the participation of people from
different parts of the country (Neuquen, La Plata, even from
the southern part of Brazil).
The local program included the participation of four excellent lecturers: Prof. Yih-Fang Huang, Prof. Carlos
Muravchik, Prof. Magdy Bayoumi, and Prof. Tapio Saramäki.
The amazing teaching experience of each lecturer was really
appreciated by the participants—mostly graduate and
undergraduate students. Simultaneously, small technical sessions (paper presentations) were included, which
motivated participants’ involvement with the workshop.
In addition, with the kind participation of Prof. Tapio Saramäki, a videoconference was held on the last day of the IEEE SAWCAS 2000. This videoconference was established between the local Argentine CASS chapter on one side and the IEEE Córdoba subsection on the other, represented by its chair, Prof.
Miguel Solinas, Universidad Nacional de Córdoba, Argentina.
The videoconference was a very interesting opportunity to
share motivations and exchange ideas on common research
areas. As part of this exchange of ideas, it was possible to
learn about the Circuits and Systems Society objectives,
as lectured by the VP for Region 9, Prof. Juan E. Cousseau.
In addition, some perspectives of electronic engineering
education in Argentina were introduced by Prof. Osvaldo
E. Agamennoni, dean of the Department of Electrical Engineering, Universidad Nacional del Sur; and finally, an
overview of digital signal processing and VLSI research
activities in Finland was briefly introduced by Prof. Tapio
Saramäki.
— Pablo S. Mandolesi
Argentina CASS Chapter Chair
Rio de Janeiro Chapter Chair Report
The entire IEEE SAWCAS 2000 concept was conceived
by the present vice-president of the IEEE Circuits and Systems Society in Latin America, Prof. Juan E. Cousseau, from
the Universidad Nacional del Sur, in Bahía Blanca, Argentina.
Prof. Cousseau’s idea of dividing the workshop into two
branches has motivated a great deal of integration between the
two organizing groups in Brazil and Argentina. The Rio de
Janeiro portion of SAWCAS 2000 was held on November 20–
22. The SAWCAS activities started with an informal banquet,
where most SAWCAS participants were able to enjoy the company of our special speakers in a very friendly atmosphere.
We deeply thank Prof. Magdy Bayoumi, University of Louisiana at Lafayette; Prof. Yih-Fang Huang, University of Notre
Dame; and Prof. Tapio Saramäki, Tampere University of Technology, Finland, for finding some time in their extremely busy
personal and professional schedules, and for supporting the
SAWCAS 2000 initiative in Region 9 (Latin America) of IEEE.
The banquet consisted of a delicious Brazilian barbecue
by the beautiful sight of Guanabara Bay, surrounded by world
famous landmarks such as the Sugar Loaf and the Statue of
Jesus Christ, the Redeemer. Technical activities included lec-
ture sessions by the invited speakers and presentation of 15
technical papers in the area of circuits and systems with a flavor
of “telecommunications applications” to most of them. During the Rio de Janeiro portion of the event, a lecture session
on “Brazilian Tests on Digital Terrestrial Television Systems”
was given by Ana E. F. de Silva and Paulo H. C. V. de Castro
of Globo TV Broadcasting Network Inc., emphasizing the
highly professional level of the entire workshop. Student participation at both attendance and presentation levels was very
satisfactory.
The Rio de Janeiro portion of SAWCAS 2000 was locally
organized by the Rio de Janeiro Chapter of the IEEE Circuits
and Systems Society. We would like to take this opportunity
to thank sincerely our partner institutions on this joint venture
for providing all necessary funding and technical support:
COPPE, Escola Politécnica/UFRJ, IEEE Rio de Janeiro Section, and the Signal Processing Lab (Laboratório de
Processamento de Sinais, LPS). Last, but by no means least,
we thank all participants of SAWCAS 2000 (organizers, volunteers, attendees, and presenters) for making the entire event
as successful as it was.
—Sergio L. Netto
Rio de Janeiro CASS Chapter Chair
IEEE-ICECS 2000 Report
Lebanon, December 17–20, 2000
The IEEE-ICECS 2000 was held in Lebanon from December 17–20, 2000. The conference venue was the
Portemilio Hotel in Kaslik, Jounieh, about 20 km north of the
capital, Beirut. The Organizing Committee was very happy to
host this IEEE international conference and would like to address its warmest thanks to the IEEE CAS Society and to the
IEEE Region 8, as well as the Lebanese American University
and the Ecole Polytechnique of Montreal for their great support and contribution to the success of the conference. Also,
the committee feels very much indebted to the numerous sponsors and cosponsors, program
committee members, all invited
keynote and tutorial speakers,
and to all authors and participants; their participation and invaluable collaboration have contributed to the success of ICECS
2000 and to the high standards
that were achieved. We highlight
in this short report the main
events of ICECS 2000.
The opening ceremony was
addressed by H.E. Dr. Bassel
Fuleihan, the Minister of the
Economy, representing H. E. Mr.
Rafic Hariri, president of the
Council of Ministers of Lebanon, who graciously held this
conference under his patronage.
This ceremony was attended by
over 240 people, including members of parliament, ambassadors
and distinguished guests.
The major areas of electrical and computer engineering covered by ICECS 2000 drew submissions from about
55 countries all over the world. Program Committee
members carried out all the tremendous work of paper review.
Accepted contributions were selected for presentation in a total of 47 lecture or poster sessions, either regular or special
ones. The technical program was enriched by plenary talks
given by three recognized authorities: Prof. G. De Micheli from
Stanford University, Prof. M. Ismail from Ohio State University, and Mr. C. Salameh from Hewlett Packard. It was also
complemented by the tutorial sessions that were offered during the first day of the conference. Their topics ranged from
advanced design techniques to several novel applications. In
addition, a panel discussion was organized on December 19
by Prof. M. Bayoumi from University of Louisiana at Lafayette
with the participation of Dr. Y. Dorian from Logic Vision, Prof.
H. Harmanani from LAU, Prof. M. Sawan from École
Polytechnique, and Prof. H. Jiefen from Peking University. The
panel topic was entitled: Information Technology and Developing Countries: A Divide or a Savior.
The IEEE ICECS conferences have evolved to be the most
important of the annual electronics, circuits, and systems meetings
in IEEE Region 8. The Technical Program Committee of
ICECS 2000 has assembled an excellent technical program to
provide a unique forum for the exchange of ideas and research
results. All members of the Organizing Committee worked
hard to meet the challenge by providing wonderful surroundings for our participants to discuss important technical issues
and, at the same time, through the social activities, to give them
a chance to appreciate the richness of Lebanese culture. In
addition to technical activities, several social events were organized to let participants enjoy the local hospitality and the
nice weather of Lebanon during the holiday season. A welcome
reception was held at the Portemilio Hotel on Sunday and was
followed by an evening at the Beirut city center on Monday. On Tuesday, two events were organized: a lunch and a
visit to the famous Jeita Grotto and the conference banquet at
the Riviera Hotel. On Wednesday, there was a closing remarks
session followed by a dinner at the Old Zouk-Mikael Souk
close to the Jounieh Bay.
Finally, thanks are due to the numerous volunteers who
helped with the organization, mainly the students of the Electrical and Computer Engineering Department of the Ecole
Polytechnique de Montréal, Canada, and the IEEE Student
Division of the Lebanese American University, Lebanon.
We sincerely hope that all the participants have acquired
valuable new insights from our technical program and enjoyed
their visit and stay in Lebanon.
—Mohamad Sawan, Technical Program Chair
—Abdallah Sfeir, General Chair
APCCAS 2000 Report
Tianjin, China, December 4–6, 2000
The 2000 IEEE Asia Pacific Conference on Circuits and
Systems (APCCAS 2000), the fifth in the series of biennial conventions of APCCAS, was held in the Crystal Palace
Hotel, Tianjin, China, December 4–6, 2000.
Jointly sponsored by IEEE CAS Society, Chinese Institute of Electronics (CIE), IEEE Beijing Section, IEEE CAS
Beijing Chapter and Tianjin University, in cooperation with
the National Natural Science Foundation of China, K. C. Wong
Education Foundation, Hong Kong, China, and Tianjin Association for Science and Technology, China, APCCAS 2000
attracted 216 scholars and professionals from the Asia-Pacific
regions, especially from China (Mainland), Japan, Thailand,
Taiwan, Indonesia, Singapore, Hong Kong, and the United
States. There were also some scholars from Europe. They gathered in Tianjin to exchange ideas, further technological inquiries, explore the applications and contemplate the future direction of electronic circuits and systems, especially in relation
to telecommunication.
Papers
The technical program committee of APCCAS 2000 received 360 papers submitted from 22 countries or regions. After the reviewing process, 232 papers were accepted based on the scores given by the reviewers; the acceptance rate was 64.4%. However, only 217 camera-ready papers from the accepted papers were received by the conference. With eight invited papers (four from the USA, two from Japan, one from China (Mainland), and one from Switzerland), there were altogether 225 papers from 20 countries or regions printed in the Proceedings of APCCAS 2000.
Inauguration and Plenary Sessions
The inauguration of the conference was held from 8:30–9:00, December 4. Before the inauguration, Mr. Su Liang, deputy mayor of the Tianjin Municipal Government, China, met with the president of the IEEE CAS Society, the general chair and co-chair, the Technical Program Committee (TPC) chair and co-chairs, the keynote speakers of the Plenary Sessions, and the TPC honorary members of APCCAS 2000 at 8:20, in the VIP Room, Crystal Palace Hotel. He expressed his warm welcome to the distinguished guests.
In the inauguration, Professor Yong-Shi Wu, general chair of APCCAS 2000, Mr. Su Liang, deputy mayor of the Tianjin Municipal Government, and Professor Ping Shan, president of Tianjin University, delivered welcome speeches. Dr. Bing J. Sheu, president of the IEEE CAS Society, offered congratulations on the opening of the conference and presented appreciation plaques to the general chair and co-chair and the TPC chair and co-chairs of APCCAS 2000, to express his sincere appreciation to the conference.
[Photo: General chairs, ISC chair, and keynote speakers.]
The plenary sessions opened the APCCAS 2000 technical program with four keynote technical addresses by internationally renowned experts in the field at 9:00, December 4. The keynote technical addresses were: Ernest S. Kuh, University of California at Berkeley, USA—Title: Circuit Theory and Interconnect Analysis for DSM Chip Design; Sung-Mo Steve Kang, University of Illinois at Urbana-Champaign, USA—Title: Computer-Aided Design of Mixed-Technology VLSI Systems; Leon O. Chua, University of California at Berkeley, USA—Title: Visions of Circuits and Systems: Circa 2100; and Jun-Liang Chen, Beijing University of Posts and Telecommunications, China—Title: Telecommunications Development in China. These addresses were warmly welcomed by the attendees.
[Photo: The main members of the APCCAS Organizing Committee.]
Special Sessions and Panel Sessions
From the afternoon of December 4 to the afternoon of December 6, the technical offerings of APCCAS 2000 were grouped into four special sessions and 37 panel sessions.
The four special sessions were: Recent Advances in VLSI
Layout Design; Blind System Identification; Panel Discussion
of Neural Networks; and Advances in Telecommunications.
These sessions provided a forum for the discussion of the
latest developments in established areas as well as for the exploration of new topics.
The 37 panel sessions, while covering all the topics in the
field of circuits and systems, highlighted those subjects that
focused on telecommunication. The typical titles of the panel
sessions were: Mobile Communication; Digital Communication; Telecommunication Network; Neural Networks and Their
Applications; Video Codec; Image Signal Processing; VLSI
Design and Applications; Analog Circuits; Filters, Linear and
Nonlinear CAS; Microwave Circuits and Systems; Electronic
Filters; Adaptive Signal Processing; Image Coding; Image
Recognition and Processing; Object-Based Video Compression; Circuit Simulation; Digital Circuit Design; Algorithms
for VLSI Synthesis; Systems for Image Processing and Communication; Multimedia Processing; Measurement and Analysis; Testing and Fault Tolerant Systems; HDTV, VOD and so
forth.
The presentations at APCCAS 2000 are of interest to researchers and practitioners in the related fields, and will be important for advancing electronic industries beyond the year 2000.
Social Activities
On the evening of December 3, Professor Ping Shan, president of Tianjin University, held a banquet with distinguished
representatives of the conference including the president of IEEE
CAS Society, the general chair and co-chair, the TPC chair and
co-chairs, the keynote speakers of the Plenary Sessions, and the
TPC honorary members of APCCAS 2000 to express his warm
welcome to the representatives of the conference.
A reception for general participants was held at 18:30,
December 4. After the reception, Dr. Bing J. Sheu, president
of IEEE CAS Society, and Professor Ruey-Wen Liu, China
coordinator of IEEE CAS Society, held a dinner meeting with
a number of professors from China. They gave an introduction to the IEEE and the CAS Society. The Chinese professors expressed their willingness to become IEEE members.
The conference banquet and awards recognition were held
in the evening of December 5. Professor Tony S. Ng, International Steering Committee chair of APCCAS, delivered a
speech during the banquet expressing his appreciation to the
conference organizers. Professor Yih-Fang Huang, TPC chair,
Professor Yoji Kajitani and Professor Runtao Ding, the TPC
co-chairs, presented the certificates of three Best Paper Awards
to the authors. Dr. S. Soegijoko, general chair of APCCAS
2002, which will be held in Bali, Indonesia, invited everyone
to meet again two years later. Finally, the Peiyang Music Band
of students of Tianjin University performed excellent Chinese
music, which created a pleasant and lively atmosphere for the
banquet.
These social functions have improved mutual understanding and friendship among participants.
We think this conference was a great success.
—Yong-Shi Wu
General Chair of APCCAS 2000
Awards
Message from Awards
Committee Chair
Dr. Bing Sheu, sheu@nassda.com, chair of the 2001 IEEE
Circuits and Systems Society Awards Committee, has announced the membership and procedures of this year’s committee.
The following are members of the Awards Committee:
Prof. Leon Chua, as subcommittee chair for the Van
Valkenburg Award, chua@eecs.berkeley.edu; Prof. George
Moschytz, as subcommittee chair for
the Society Education Award,
moschytz@isi.ee.ethz.ch; Prof. Fathi
Salam, as subcommittee chair for
the Technical Achievement Award,
salam@ee.msu.edu; Prof. Yih-Fang
Huang, as subcommittee chair for
the Meritorious Service Award,
huang.2@nd.edu; Prof. Eby Friedman, as subcommittee chair for the
Industrial Pioneer Award, friedman
@ee.rochester.edu; Prof. M.N.S. Swamy, as subcommittee chair for
the Prize Paper Awards, swamy@ece.concordia.ca; and Prof.
Josef Nossek, as subcommittee chair for the Chapter-of-the-Year Award, nossek@ei.tum.de.
[Photo: Bing Sheu]
The Prize Paper Awards Subcommittee includes the following members: Prof. M.N.S. Swamy to coordinate the
Guillemin-Cauer Award**, swamy@ece.concordia.ca; Prof.
Chris Toumazou to coordinate the Darlington Award**,
c.toumazou@ic.ac.uk; Prof. Nanni De Micheli to coordinate
the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Best Paper Award,
nanni@galileo.stanford.edu; Dr. Weiping Li to coordinate the
IEEE Transactions on Circuits and Systems for Video Technology Best Paper Award, li@optivision.com; Prof. Wayne
Wolf and Prof. Eby Friedman to coordinate the IEEE
Transactions on VLSI Systems Best Paper Award,
wolf@ee.princeton.edu; and Prof. M.N.S. Swamy and Prof.
Nanni De Micheli to coordinate the Outstanding Young Author Award.
Operating Guidelines
The Society administrator will assist the Awards Committee and its chair. Each subcommittee chair will invite a few
qualified peers to serve on it (approximately 5–7 persons, at
least 3). The names of the subcommittee members will remain
anonymous to the public except for the Awards Committee chair.
** From either the IEEE Transactions on Circuits and Systems, Part I or Part II.
Note: The co-sponsored IEEE Transactions on Multimedia Best Paper Award has not been established in 2001.
The names of the chairs of the subcommittees were earlier announced to the Society ExCom. Each subcommittee
ranks the nominees, and the subcommittee chair provides the
Awards Committee chair, Dr. Sheu, with the top recommended
nominee and the second. When the chair of the subcommittee
sends the two names as mentioned above, he also sends the
names of the members of the subcommittee and the number of
candidates that were considered for the particular award when
the selection was made.
The list of the candidates selected is reviewed and discussed by the Awards Committee for final modification and
approval.
The Board of Governors of the CAS Society has paid significant attention to conflict of interest in Society operation.
Any member who participates in nomination/reference effort
is disqualified from the selection/voting process of the particular award. Exception, if any, is granted by the Board of Governors of the Society only. Thus, the Awards Committee and
its subcommittees closely follow the rules of IEEE and the
rules of the CAS Society in avoiding conflicts of interest.
IEEE CAS Fellow Profiles 2001
Jenq-Neng Hwang
For contributions to adaptive learning systems.
Jenq-Neng Hwang received the B.S. and M.S. degrees from the National Taiwan
University. Immediately after,
he received the Ph.D. degree
from the University of Southern California. Dr. Hwang then
joined the University of Washington in Seattle, where he is
currently professor of electrical
engineering. He has published
more than 150 journal and conference papers and book chapters in the areas of image/video
signal processing, computational neural networks, multimedia system integration and networking. Dr. Hwang received the 1995 IEEE Signal Processing Society’s Best Paper Award in the area of neural networks
for signal processing.
Dr. Hwang served as secretary of the Neural Systems and
Applications Committee of the IEEE CAS Society, chairman
of the Neural Networks Signal Processing Technical Committee in the IEEE Signal Processing Society and as the Society’s
representative to the IEEE Neural Network Council. He is currently associate editor for the IEEE Transactions on Neural
Networks and the IEEE Transactions on Circuits and Systems
for Video Technology. He chaired the tutorial committee for
IEEE ICNN’96 and was program co-chair of ICASSP’98.
Wei Hwang
For contributions to high density cell technology and high
speed Dynamic Random Access Memory design.
Wei Hwang received the B.S. degree from National
Cheng-Kung University, the M.S. degree from National ChiaoTung University, and the Ph.D. degree from the University of
Manitoba.
From 1975 to 1978, he was assistant professor of electrical engineering at Concordia University in Montreal. In 1979,
he joined the Electrical Engineering Department at Columbia
University in New York as associate professor. Since 1984 he
has been with the Silicon Technology, VLSI Design and Digital
Communications Departments
at the IBM Thomas J. Watson
Research Center. Currently, he
is also adjunct professor of
electrical engineering at Columbia University.
Dr. Hwang has made
significant contributions in the
areas of semiconductor memories, VLSI CMOS digital circuits and high-frequency microprocessor design. His current research interests are in
ultra-low power CMOS/SOI
circuits and technology, and
elite digital signal processor design for wireless communications. He has received sixteen IBM Invention Achievement
Awards and three IBM Research Division Technical Awards
and earned the IBM Research Division honorary title of Master Inventor. He holds 47 U.S. patents. He has authored or
coauthored over 100 technical papers and a book entitled Electrical Transports in Solids.
Dr. Hwang is an active member of the Chinese American
Academic and Professional Society (CAAPS) where he has
served separately as president and chairman of the board. For
his leadership, Dr. Hwang received the Courvoisier Leadership Award and the CAAPS Special Service Award. He is a
member of the New York Academy of Science, Phi Tau Phi
and Sigma Xi.
Jason Cong
For contributions to the computer-aided design of integrated
circuits, especially in physical design automation, interconnect
optimization, and synthesis of field-programmable gate-arrays.
Jason Cong received the B.S. degree in computer science
from Peking University in 1985, and the M.S. and Ph.D. degrees in computer science from the University of Illinois at
Urbana-Champaign in 1987 and 1990, respectively. Currently,
he is professor and co-director of the VLSI CAD Laboratory in
the Computer Science Department of the University of California, Los Angeles.
His research interests include layout synthesis and logic
synthesis for high-performance, low-power VLSI circuits, design and optimization of high-speed VLSI interconnects,
FPGA synthesis and reconfigurable architectures. He has
published over 140 research
papers and led over 20 research
projects supported by DARPA,
NSF, and a number of industrial sponsors in these areas. He
served as general chair of the
1993 ACM/SIGDA Physical
Design Workshop, program
chair and general chair of the
1997 and 1998 International
Symposium on FPGAs, respectively, program co-chair of the 1999 International Symposium on Low-Power Electronics and Design, and on program
committees of many major conferences, including DAC,
ICCAD, and ISCAS. He is associate editor of IEEE Transactions on VLSI Systems and ACM Transactions on Design Automation of Electronic Systems.
Dr. Cong received the Best Graduate Award from Peking University in 1985, and the Ross J. Martin Award for Excellence in Research from the University of Illinois at Urbana-Champaign in 1989. He received the NSF Young Investigator Award in 1993, the Northrop Outstanding Junior Faculty
Research Award from UCLA in 1993, the IEEE Transactions
on CAD Best Paper Award in 1995 from IEEE CAS Society,
the ACM SIGDA Meritorious Service Award in 1998, and an
SRC Inventor Recognition Award in 2000. He has been guest
professor of Peking University since 2000.
David A. Johns
For contributions to the theory and design of analog
adaptive integrated circuits used in digital communications.
David Johns received the B.A.Sc., M.A.Sc., and Ph.D. degrees from the University of Toronto, Canada, in 1980, 1983
and 1989, respectively.
From 1980–81, Prof.
Johns worked at Mitel, while
from 1983–85 he worked at
Pacific Microcircuits Ltd. In
1988, he was hired at the University of Toronto where he is
currently professor. He has ongoing research programs in the
general area of analog integrated circuits with particular
emphasis on circuits and systems for digital communications. His research work has re-
sulted in more than 60 publications, one textbook and the 1999
IEEE Darlington Award. He served as associate editor for IEEE
Transactions on Circuits and Systems Part II from 1993 to
1995 and for Part I from 1995 to 1997. He has been involved
in numerous industrial short courses as well as being a cofounder of an analog contract company named Snowbush. His
homepage is located at http://www.eecg.toronto.edu/~johns.
Huifang Sun
For contributions to digital video technologies including coding optimization, down-conversion, and error resilience.
Huifang Sun graduated from Harbin Military Engineering Institute, Harbin, China in 1967, and received the Ph.D.
from the University of Ottawa, Canada, in 1986. He joined the
Electrical Engineering Department of Fairleigh Dickinson University as assistant professor in 1986. He was promoted to associate professor before moving to Sarnoff Corporation (formerly David Sarnoff Research Center) in 1990 as a member
of the technical staff, and was
later promoted to technology
leader of Digital Video Communication. In 1995, he joined
Murray Hill Lab (formerly
Advanced Television Laboratory) of Mitsubishi Electric Research Laboratories as a senior
principal technical staff member and now as deputy director.
His research interests include
digital video/image compression and digital communication. He has published more
than 100 journal and conference papers. He holds 12 US patents and has more pending.
He received the AD-HDTV Team Award in 1992 and the Technical Achievement Award for optimization and specification
of the Grand Alliance HDTV video compression algorithm in
1994 at Sarnoff Lab. He received the Best Paper Award of the
1992 IEEE Transactions on Consumer Electronics and the
1996 Best Paper Award of ICCE. He is currently associate editor for IEEE Transactions on Circuits and Systems for Video
Technology.
Sethuraman (Panch) Panchanathan
For contributions to compressed domain processing and indexing in visual computing and communications.
Sethuraman (Panch) Panchanathan received the B.Sc.
degree in physics from the University of Madras, India, the
B.E. degree in electronics and communication engineering
from the Indian Institute of Science, the M. Tech degree in electrical engineering from the Indian Institute of Technology,
Madras, and the Ph.D. degree in electrical engineering from
the University of Ottawa in
1981, 1984, 1986 and 1989, respectively. He is currently associate professor and director
of the Collaborative Program
on Ubiquitous Computing
(CubiC) in the Department of
Computer Science and Engineering at Arizona State University (ASU), Tempe, Arizona. Prior to joining ASU, he
was with the University of Ottawa, Canada. He is chief
scientific consultant for Obvious Technology. His research
interests are in the areas of compression, indexing, storage,
retrieval and browsing of images and video, multimedia communications, VLSI architectures for video processing, multimedia hardware architectures, parallel processing, and ubiquitous computing. He has published over 200 papers in his area
of research. He has been chair of several conferences and is
on the editorial board of four journals, including the IEEE
Transactions on Circuits and Systems for Video Technology
and the IEEE Transactions on Multimedia. He has also guest
edited five special issues on multimedia. He is a fellow of the
International Society for Optical Engineering (SPIE).
Cesar A. Gonzales
For contributions to MPEG encoding algorithms
and leadership in their use.
Cesar Gonzales received the Electrical Engineering degree from Universidad Nacional de Ingeniería, Lima, Peru in
1974, and the Ph.D. degree from Cornell University in 1979.
From 1973 to 1975 he worked
as a design engineer at the
Jicamarca Radar Observatory
in Peru. From 1975 to 1979 he
became a research assistant in
the Department of Electrical
Engineering at Cornell. In
1979 he joined the Arecibo
Radar Observatory as a research associate working on
atmospheric physics and radar
signal processing.
In 1983 Dr. Gonzales
joined IBM’s T.J. Watson Research Center, where he is currently senior manager. At IBM he focused on multimedia compression algorithms and their implementation in software and
hardware. He led IBM’s contributions to the MPEG-1 and -2
standards and developed the first implementations of MPEG
in IBM’s Aptiva and Thinkpad brands. He initiated and con-
tributed to the development of MPEG chips leading to the introduction of a number of decoder, set top box and encoder
products. IBM chips are widely used in professional and consumer digital video equipment today.
Dr. Gonzales has authored many papers in scientific radar, image and video coding. He holds over a dozen patents.
He served as associate editor of IEEE Transactions on Circuits and Systems for Video Technology from 1994–1995. He
has received numerous awards, including the title of IBM Fellow. His current interests are in interactive TV applications.
Paul J. Hurst
For contributions to the design of CMOS integrated circuits
for telecommunications and magnetic recording.
Paul J. Hurst was born in
Chicago, Illinois, in 1956. He
received the B.S., M.S., and
Ph.D. degrees in electrical engineering from the University
of California, Berkeley, in
1977, 1979, and 1983, respectively.
From 1983 to 1984, he
was with the University of
California, Berkeley, as a lecturer, teaching integrated-circuit design courses and working on an MOS delta-sigma
modulator. In 1984, he joined
the telecommunications design group of Silicon Systems Inc.,
Nevada City, California. There he was involved in the design
of three mixed-signal CMOS integrated circuits for voice-band
modems, including the first single-chip 2400-bps modem,
which was used initially in consumer voice-band modems and
later in set-top boxes for billing. Since 1986, he has been on
the faculty of the Department of Electrical and Computer Engineering at the University of California, Davis, where he is
now professor. His research interests are in the area of analog
and mixed-signal integrated-circuit design for signal processing and communication applications. His research projects
have included work on data converters, filters, adaptive equalizers and timing recovery circuits for data communications,
and image processing.
Professor Hurst was a member of the program committee for the Symposium on VLSI Circuits in 1994 and 1995 and
guest editor for the December 1999 issue of the Journal of
Solid-State Circuits. He has been a member of the program
committee for the International Solid-State Circuits Conference since 1998 and is associate editor for the Journal of Solid-State Circuits. Professor Hurst taught (with Professor Richard Spencer) the short course “Signal Processing for Magnetic
Recording” a dozen times. He is a co-author of the fourth edition of the text book Analysis and Design of Analog Integrated
Circuits. He is also active as a consultant to industry.
Conferences and Workshops
. . . continued from Inside Front Cover
The 2nd Workshop and Exhibition on MPEG-4
June 18 – 20, 2001
San José, California, USA
The first Workshop and Exhibition on MPEG-4 was a great success with tutorials given
by leading experts in the field, technical sessions of very high quality, and an all-sold-out exhibition event. To continue this highly successful activity, the Second Workshop
and Exhibition on MPEG-4 will be held from June 18 to June 20, 2001, in San José,
California, U.S.A.
Chairs: Leonardo Chiariglione, Ming L. Liou
Program Chair: Weiping Li
Contact Information (for exhibitions, general information, and technical papers):
Rob Koenen, KPN Research, P.O. Box 421, 2260 AK Leidschendam, The Netherlands; rob.koenen@hetnet.nl; Phone: +31 70 332–5310; Fax: +31 70 332–5567
Weiping Li, WebCast Technologies, Inc., 257 Castro St., Suite 209, Mountain View, CA 94041, U.S.A.; li@webcasttech.com; Phone: (650) 210–9188; Fax: (603) 250–2287
Prof. Ming-Ting Sun, Electrical Engineering Dept., University of Washington, Seattle, Washington 98195, U.S.A.; sun@ee.washington.edu; Phone: (206) 616–8690; Fax: (206) 543–3842
For more information: http://www.m4if.org
The Fifth International Symposium on Signals, Circuits and Systems
July 10–11, 2001
Iasi, Romania
The fifth edition of the International Symposium on Signals, Circuits and Systems will
be held in Iasi at the Faculty of Electronics and Telecommunications, Technical University “Gh. Asachi”. Iasi, the oldest academic center of Romania, is located in the
North-Eastern part of the country and can be reached from Bucharest by plane or by
train. A wine tasting and a one-day post-symposium trip in the neighboring region,
well known for its wonderful monasteries, will be organized.
Address for correspondence:
SCS’2001 International Symposium
Faculty of Electronics & Telecommunications
“Gh. Asachi” Technical University of Iasi
Bd. Carol 11, Iasi, 6600, ROMANIA
Secretariat:
Phone: +40 32 142283
FAX: +40 32 217720 or +40 32 278628
Other information:
http://www.tuiasi.ro/events/scs2001
http://www-imt.unine.ch/scs2001
lgoras@etc.tuiasi.ro (Prof. Liviu Goras)
2001 IEEE-EURASIP Workshop on
NONLINEAR SIGNAL AND IMAGE PROCESSING
June 3–6, 2001
Hyatt Regency Baltimore
Baltimore, Maryland USA
http://www.ece.udel.edu/nsip/
NSIP-01 is the fifth biennial international workshop on nonlinear signal and image processing. The workshop provides a unique, high quality forum for the presentation and
discussion of technical advances in nonlinear methods for a wide array of communications, signal, bio systems, image and multimedia processing applications. Check the web
site for up-to-date information on paper proposals and status, workshop registration, special sessions, keynote speakers, and travel information.
Conference Chairs
Kenneth Barner, University of Delaware; Tel: +1–302–831–6937; barner@ece.udel.edu
Gonzalo Arce, University of Delaware; Tel: +1–302–831–8030; arce@ece.udel.edu
ICME2001
IEEE International Conference on Multimedia and Expo
August 22–25, 2001 Waseda University, Tokyo, Japan
The International Conference on Multimedia & Expo in Tokyo is the first ICME in the
21st century and the 2nd ICME co-organized by the four IEEE societies—the Circuits
and Systems Society, the Computer Society, the Signal Processing Society, and the Communications Society—with the aim of unifying all IEEE as well as non-IEEE multimedia-related activities under the common umbrella of ICME.
General Chair
Nobuyoshi Terashima
Waseda University
Global Information and
Telecommunication Institute, Japan
terasima@giti.waseda.ac.jp
Technical Program Chair
Jun Ohya (Waseda U./ Japan)
ICME2001 office
icme2001@giti.waseda.ac.jp
Conference Webpage
http://www.giti.waseda.ac.jp/ICME2001/
NDES2001
Workshop on Nonlinear Dynamics of Electronic Systems
June 21–23, 2001
Delft, the Netherlands
The ninth Workshop on Nonlinear Dynamics of Electronic Systems offers the opportunity to communicate new insights into nonlinear dynamic systems, ranging from a mathematical point of view to a circuit and system design point of view.
Contact Information:
Dr. A. van Staveren
NDES2001 / Electronics Research Lab
Faculty of Information Technology and Systems
Delft University of Technology
Mekelweg 4, 2628 CD Delft
The Netherlands
E-mail: ndes2001@duteela.et.tudelft.nl
http://www.ndes2001.tudelft.nl
ISLPED’01
International Symposium on Low Power Electronics and Design
Huntington Beach, California
August 6–7, 2001
The International Symposium on Low Power Electronics and Design (ISLPED) is the
premier forum for presentation of recent advances in all aspects of low power design
and technologies, ranging from process and circuit technologies, to simulation and synthesis tools, to system level design and optimization.
Low Power Design Contest
The IEEE Computer Elements Workshop is holding a Low Power Design Contest to
provide a forum for universities and research organizations to showcase original “power-aware” designs and to highlight the innovations and design choices targeted at low power.
The goal is to encourage and highlight design-oriented approaches to power reduction.
The best designs will be selected and invited for presentation at ISLPED 2001, August
6–7, 2001. A special session will be devoted to the Low Power Design Contest.
The deadline for submissions is May 31, 2001. Entries should be submitted electronically (in PDF or Postscript format) to the Design Contest Chairs:
Vivek Tiwari
Email: Vivek.Tiwari@intel.com
Intel Corp.
3600 Juliette Ln., MS: SC 603-12
Santa Clara, CA 95051
USA
Massoud Pedram
Email: pedram@usc.edu
University of Southern California
Department of EE-Systems, EEB-344
3740 McClintock Ave.
Los Angeles CA 90089-2562, USA
ACM/IEEE ISLPED Homepage: http://www.cse.psu.edu/~islped
ISLPED 2001 Low Power Design Contest Homepage:
http://www.cse.psu.edu/~islped/contest.htm
Calls for Papers and Participation
2001 Midwest Symposium on
Circuits and Systems
August 14–17, 2001
Fairborn, Ohio
The Midwest Symposium on Circuits and Systems (MWSCAS) is one of the premier
IEEE conferences presenting research in all aspects of theory, design, and applications of
circuits and systems. In addition to presenting new research results in core circuits and
systems areas, this year's conference will also focus on innovative information technology, intelligent exploitation in microsystems, and wireless information exchange.
Six short courses, two plenary speakers, several invited sessions and a student paper contest are planned. Please check the symposium website for current information.
General Chair
Robert L. Ewing
Air Force Research Laboratory
Air Force Institute of Tech.
robert.ewing@wpafb.af.mil
General Co-Chair
Harold W. Carter
University of Cincinnati
hal.carter@uc.edu
Technical Program Chair
M. Ismail
Ohio State University
ismail@ee.eng.ohio-state.edu
Technical Program Co-Chair
Tuna Tarim
Texas Instruments
tarim@ee.eng.ohio-state.edu
Deadlines:
Paper abstract and 1000-word short paper: May 15, 2001
Tutorial, panel, or industrial track proposals: May 15, 2001
Special session proposals: May 15, 2001
Notification of acceptance: June 15, 2001
Submission of camera-ready paper: August 17, 2001
Dr. M. Ismail, Ohio State University, 205 Dreese Laboratory,
2015 Neil Avenue, Columbus, OH 43210-1272.
For further updated information, please see the symposium website.
http://www.ececs.uc.edu/~hcarter/mwscas
First IEEE Conference on Nanotechnology
October 28–30, 2001
Outrigger Wailea Resort, Maui, Hawaii, USA
The First IEEE Conference on Nanotechnology will be held in Maui, Hawaii, USA
from October 28 (Sunday) to October 30 (Tuesday), 2001. State-of-the-art technical achievements in all aspects of nanotechnology will be reported, and both technological innovations and R&D topics will receive intensive discussion. Participants are encouraged to submit technical papers.
IEEE/RSJ International Conference on Intelligent Robots and Systems: IROS 2001
will be held at the same place from October 29 to November 3, 2001 (http://www.icrairos.com/iros2001).
General Co-Chairs:
Toshio Fukuda, Nagoya University, fukuda@mein.nagoya-u.ac.jp
Robert D. Shull, NIST, shull@nist.gov
Program Chair: Clifford Lau, ONR, lauc@onr.navy.mil
Paper submission:
May 31, 2001: Abstracts Due
July 15, 2001: Notification of Acceptance
Aug. 15, 2001: Accepted Papers Due
Submission of papers should be addressed to:
Dr. Clifford Lau, Program Chair, IEEE-NANO 2001
Corporate Programs, ONR-363,
800 North Quincy Street, Arlington, VA 22217-5660, USA
Phone: +1-703-696-0431
Fax +1-703-588-1013
Email: lauc@onr.navy.mil
For more information, please check our World Wide Web site.
http://www.mein.nagoya-u.ac.jp/IEEE-NANO
BMAS 2001
2001 IEEE International Workshop on Behavioral Modeling and Simulation
October 11–12, 2001
Fountaingrove Inn
Santa Rosa, California, USA
http://www.esat.kuleuven.ac.be/~bmas
Important Dates:
Abstract Submission Deadline: June 3, 2001
Notification of Acceptance: July 1, 2001
Final Material Submission Deadline: August 12, 2001
Interested speakers should submit the requested information via electronic mail (PDF or
postscript format using “Letter” or “A4” size) to the following address before June 3,
2001: http://www.esat.kuleuven.ac.be/~bmas or (if access to the internet is unavailable)
by regular mail or fax to:
Prof. Georges Gielen, BMAS 2001 Program Chair, ESAT-MICAS, Katholieke Universiteit Leuven, Kasteelpark Arenberg 10, 3001 Leuven-Heverlee, Belgium; Phone: (016) 321077; E-mail: gielen@esat.kuleuven.ac.be
For general information, contact:
Prof. Alan Mantooth, BMAS 2001 General Chair, Department of Electrical Engineering, University of Arkansas, Fayetteville, AR 72701; Phone: (501) 575 4838; E-mail: mantooth@engr.uark.edu
MEMS Conference 2001
August 24–26, 2001
Berkeley, California, USA
http://www.memsconference.com
MEMS (Microelectromechanical Systems) Conference 2001 provides a medium for
the exchange of ideas between theoreticians, practitioners, investors, administrators and
students to address important issues in the development, manufacturing and commercialization of MEMS.
Abstracts and drafts can be submitted at the following URL: www.memsconference.com/
memsconference/submityourpaper, emailed to directors@memsconference.com or mailed
to the following address: MEMS Conference 2001, Atomasoft Media, 4551-B Hutchison
st., Montreal, QC, H2V4A1 Canada. Electronic submission is preferred.
Important Dates
Deadline for submissions: May 18, 2001
Notification of acceptance: June 11, 2001
Further Information
For further information either contact directors@memsconference.com or see the conference homepage at: http://www.memsconference.com.
The 2002 IEEE Asia-Pacific Conference on Circuits and Systems
October 28–31, 2002 Denpasar, Bali, Indonesia
http://apccas2002.itb.ac.id
http://www.vlsi.ss.titech.ac.jp/apccas2002/
The 2002 IEEE Asia-Pacific Conference on Circuits and Systems (APCCAS’02) is
the sixth in the series of biennial Asia-Pacific Conferences sponsored by the IEEE Circuits and Systems Society.
Technical Papers Author's Schedule
Deadline for Submission of Papers: March 15, 2002
Notification of Acceptance: June 15, 2002
Deadline for Submission of Final Papers: August 15, 2002
Please use our website above for a detailed list of areas, the specification of the camera-ready format, and electronic submission of papers.
For further information, contact Prof. Soegijardjo Soegijoko, the APCCAS2002 General Chair, at the Conference Secretariat, Electronics Laboratory, Department of Electrical Engineering, Institut Teknologi Bandung, Jalan Ganesha 10, Bandung 40132, INDONESIA; Tel: (62–22) 253 4117; Fax: (62–22) 250 1895; E-mail: apccas2002@ee.itb.ac.id
2001 IEEE CIRCUITS AND SYSTEMS SOCIETY ROSTER**
(CAS-04, Division I)
President: H. C. Reddy, +1 562 985 5106
President-Elect: J. A. Nossek, +49 89 2892 8501
Past President: B. J. Sheu, +1 408 562 9168 Ext 223
Administrative Vice President: I. N. Hajj, +1 961 347 952
Vice President – Conferences: G. A. De Veirman, +1 714 890 4118
Vice President – Publications: M. N. S. Swamy, +1 514 848 3091
Vice President – Technical Activities: M. E. Zaghloul, +1 202 994 3772
Vice President – Regions 1-7: W. B. Mikhael, +1 407 823 3210
Vice President – Region 8: A. C. Davies, +44 20 7848 2441
Vice President – Region 9: J. E. Cousseau, +54 291 459 5153
Vice President – Region 10: N. Fujii, +81 3 5734 2561
Division I Director: R. W. Wyndrum, Jr., +1 732 949 7409
Society Administrator: B. Wehner, +1 219 871 0210
Advisor to President: G. De Micheli, +1 650 725 3632
Elected Members of the Board of Governors
2001: J. Cong, E. G. Friedman, I. Galton, M. Green, C. Toumazou
2002 and 2003: G. G. E. Gielen, A. G. Andreou, R. Gupta, R. J. Marks II, M. Hasler, M. Pedram, P. Pirsch, L. Trajkovic, H. Yasuura, E. Yoffa
STANDING COMMITTEE CHAIRS
Awards: B. J. Sheu
Constitution and Bylaws: K. Thulasiraman
Fellows: W.-K. Chen
Nominations: G. S. Moschytz
Ex Officio: Chairman of Sponsored or Co-Sponsored Conferences, Standing Committee Chairs, Technical Committee Chairs, Chapter Chairpersons, Editors of Society Sponsored Transactions & Magazines, Division I Director
D. Senese, M. Ward-Callan
REPRESENTATIVES
Neural Networks Council: M. Ahmadi, A. R. Stubberud
Sensors Council: M. E. Zaghloul, G. Barrows
Solid-State Circuits Society: E. Sanchez-Sinencio
Society on Social Implications of Technology: R. W. Newcomb
Parliamentarian: R. J. Marks II
CONFERENCE ACTIVITIES: G. A. De Veirman
2002 APCCAS: S. Soegijoko
2001 DAC: J. Rabaey
2001 ICCAD: R. Ernst
2001 ICCD: S. Kundu
2001 ICECS: J. Micallef
2001 ISCAS: G. Hellestrand/D. Skellern
2001 MWSCAS: R. L. Ewing
DISTINGUISHED LECTURER PROGRAM: E. Yoffa
Society Education Chairs Committee: J. Choma
IEEE Press: M. A. Bayoumi
IEEE TAB Magazines Committee: W. H. Wolf
IEEE TAB Nanotechnology Committee: B. J. Sheu, C.-Y. Wu
IEEE TAB New Technology Directions Committee: C.-Y. Wu
IEEE TAB Newsletters Committee: M. Haenggi
IEEE TAB Nominations & Appointments Committee: G. S. Moschytz
MEMBERSHIP DEVELOPMENT AFFAIRS: J. A. Nossek
PUBLICATION ACTIVITIES: M. N. S. Swamy
CAS Magazine Editor: M. K. Sain
Circuits and Devices Magazine Editor: R. W. Waynant (Editor for CAS Society: C.-Y. Wu)
Transactions on Circuits and Systems Part I: Fundamental Theory and Applications – M. N. S. Swamy
Transactions on Circuits and Systems Part II: Analog and Digital Signal Processing – C. Toumazou
Transactions on Computer Aided Design of Integrated Circuits & Systems – G. De Micheli
Transactions on Circuits and Systems for Video Technology – W. Li
Transactions on Multimedia – M.-T. Sun
Transactions on VLSI Systems – E. G. Friedman
REGIONAL ACTIVITIES: J. A. Nossek
IEEE TAB Periodicals Committee: M. A. Bayoumi
IEEE TAB Transactions Committee: M. Hasler
IEEE-USA: R. de Figueiredo
IEEE-USA Professional Activities Committee for Engineers (PACE): P. K. Rajan
CORRESPONDING MEMBERS
TAB Awards and Recognition Committee: L.-G. Chen
Conference Publications Committee: L. Goras
P2SB/TAB Electronic Products and Services Committee: A. Ioinovici
TAB Finance Committee: K. Thulasiraman
TAB Periodicals Committee: C.-S. Li
TAB Periodicals Packages Committee: J. Silva-Martinez
TAB Periodicals Review Committee: J. Vandewalle
TAB Products Committee: M. Ogorzalek
TAB Society Review Committee: J. Cousseau
TAB Strategic Planning and Review Committee: R. Gupta
TECHNICAL ACTIVITIES: M. E. Zaghloul
Analog Signal Processing: G. Cauwenberghs
Cellular Neural Networks and Array Computing: T. Roska
Circuits and Systems for Communications: M. A. Bayoumi
Computer-Aided Network Design: R. Gupta
Digital Signal Processing: P. S. R. Diniz
Multimedia Systems and Applications: C.-S. Li
Neural Systems and Applications: M. Ahmadi
Nonlinear Circuits and Systems: G. R. Chen
Power Systems and Power Electronics Circuits: A. Ioinovici
Sensors and Micromachining: M. E. Zaghloul
Visual Signal Processing and Communication: J. Ostermann
VLSI Systems and Applications: R. Sridhar
2001 Advisory and Consulting Group of Past CAS Society Presidents: Professor Belle Shenoi, Professor Ruey-Wen Liu, Professor George Moschytz, Professor Wai-Kai Chen, Professor Leon Chua, Professor Rolf Schaumann, Professor Rui de Figueiredo, Professor Ming Liou, Professor John Choma, Professor Sanjit K. Mitra, Professor Michael Lightner
** To obtain full contact information for any one of the volunteers, please
contact the Society Administrator, Barbara Wehner, at b.wehner@ieee.org.
2001 IEEE International Symposium on Circuits and Systems
May 6–9, 2001
Sydney Convention and Exhibition Center
Darling Harbor, Sydney, Australia
SYSTEMS of CIRCUITS and MIXED TECHNOLOGY ELEMENTS
The 2001 IEEE International Symposium on Circuits and Systems will be held in Sydney, Australia. The Symposium is sponsored by the IEEE Circuits and Systems Society.
The Symposium will include: regular sessions; plenary sessions on advanced aspects of theory, design and applications of circuits and systems; and short courses/tutorials linked with special sessions on wireless, mixed technology systems engineering, high speed devices and modelling, signal and video processing, and low power high speed VLSI design.
Web Site: http://www.iscas2001.org/
CONFERENCE CO-CHAIRMEN:
Professor David Skellern
Electronics Department (E6A 247), Division of Information and Communication Sciences
Macquarie University, NSW 2109 Australia
daves@elec.mq.edu.au
Tel: +61 2 9850 9145; Fax: +61 2 9850 918
http://www.elec.mq.edu.au/
Professor Graham Hellestrand
VaST Systems Technology Corporation
1230 Oakmead Parkway, Suite 314, Sunnyvale, CA 94806 USA
Tel: +1 408 328 0909; Fax: +1 408 328 0945
g.hellestrand@vastsystems.com
http://www.vastsystems.com
FOR CENTURIES, PEOPLE HAVE GATHERED IN THE NEVADA DESERT TO SHARE STORIES OF THEIR TECHNOLOGY. NOW’S YOUR CHANCE TO MAKE DESIGN HISTORY AT THE 38TH DESIGN AUTOMATION CONFERENCE, JUNE 18 THROUGH 22, 2001, IN LAS VEGAS.
If you want to make a lasting
impression on the way you work—and
the people you work with—plan on
attending the 38th Design Automation
Conference. Our technical program is
the hottest spot for the latest R&D,
industry trends and design methodologies. Leading EDA industry
visionaries will address topics such as
deep submicron, design and implementation verification, distributed and
collaborative design, IP and design
reuse, and more.
New this year, the Embedded Systems
Showcase will focus on the latest
developments in embedded systems
tools, compilers, IP, HW/SW co-design
and other technologies for embedded
system-on-chip designs. As always,
DAC will bring together the very best
EDA tools, silicon and IP solutions,
with more than 260 companies represented. And there’s no better place
on earth to network with the movers
and shakers of the design world.
Visit the DAC Web site for details and
to register online. And secure your
place in design history.