Innovative Technology for Computer Professionals
December 2007
A Force of Nature, p. 8
SMS: The Short Message Service, p. 106
Conquering Complexity, p. 111
http://www.computer.org
Innovative Technology for Computer Professionals
Editor in Chief
Carl K. Chang, Iowa State University, chang@cs.iastate.edu

Associate Editors in Chief
Bill N. Schilit
Kathleen Swigger

Computing Practices
Rohit Kapur, rohit.kapur@synopsys.com

Special Issues
Bill N. Schilit, schilit@computer.org

Perspectives
Bob Colwell, bob.colwell@comcast.net

Research Features
Kathleen Swigger, University of North Texas, kathy@cs.unt.edu

Web Editor
Ron Vetter, University of North Carolina at Wilmington, vetterr@uncw.edu

2007 IEEE Computer Society President
Michael R. Williams, president@computer.org
Area Editors
Computer Architectures: Steven K. Reinhardt, Reservoir Labs Inc.
Databases/Software: Michael R. Blaha, Modelsoft Consulting Corporation
Graphics and Multimedia: Oliver Bimber, Bauhaus University Weimar
Information and Data Management: Naren Ramakrishnan, Virginia Tech
Multimedia: Savitha Srinivasan, IBM Almaden Research Center
Networking: Jonathan Liu, University of Florida
Software: Dan Cooke, Texas Tech University; Robert B. France, Colorado State University

Column Editors
Broadening Participation in Computing: Juan E. Gilbert
Embedded Computing: Wayne Wolf, Georgia Institute of Technology
Entertainment Computing: Michael R. Macedonia; Michael C. van Lent
How Things Work: Alf Weaver, University of Virginia
In Our Time: David A. Grier, George Washington University
Invisible Computing: Bill N. Schilit
IT Systems Perspectives: Richard G. Mathieu, James Madison University
Security: Jack Cole, US Army Research Laboratory
Software Technologies: Mike Hinchey, Loyola College Maryland
Standards: John Harauz, Jonic Systems Engineering Inc.
The Profession: Neville Holmes, University of Tasmania
Web Technologies: Simon S.Y. Shim, SAP Labs

Advisory Panel
James H. Aylor, University of Virginia
Thomas Cain, University of Pittsburgh
Doris L. Carver, Louisiana State University
Ralph Cavin, Semiconductor Research Corp.
Ron Hoelzeman, University of Pittsburgh
Mike Lutz, Rochester Institute of Technology
Edward A. Parrish, Worcester Polytechnic Institute
H. Dieter Rombach, AG Software Engineering
Alf Weaver, University of Virginia

CS Publications Board
Jon Rokne (chair), Mike Blaha, Doris Carver, Mark Christensen, David Ebert, Frank Ferrante, Phil Laplante, Dick Price, Don Shafer, Linda Shafer, Steve Tanimoto, Wenping Wang

CS Magazine Operations Committee
Robert E. Filman (chair), David Albonesi, Jean Bacon, Arnold (Jay) Bragg, Carl Chang, Kwang-Ting (Tim) Cheng, Norman Chonacky, Fred Douglis, Hakan Erdogmus, David A. Grier, James Hendler, Carl Landwehr, Sethuraman (Panch) Panchanathan, Maureen Stone, Roy Want
Editorial Staff
Scott Hamilton, Senior Acquisitions Editor, shamilton@computer.org
Judith Prow, Managing Editor, jprow@computer.org
Chris Nelson, Senior Editor
James Sanders, Senior Editor
Lee Garber, Senior News Editor
Bob Ward, Associate Editor
Margo McCall, Associate Staff Editor
Bryan Sallis, Publication Coordinator
Design and Production: Larry Bauer
Cover art: Dirk Hagner

Administrative Staff
Associate Publisher: Dick Price
Membership & Circulation Marketing Manager: Georgann Carter
Business Development Manager: Sandy Brown
Senior Advertising Coordinator: Marian Anderson
Circulation: Computer (ISSN 0018-9162) is published monthly by the IEEE Computer Society. IEEE Headquarters, Three Park Avenue, 17th Floor, New York, NY 10016-5997; IEEE Computer Society Publications Office, 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1314; voice +1 714 821 8380; fax +1 714 821 4010;
IEEE Computer Society Headquarters, 1730 Massachusetts Ave. NW, Washington, DC 20036-1903. IEEE Computer Society membership includes $19 for a subscription to
Computer magazine. Nonmember subscription rate available upon request. Single-copy prices: members $20.00; nonmembers $99.00.
Postmaster: Send undelivered copies and address changes to Computer, IEEE Membership Processing Dept., 445 Hoes Lane, Piscataway, NJ 08855. Periodicals Postage Paid
at New York, New York, and at additional mailing offices. Canadian GST #125634188. Canada Post Corporation (Canadian distribution) publications mail agreement
number 40013885. Return undeliverable Canadian addresses to PO Box 122, Niagara Falls, ON L2E 6S8 Canada. Printed in USA.
Editorial: Unless otherwise stated, bylined articles, as well as product and service descriptions, reflect the author’s or firm’s opinion. Inclusion in Computer does not
necessarily constitute endorsement by the IEEE or the Computer Society. All submissions are subject to editing for style, clarity, and space.
December 2007, Volume 40, Number 12
Innovative Technology for Computer Professionals
IEEE Computer Society: http://computer.org
Computer: http://computer.org/computer
computer@computer.org
IEEE Computer Society Publications Office: +1 714 821 8380
COMPUTING PRACTICES
24
Examining the Challenges of Scientific Workflows
Yolanda Gil, Ewa Deelman, Mark Ellisman, Thomas Fahringer,
Geoffrey Fox, Dennis Gannon, Carole Goble, Miron Livny,
Luc Moreau, and Jim Myers
Workflows have emerged as a paradigm for representing and
managing complex distributed computations and are used to
accelerate the pace of scientific progress. A recent National Science
Foundation workshop brought together domain, computer, and
social scientists to discuss requirements of scientific applications
and the challenges they present to workflow technologies.
COVER FEATURES
33
The Case for Energy-Proportional Computing
Luiz André Barroso and Urs Hölzle
Energy-proportional designs would enable large energy savings in
servers, potentially doubling their efficiency in real-life use.
Achieving energy proportionality will require significant
improvements in the energy usage profile of every system
component, particularly the memory and disk subsystems.
39
Models and Metrics to Enable Energy-Efficiency Optimizations
Suzanne Rivoire, Mehul A. Shah, Parthasarathy Ranganathan,
Christos Kozyrakis, and Justin Meza
Power consumption and energy efficiency are important factors in
the initial design and day-to-day management of computer systems.
Researchers and system designers need benchmarks that
characterize energy efficiency to evaluate systems and identify
promising new technologies. To predict the effects of new designs
and configurations, they also need accurate methods of modeling
power consumption.
Cover design and artwork by Dirk Hagner
ABOUT THIS ISSUE
Despite advances in power efficiency fueled largely by the mobile computing industry, computer energy consumption continues to challenge both industry and the global economy. In the US, enterprise energy consumption doubled over the past five years and will continue to do so. And this does not include the energy cost of manufacturing components—it is estimated that Japan’s semiconductor industry will consume 1.7 percent of that country’s energy budget by 2015. The articles in this issue propose strategies for mitigating these costs by designing systems that consume energy in proportion to the amount of work performed, establishing new benchmarks and accurate ways of modeling power consumption, and even recycling older processors over several computing generations.
50
The Green500 List: Encouraging Sustainable
Supercomputing
Wu-chun Feng and Kirk W. Cameron
The performance-at-any-cost design mentality ignores
supercomputers’ excessive power consumption and need for heat
dissipation and will ultimately limit their performance. Without
fundamental change in the design of supercomputing systems, the
performance advances common over the past two decades won’t
continue.
56
Life Cycle Aware Computing: Reusing Silicon
Technology
John Y. Oliver, Rajeevan Amirtharajah, Venkatesh Akella,
Roland Geyer, and Frederic T. Chong
Despite the high costs associated with processor manufacturing, the
typical chip is used for only a fraction of its expected lifetime.
Reusing processors would create a “food chain” of electronic
devices that amortizes the energy required to build chips over
several computing generations.
Flagship Publication of the IEEE Computer Society
CELEBRATING THE PAST
8 In Our Time
A Force of Nature
David Alan Grier
11 32 & 16 Years Ago
Computer, December 1975 and 1991
Neville Holmes
NEWS
13 Industry Trends
The Changing World of Outsourcing
Neal Leavitt
17 Technology News
A New Virtual Private Network for Today’s Mobile World
Karen Heyman
20 News Briefs
Linda Dailey Paulson
MEMBERSHIP NEWS
6 President’s Message
62 Report to Members: Election Results
64 IEEE Computer Society Connection
66 Call and Calendar
COLUMNS
103 Security
Natural-Language Processing for Intrusion Detection
Allen Stone
106 How Things Work
SMS: The Short Message Service
Jeff Brown, Bill Shipman, and Ron Vetter
NEXT MONTH: Outlook Issue
111 Software Technologies
Conquering Complexity
Gerard J. Holzmann
114 Entertainment Computing
Enhancing the User Experience in Mobile Phones
S.R. Subramanya and Byung K. Yi
118 Invisible Computing
Taking Online Maps Down to Street Level
Luc Vincent
124 The Profession
Making Computers Do More with Less
Simone Santini
DEPARTMENTS
4 Article Summaries
23 Computer Society Information
68 IEEE Computer Society Membership Application
72 Annual Index
85 Advertiser/Product Index
86 Career Opportunities
102 Bookshelf
COPYRIGHT © 2007 BY THE INSTITUTE OF ELECTRICAL AND
ELECTRONICS ENGINEERS INC. ALL RIGHTS RESERVED.
ABSTRACTING IS PERMITTED WITH CREDIT TO THE SOURCE.
LIBRARIES ARE PERMITTED TO PHOTOCOPY BEYOND THE LIMITS OF US COPYRIGHT LAW
FOR PRIVATE USE OF PATRONS: (1) THOSE POST-1977 ARTICLES THAT CARRY A CODE
AT THE BOTTOM OF THE FIRST PAGE, PROVIDED THE PER-COPY FEE INDICATED IN THE
CODE IS PAID THROUGH THE COPYRIGHT CLEARANCE CENTER, 222 ROSEWOOD DR.,
DANVERS, MA 01923; (2) PRE-1978 ARTICLES WITHOUT FEE. FOR OTHER COPYING,
REPRINT, OR REPUBLICATION PERMISSION, WRITE TO COPYRIGHTS AND PERMISSIONS
DEPARTMENT, IEEE PUBLICATIONS ADMINISTRATION, 445 HOES LANE, P.O. BOX
1331, PISCATAWAY, NJ 08855-1331.
ARTICLE SUMMARIES
Examining the Challenges of
Scientific Workflows
pp. 24-32
Yolanda Gil, Ewa Deelman,
Mark Ellisman, Thomas Fahringer,
Geoffrey Fox, Dennis Gannon, Carole
Goble, Miron Livny, Luc Moreau, and
Jim Myers
Workflows have recently emerged as a paradigm for representing and managing complex distributed scientific computations, accelerating the pace of scientific progress.
Scientific workflows orchestrate the
data flow across the individual data
transformations and analysis steps, as
well as the mechanisms to execute
them in a distributed environment.
Workflows should thus become first-class entities in the cyberinfrastructure architecture.
Each step in a workflow specifies a
process or computation to be executed—a software program or Web service, for instance. The workflow links
the steps according to the data flow and
dependencies among them. The representation of these computational workflows contains many details required to
carry out each analysis step, including
the use of specific execution and storage
resources in distributed environments.
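The summary above describes a workflow as a set of steps linked by data dependencies. As an illustration only (this sketch is not taken from the article, and the Step and run_workflow names are hypothetical), the following minimal Python example shows one way such a dependency graph can be represented and executed in topological order:

from collections import deque

class Step:
    """One workflow step: a named computation with upstream dependencies."""
    def __init__(self, name, compute, depends_on=()):
        self.name = name                     # unique step identifier
        self.compute = compute               # callable; receives upstream results
        self.depends_on = tuple(depends_on)  # names of steps whose outputs it needs

def run_workflow(steps):
    """Run steps in dependency (topological) order and return every step's output."""
    by_name = {s.name: s for s in steps}
    waiting = {s.name: set(s.depends_on) for s in steps}
    ready = deque(name for name, deps in waiting.items() if not deps)
    results = {}
    while ready:
        name = ready.popleft()
        step = by_name[name]
        results[name] = step.compute(*(results[d] for d in step.depends_on))
        for other in steps:                  # release steps whose inputs are now complete
            if name in waiting[other.name]:
                waiting[other.name].discard(name)
                if not waiting[other.name]:
                    ready.append(other.name)
    if len(results) != len(steps):
        raise ValueError("workflow contains a cycle or a missing dependency")
    return results

# Example: a three-step extract -> transform -> analyze pipeline.
steps = [
    Step("extract", lambda: [1, 2, 3]),
    Step("transform", lambda xs: [x * 10 for x in xs], depends_on=["extract"]),
    Step("analyze", lambda xs: sum(xs), depends_on=["transform"]),
]
print(run_workflow(steps)["analyze"])  # 60

A real workflow system adds what the summary points to: mapping each step onto specific execution and storage resources in a distributed environment.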
The Case for Energy-Proportional Computing
pp. 33-37
Luiz André Barroso and Urs Hölzle
Energy management has now
become a key issue for servers and
data center operations, focusing on
the reduction of all energy-related costs,
including capital, operating expenses,
and environmental impacts. Many
energy-saving techniques developed for
mobile devices became natural candidates for tackling this new problem
space. Although servers clearly provide
many parallels to the mobile space, they
require additional energy-efficiency
innovations. Energy-proportional computers would enable such savings,
potentially doubling the efficiency of a
typical server.
In current servers, the lowest energy-efficiency region corresponds to their
most common operating mode. Addressing this mismatch will require significant rethinking of components and
systems. To that end, energy proportionality should become a primary
design goal. Although researchers’
experience in the server space motivates
these observations, energy-proportional
computing also will significantly benefit other types of computing devices.
The Energy-Efficiency
Challenge: Optimization
Metrics and Models
pp. 39-48
Suzanne Rivoire, Mehul A. Shah,
Parthasarathy Ranganathan,
Christos Kozyrakis, and Justin Meza
In recent years, server and data center power consumption has become
a major concern, directly affecting a
data center’s electricity costs and requiring the purchase and operation of cooling equipment, which can consume
from one-half to one watt for every
watt of server power consumption.
All these power-related costs can
potentially exceed the cost of purchasing hardware. Moreover, the environmental impact of data center power
consumption is receiving increasing
attention, as is the effect of escalating
power densities on the ability to pack
machines into a data center.
The two major and complementary
ways to approach this problem involve
building energy efficiency into the initial design of components and systems,
and adaptively managing the power
consumption of systems or groups of
systems in response to changing conditions in the workload or environment.
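For illustration only (not taken from the article), the sketch below shows one simple, commonly used way to model server power as a linear function of utilization; the wattage figures in the example are hypothetical placeholders, and calibrated benchmarks of the kind the authors call for would replace them with measured values.

def linear_power_model(utilization, p_idle_watts, p_peak_watts):
    """Estimate power draw from utilization (0.0 to 1.0) by linear interpolation.

    Real servers deviate from a straight line, which is why accurate,
    calibrated models and energy-efficiency benchmarks matter.
    """
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be between 0 and 1")
    return p_idle_watts + (p_peak_watts - p_idle_watts) * utilization

def proportionality_gap(utilization, p_idle_watts, p_peak_watts):
    """Extra power drawn versus an ideal energy-proportional server (zero idle power)."""
    ideal = p_peak_watts * utilization
    return linear_power_model(utilization, p_idle_watts, p_peak_watts) - ideal

# Hypothetical numbers: a server idling at 200 W with a 350 W peak,
# running at 30 percent utilization.
print(linear_power_model(0.3, 200, 350))   # 245.0 W estimated draw
print(proportionality_gap(0.3, 200, 350))  # 140.0 W above the proportional ideal

The gap at low utilization is the mismatch the surrounding summaries describe: servers spend most of their time in their least efficient operating region.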
The Green500 List: Encouraging
Sustainable Supercomputing
pp. 50-55
Wu-chun Feng and Kirk W. Cameron
Despite a 10,000-fold increase since 1992 in supercomputers’ performance when running parallel scientific applications, performance per watt has only improved 300-fold and performance per square foot only 65-fold, forcing researchers to design and construct new machine rooms and, in some cases, entirely new buildings. Compute nodes’ exponentially increasing power requirements are a primary driver behind this less efficient use of power and space.
Today, several of the most powerful
supercomputers on the TOP500 List
each require up to 10 megawatts of
peak power—enough to sustain a city
of 40,000. To inspire more efficient
conservation efforts, the HPC community needs a Green500 List to rank
supercomputers on speed and power
requirements and to supplement the
TOP500 List.
Life Cycle Aware Computing:
Reusing Silicon Technology
pp. 56-61
John Y. Oliver, Rajeevan Amirtharajah,
Venkatesh Akella, Roland Geyer, and
Frederic T. Chong
Many consumer electronic devices, from computers to set-top boxes to cell phones,
require sophisticated semiconductors
such as CPUs and memory chips. The
economic and environmental costs of
producing these processors for new
and continually upgraded devices are
enormous. Because the semiconductor
manufacturing process uses highly
purified silicon, the energy required is
quite high—about 41 megajoules (MJ)
for a 1.2 cm² dynamic random access
memory (DRAM) die. In terms of
environmental impact, 72 grams of
toxic chemicals are used to create
such a die.
Processor reuse can help deal with
these increasingly severe economic and
environmental costs, but it will require
innovative techniques in reconfigurable computing and hardware-software codesign as well as governmental
policies that encourage silicon reuse.
“The technical details and clarity of the articles is
beyond anything I’ve seen available elsewhere.”
– Sun Microsystems Engineer and IEEE Subscriber
From Imagination to Market
IEEE Computer Society
Digital Library
Premier Collection of Computing Periodicals & Conferences
Covers the complete spectrum of computing and delivers the
highest quality, peer-reviewed content available to users.
Over 180,000 top quality computing articles and papers
23 peer-reviewed periodicals
Over 170 conference proceedings, with a backfile to 1995
OPAC links for easy cataloging
Monthly ‘what’s new’ email notification of new content
and services
Free Trial!
Experience IEEE – request a trial for your company.
www.ieee.org/computerlibrary
IEEE Information Driving Innovation
PRESIDENT’S MESSAGE
An Interesting Year
Michael R. Williams
IEEE Computer Society 2007 President
The Society’s 2007 president
reflects on a year of challenges
and accomplishments.
There is an old phrase that says, “May you live in interesting times.” This saying is often interpreted as a curse because “interesting” can imply a wide variety of situations. I can confidently say that 2007 has been “interesting” in almost every sense of the word.
A NEW EXECUTIVE DIRECTOR
One major event this year was the search for a new executive director of the Computer Society. The search was open, in the sense that there was no obvious candidate. Many extremely well-qualified individuals applied, and reducing the list to manageable proportions was a difficult task for the search committee. The candidates represented diverse areas, including academic, industrial, and nonprofit institutions.
Although I have been involved in searching for senior leadership and administrative people in the past, I have seldom seen a more impressive list of individuals than those we chose to interview. After several interviews, we finally met over a holiday weekend to spend both formal and informal time with the top three candidates. They each brought a different set of strengths to the position and each would have made a fine executive director.
After much debate, we chose Angela Burgess for the job, and she assumed her duties shortly thereafter. Those of you who have met Angela will, I am sure, think that the search committee did a great job in arriving at this decision. I am confident that this will be a major step forward in the Society’s history, and Angela’s abilities will be welcomed for many years to come.
A BUILDING CRISIS
With every good “interesting”
development, there is often a counterpart. One of the worst this year was
the result of trying to be good citizens
within the IEEE.
Our staff reorganization (about
which, more later) left us with some
excess space in our Washington, DC,
headquarters building. The building
is well over 100 years old and is a
heritage-listed structure in the middle of Embassy Row in DC. Another
IEEE organization was headquartered in Washington, and it seemed
to make eminent sense to agree to a
proposal that we share office space
in our building.
Since this would require some renovations, we called in engineers and
architects to advise us—no sense trying to remove a support wall or something equally devastating. When the
reports came back, we were surprised
to learn that the building infrastructure—electricity, plumbing, heating,
and so on—not only could not accommodate the proposed renovations, but
was outdated enough to pose potential safety risks. We immediately
moved our staff out until we could
determine the best course of action to
remedy the problems.
As I write this message, a second
group of engineers is studying the situation, so I can’t give you any definite
word on the final outcome. While we
would all like instant answers in such
situations, doing the job properly takes
time. I hope that we can have definitive
plans and cost estimates in hand by the
time you actually read this.
I would like to thank our sister IEEE
organization, IEEE-USA, for providing us emergency office accommodations until we sort out this mess.
They’ve been extremely helpful and,
despite this disruption to their own
office functions, have been more than
welcoming to our employees.
REORGANIZING THE SOCIETY
My presidential message at the start
of 2007 indicated that this would be a
year of decision regarding our organization. I am pleased to say that much
of the staff and volunteer reorganization has begun, and I am sure this will
result in a more effective organization
in the future. Like any major change,
the complete plan for reorganizing our
Society will take time and will likely
be an ongoing effort for several years.
Angela Burgess, our new executive
director, has been instrumental in
implementing the myriad details of
such things as rewriting position
descriptions and recruiting staff to fill
the new vacancies.
It is not only the staff that is being
reorganized, but also the volunteer
side of the organization. While it
might be simple to say that combining the Technical Activities Board and
the Conferences and Tutorials Board
will result in greater synergy in both
areas, it is something else to actually
plan for a smooth transition, rewrite
the governing bylaws, establish new
modes of working, and try to foresee
potential pitfalls. I would like to
express my personal thanks to all the
many volunteers who have helped in
this and similar endeavors. The list is
long, and I will not try to name them
all. Of course, such dedicated effort
also will be required in 2008 to
accomplish the next steps in the plan.
All this reorganization effort is necessary for both budget and efficiency
reasons. However, it is disruptive and
we must not lose sight of the reasons
this Society exists in the first place. We
have so far managed to keep a good
perspective on the situation and not
only have kept up such things as our
benefits to members but have actually
increased them in some areas. For
example, in 2008, student members
will have a particularly attractive benefit of being able to access free software from Microsoft.
I have always tried to remember
that making changes is but a step
toward providing better services to
our constituents. However, common
wisdom—particularly that saying
about when you’re up to your waist
in alligators, it isn’t easy to remember
that the objective was to drain the
swamp—rings true.
AN END AND A BEGINNING
As my term as president comes to
an end, I look back on this as truly
being an “interesting” year. The three
items I have touched on in this message are but a fraction of the events
and situations that have kept me busy.
In my January message in Computer,
I said, “I hope that, at the end of this
year, we can look back and not only
conclude that I did my best but that it
was to the benefit of the Society and
IEEE as a whole.” I can say that I have
done my best, but I will leave it to others to make the rest of that judgment.
On 1 January 2008, Ranga Kasturi
will take over from me as president,
and Susan (Kathy) Land will be the
president-elect. Kasturi (as he is usually known) is one of the most
thoughtful and capable individuals I
have ever met. He will certainly be a
president who will take the Society
forward to even greater accomplishments. Kathy is also an accomplished
and dedicated volunteer, and the two
of them will make a good team. I have
every confidence that I leave the
Society in the best possible hands for
2008 and beyond.
With a Society as large as ours, the personal experiences of our members span the complete spectrum. I have heard from
some of you who have gone on to
great success in 2007, from others who
suffered the ravages of earthquakes
and hurricanes, and others who have
had less extreme experiences. Whatever your experiences in 2007, I want
to wish you the very best possible
2008.
It has been my honor to serve as
your president in this “interesting”
year, and I thank you for the opportunity. I would also be remiss if I did
not thank all the dedicated volunteers
and staff that made it possible to
actually accomplish all that we did in
2007. ■
Michael R. Williams, a professor emeritus of computer science at the University of Calgary, is a historian specializing
in the history of computing technology.
Contact him at m.williams@computer.org.
IN OUR TIME
A Force
of Nature
David Alan Grier
George Washington University
Our educational system does
little to prepare computer
science students for making the
transition to the working world.
When I first met them, Jeff, Jeff, and Will were inseparable. Un pour tous, tous pour un (one for all, all for one). As far as I could tell, they spent every waking moment in each other’s presence. I could commonly
find them at the local coffee shop or
huddled together in some corner of a
college building. The younger Jeff
would be telling an elaborate story to
whatever audience was ready to listen.
The elder Jeff would be typing on a
computer keyboard. Will might be
doodling in a notebook or flirting with
a passerby or playing garbage can basketball with pages torn from the day’s
newspaper.
Among colleagues, I referred to
them as the Three Musketeers, as they
seemed to embody the confidence of
the great Dumas heroes. They were
masters of technology and believed
that their mastery exempted them
from the rules of ordinary society. The
younger Jeff, for example, believed
that he was not governed by the law
of time. When given a task, he would
ignore it until the deadline was bearing down on him. Then in an explosion of programming energy, he would
pound perfect code into his machine.
The elder Jeff was convinced that
specifications were written for other
people, individuals with weaker
morals or limited visions. He wrote
code that was far grander than the
project required. It would meet the
demands of the moment, but it also
would spiral outward to handle other
tasks as well. You might find a game
embedded in his programs or a trick
algorithm that had nothing to do with
the project or a generalization that
would handle the problem for all time
to come.
Will was comfortable with deadlines and would read specifications, but he lived in a non-Euclidean world. He
shunned conventional algorithms and
obvious solutions. His code appeared
inverted, dissecting the final answer in
search of the original causes. It was
nearly impossible to read, but it
worked well and generally ran faster
than more straightforward solutions.
DISRUPTION
The unity of the Three Musketeers
was nearly destroyed when Alana
came into their circle. She was a force
of nature and every bit the intellectual
equal of the three boys. She took possession of their group as if it were her
private domain. Within a few weeks,
she had them following her schedule,
meeting at her favorite places, and
doing the things that she most liked
to do. She even got them to dress
more stylishly, or at least put on
cleaner clothes.
Alana could see the solution of a
problem faster than her compatriots,
and she knew how to divide the work
with others. For a time, I regularly saw
the four of them in the lounge, laughing and boasting as they worked on
some class project. One of their number, usually a Jeff, would be typing
into a computer while the others discussed what task should be done next.
Paper wads would be scattered
around a wastebasket. A clutch of
pencils would be neatly balanced into
a pyramid.
It was not inevitable that Alana
should destabilize the group, but that
is what eventually happened. Nothing
had prepared the boys for a woman
who had mastered both the technical
details of multiprocessor coding and
the advanced techniques of eye makeup. For reasons good or ill, Alana was
able to paint her face in a way that
made the souls of ordinary men melt
into simmering puddles of sweat.
Steadily, the group began to dissolve. The end was marked with little, gentle acts of kindness that were
twisted into angry, malicious incidents by the green-eyed monster of
jealousy. Soon Jeff was not speaking
to Jeff, Will was incensed with the
Elder, the Younger had temporarily
decamped for places unknown, and
Alana was looking for a more congenial group of colleagues.
Eventually, the four were able to
recover the remnants of their friendship and rebuild a working relationship, but they never completely
recovered their old camaraderie.
Shortly, they moved to new jobs and
new worlds, where they faced not
only the pull of the opposite sex but
also had to deal with the equally
potent and seductive powers of
finance and money.
MOVING ON
The younger Jeff was the first to
leave. He packed his birthright and
followed the western winds, determined to conquer the world. With a
few friends, he built an Internet radio
station, one of the first of the genre.
They rented space in a warehouse,
bought a large server, and connected
it to the Internet. They found some
software to pull songs off their collection of CDs and wrote a system that
would stream music across the network while displaying ads on a computer screen. They christened their
creation “intergalactic-radio-usa.net.”
For a year or two, the station occupied a quirky corner of the Net. It was
one of the few places in those days
before MP3 downloads where you
could listen to music over the Internet.
Few individuals had a computer that
reproduced the sounds faithfully, but
enough people listened to make the
business profitable.
One day, when he arrived at work,
Jeff was met at the door by a man with
a dark overcoat, a stern demeanor,
and a letter from one of the music
publishing organizations, BMI or
ASCAP. The letter noted that intergalactic-radio-usa had not been paying royalties on the music it broadcast,
and it demanded satisfaction. A date
was set. Seconds were selected. Discussions were held. Before the final
confrontation, the station agreed to a
payment schedule and returned to the
business of broadcasting music.
Under the new regime, the station
had to double or triple its income in a
short space of time. This pushed Jeff
away from the technical work of the
station. His partners had grown anxious over his dramatic programming
techniques. They wanted steady progress toward their goals, not weeks of
inaction followed by days of intense
coding. They told him to get a suit,
build a list of clients, and start selling
advertising.
Jeff was not a truly successful salesman, but he also was not a failure. His
work allowed him to talk with new
people, an activity he loved, but it did
not give him the feeling of mastery
that he had enjoyed as a programmer.
“It’s all governed by the budget,” he
told me when he started to look for a
new job. “Everything is controlled by
the budget.”
AN EVOLVING BUSINESS
The elder Jeff left shortly after his
namesake. He and some friends
moved into an old house in a reviving
part of the city, a living arrangement
that can best be described as an
Internet commune. They shared
expenses and housekeeping duties and
looked for ways to make money with
their computing skills. Slowly, they
evolved into a Web design and hosting company. They created a Web
page for one business and then one for
another and finally admitted that they
had a nice, steady line of work.
Overcoming the pressures
and demands of the
commercial world
requires qualities
learned over a lifetime.
As the outlines of their company
became clearer, Jeff decided that they
were really a research and development laboratory that supported itself
with small jobs. He took a bedroom
on an upper floor and began working
on a program that he called “the ultimate Web system.” Like many of the
programs the elder Jeff produced, the
ultimate Web system was a clever idea.
It is best considered an early content
management system, a way to allow
ordinary users to post information
without working with a programmer.
As good as it was, the ultimate
Web system never became a profitable product. Jeff had to abandon
it as he and his partners began to
realize that their company needed
stronger leadership than the collective anarchy of a commune. They
needed to coordinate the work of
designers, programmers, salespeople,
and accountants.
As the strongest personality of the
group, Jeff slowly moved into the role
of president. As he did, the company
became a more conventional organization. Couples married and moved
into their own homes. The large house
uptown ceased to be a residence and
became only an office.
Long after Jeff had begun purchasing Web software, he continued to
claim that he was a software developer.
“I’ll get back to it some day,” he would
say. “It will be a great product.”
MAKING CHOICES
Will was the last to leave. He started
a company that installed computers
for law firms. Our city hosts a substantial number of lawyers, so his
long-term prospects were good. I once
saw Will on the street, pushing a cart
of monitors and network cables. He
looked active and happy. Things were
going well, he said. He had plenty of
work but still had enough time to do
a little programming on the side.
We shook hands, promised to keep
in touch, and agreed to meet for dinner on some distant day. That dinner
went unclaimed for five years. It might
have never been held had not I
learned, through one of the Jeffs, that
Will had prospered as a programmer
and now owned a large specialty software firm.
I scanned his Web page and was
pleased with what I saw. “Software in
the service of good,” it read. “Our
motto is people before profit. Do unto
others as you would have them do
unto you.”
I called his office, was connected to
“President Will,” and got a quick
summary of his career. He had started
creating programs for disabled users
and had found a tremendous market
for his work. After a brief discussion,
we agreed to a nice dinner downtown, with spouses and well-trained
waiters and the gentle ambience of
success. We spent most of the evening
talking about personal things—families, children, and houses. Only at the
end of the evening did we turn to
work. “How is the business going?”
I asked.
“Well,” he said, but the corners of
his lips pursed.
I looked at him a moment.
“Everything okay?” I queried.
He exchanged a glance with his wife
and turned back to me. “Extremely
well. We have more business than we
can handle.”
Again, I paused. “Starting to draw
competition?”
He smiled and relaxed for a brief
moment. “No,” he said.
I sensed that something was happening, so I took a guess. “A suitor
sniffing around?”
He looked chagrined and shook his
head. “Yeah.”
“A company with three letters in its
name?”
“Yup,” he said.
“It would be a lot of money,” I
noted.
“But then it wouldn’t be my firm,”
he said. After a pause, he added, “And
if I don’t sell, the purchaser might try
to put me out of business.”
We moved to another subject, as Will
was clearly not ready to talk any more
about the potential sale of his company. It was months later that I learned
that he had sold the company and had
decided to leave the technology industry. The news came in a letter that
asked me to write a recommendation
for a young man who wanted to
become “Reverend Will.”
Almost every technical field feels the
constant pull of business demands.
“Engineering is a scientific profession,” wrote the historian Edwin
Layton, “yet the test of the engineer’s
work lies not in the laboratory but in
the marketplace.” By training, most
engineers want to judge their work by
technical standards, but few have that
opportunity. “Engineering is intimately related to fundamental choices
of policy made by organizations
employing engineers,” notes Layton.
LESSONS LEARNED
The experiences of Jeff, Jeff, and
Will have been repeated by three
decades of computer science students.
They spend four years studying languages, data structures, and algorithms and pondering that grand
question, “What can be automated?”
Then they leave that world and move
to one in which profit is king, deadlines are queens, and finance is a knave
that keeps order.
Our educational system does little
to prepare them for this transition.
An early report reduced the issue to
a pair of sentences. “A large portion
of the job market involves work in
business-oriented computer fields,”
the report noted before making the
obvious recommendation. “As a
result, in those cases where there is a
business school or related department, it would be most appropriate
to take courses in which one could
learn the technology and techniques
appropriate to this field.”
Of course, one or two courses can’t
really prepare an individual for the
pressures and demands of the commercial world. Overcoming pressures
requires qualities that are learned over
a lifetime. Individuals need poise,
character, grace, a sense of right and
wrong, an ability to find a way
through a confused landscape.
Often professional organizations,
including many beyond the field of
computer science, have reduced such
qualities to the concept of skills that
can be taught in training sessions:
communications, teamwork, self-confidence. In fact, these skills are better
imparted by the experiences of life, by
learning that your roommate is writing and releasing virus code, that you
have missed a deadline and will not be
paid for your work, that a member of
your study group is passing your work
as his own.
“It is doubly important,” wrote
Charles Babbage, “that the man of science should mix with the world.” In
fact, most computer scientists have little choice but to mix with the world,
as the world provides the discipline
with problems, ideas, and capital. It is
therefore doubly important to know
how the ideas of computer science
interact with the world of money.
One of the Three Musketeers, safely out of earshot of his wife, once asked me what had become of Alana. I was not in close contact
with her. However, I knew that her life
had been shaped by the same forces
that had influenced the careers of her
three comrades, although, of course,
her story had been complicated by the
issues that women must face. She had
moved to Florida and built a business
during the great Internet bubble. She
had taken some time to start a family,
perhaps when that bubble had burst in
2001, and was now president of her
own firm. I didn’t believe that she had
done technical work for years.
“You’ve not kept in touch,” I said
more as a statement than a question.
“No,” he said.
I saw a story in his face, but that
story might reflect more of my observations than of his experience. It told
of a lost love, perhaps; a lesson
learned; a recognition that he and his
college friends were all moving into
that vast land of the mid-career knowing a little something about how to
deal with the forces of nature. ■
David Alan Grier is the editor in chief, IEEE Annals of the History of Computing, and the author of When Computers Were Human (Princeton University Press, 2005). Grier is associate dean of International Affairs at George Washington University. Contact him at grier@gwu.edu.
32 & 16 YEARS AGO
DECEMBER 1975
COMPUTER EDUCATION (p. 27). “From the educator’s
point of view, perhaps no problem is so apparent as that
of overcoming the dichotomy between computer science
and computer engineering. The task of developing curricula that harmoniously integrate those two components
of computing has been reminiscent of the battles of great
prehistoric beasts in the tar pits: the fiercer and more passionate the struggle, the sooner the combatants are
ensnared in the tar. So far at any rate, the student has had
to choose between computer science or computer engineering—and of course both he and his employer have
had to pay the price in terms of increased training requirements and delayed effectiveness.”
COMPUTING CURRICULA (p. 29). “It is the conclusion of
[the IEEE Computer Society Model Curricula Subcommittee] that these four areas will provide a first entry to
industry-level instruction in computer science and engineering:
“Digital/Processor Logic: Instruction towards an understanding of digital logic devices and their interconnection to provide processing functions. …
“Computer Organization: Instruction towards an understanding of the interaction between computer hardware and software, and the interconnection of system components.
“Operating Systems and Software Engineering
“Theory of Computing: Instruction towards an understanding of the formal aspects of computing concepts, to include discrete structure, automata and formal languages, and analysis of algorithms.”
US SURVEY (p. 40). “The growth of computer science
and computer engineering, at least in EE departments,
has slowed but still continues. Throughout the survey,
growth since 1972 has been slower than growth before
1972. EE departments offer only 2% more CS or CE
options than they did two years ago. They have slightly
more faculty in the computer area. They do offer more
computer courses, particularly minicomputer and microcomputer courses. The growth of the latter courses has
been very rapid with 37% of EE departments reporting
one or more courses on microprocessors in the 1974-1975 school year.”
LARGE-SCALE COMPUTERS (p. 82). “Amdahl Corporation has delivered three of its $3.7 to $6.0 million 470V/6
large-scale computers and expects to deliver three more
before the end of the year. In all installations to date (Texas
A&M University, Columbia University, and the University
of Michigan), the computer replaced one or more IBM
systems.
“The company states that the 470V/6 can be substituted for an IBM 370 or 360 being used to run any set
of programs and using any peripheral mix. No more
changes are required than would be to change from one
model of the 370 to another.”
COMPUTER MEDIA SERVICING (p. 84). “A new
approach in the servicing of computer centers will be
pursued jointly in the Los Angeles area by Memorex and
Datavan Corporations.
“Specially-outfitted and staffed Datavan mobile units
will travel to customer computer installations where they
will re-ink ribbons, clean and recertify tapes, and clean
disk packs. They will be stocked with Memorex’s computer media products consisting of computer tape, disk
packs and cartridges, data modules, and flexible disks.”
IMPACT PRINTER (p. 86). “Documation, Inc. has
announced the availability of the new DOC 2250 high-speed, impact line printer, capable of printing 2250 single-spaced lines per minute using a 48 graphic character
set.”
“The printer is a free-standing unit containing its own
power supply and control logic. The integrated controller is a Documation-developed microprocessor. The
controller communicates through its interface with the
host system, decodes all commands, controls the printer
hardware, and reports various errors and status.”
PANAMA CANAL (p. 88). “About 40 ships per day pass
through the Panama Canal and that number is expected
to increase significantly in coming years. Because of the
projected upswing, the Panama Canal Company performed studies to develop requirements and specifications for a new Marine Traffic Control System (MTCS).
“The MTCS, implementing 25 General Electric
TermiNet 300 send-receive printers, is a computer-based
system designed to assist in collecting, assimilating, displaying, and disseminating schedules and related data
used to coordinate and control transit operations.”
POST OFFICE LABELS (p. 88). “The United States Postal
Service will introduce a labeling system next year which
will increase the efficiency of transporting mail between
the post office’s 40,000 stations around the country.
“The system, being developed by the Electronics and
Space Division of Emerson Electric Company, is essentially a computerized printing system which has a capability of producing a yearly total of eight-billion labels
and slips used to attach to mail bags and bundles of letters with a common destination.”
“With the new system, the post office’s computerized
printing plant for labels and slips will produce a supply
for every post office based on its own special needs. The
necessary data will be stored on magnetic tape and used
to produce two-week supplies for each order. Changes
to the order will be easily assimilated on the tape, reflecting each office’s latest routing needs.”
DECEMBER 1991
PRESIDENT’S REPORT (p. 4). “… At its November 1 meeting, the Board of Governors approved an agreement with
ACM to form the Federation on Computing in the United
States (FOCUS), the successor to the American Federation
of Information Processing Societies (AFIPS). That agreement establishes an organization of technical societies to
represent US computing interests in the International
Federation for Information Processing (IFIP).”
“We must develop ways to disseminate our transactions, magazines, conferences, and other material electronically. … I have become convinced that the effort
required is beyond what we can expect from a few volunteers. For that reason, we asked and obtained board
approval for a staff position dedicated to this effort.”
DATABASE INTEGRATION (p. 9). “Many believe that
standards development will resolve problems inherent in
integrating heterogeneous databases. The idea is to
develop systems that use the same standard model, language and techniques to facilitate concurrent access to
databases, recovery from failures, and data administration functions. This is easier said than done. Agreement on
standards has proven to be one of the most difficult problems in the industry. Most vendors and end users have
already invested in separate solutions for their problems.
Getting them to agree on a common way of handling their
data is challenging.”
ADDRESSING HETEROGENEITY (p. 17). “Our approach
to addressing schematic heterogeneity is to define views
on the schemas of more than one component database
and to formulate queries against the views. The view definition can specify how to homogenize the schematic heterogeneity in CDBS [Component DataBase System] views.
Our approach to data heterogeneity is twofold: First, we
allow the MDBS [MultiDataBase System] query processor to issue warnings when it detects wrong-data conflicts
in query results. Second, we allow the MDBS users and/or
database administrator to prepare and register lookup
tables in the database so that the MDBS query processor
can match different representations of the same data.”
MULTIDATABASE TRANSACTIONS (p. 28). “A transaction is a program unit that accesses and updates the data
in a database. An everyday example of a transaction is
the transfer of money between bank accounts. The debiting of one account and the crediting of another are each
separate actions, yet the combination of these actions is
viewed by the ‘user’ as one. The notion of combining
several actions into a single logical unit is central to many
of the properties associated with transactions.”
“Transaction processing in a distributed environment
is complex because the actions that compose a transaction can occur at several sites. Either all these actions
should succeed, or none should. Thus, an important
aspect of transaction processing for a distributed system is reaching agreement among sites. The most widely
used solution to this agreement problem is the two-phase commit (2PC) protocol.”
INTERDATABASE CONSISTENCY (p. 46). “In most applications, the mutual consistency requirements among
multiple databases are either ignored or the consistency
of data is maintained by the application programs that
perform related updates to all relevant databases.
However, this approach has several disadvantages. First,
it relies on the application programmers to maintain
mutual consistency of data, which is not acceptable if
the programmers have incomplete knowledge of the
integrity constraints to be enforced. … Since integrity
requirements are specified within an application, they
are not written in a declarative way. If we need to identify these requirements, we must extract them from the
code, which is a tedious and error-prone task.”
AUTONOMOUS TRANSACTIONS (pp. 71-72). “The evolution of classic TP [Transaction Processing] to
autonomous TP has just begun. The issues are not yet
clearly drawn, but they undoubtedly affect the way TP
systems are designed and administrated. Transaction
execution is affected as well if users require independent
TP operations during network partitions or communication failures. We believe that asynchronous TP provides a suitable mechanism to support execution
autonomy, because temporary inconsistency can be tolerated and database consistency restored after the failure
is repaired. Divergence control methods must be devised
that give systems the flexibility needed to evolve from
classic TP to autonomous TP.”
GENETIC ALGORITHMS (p. 93). “NovaCast of Sweden
has launched the C Darwin II general-purpose tool for
solving optimization problems. The shell can reproduce
with stepwise, leaping changes and an adaptive selection
process that accumulates small, successive, and favorable variations from one generation to another.
“The program solves product design and planning,
machine scheduling, composition, and functional optimization problems. Users define how they want the solution to be presented and describe the environment in
terms of its conditions, restrictions, and cost factors.
The program evaluates a generation of conceivable solutions in parallel.”
PDFs of the articles and departments of the December
1991 issue of Computer are available through the
Society’s Web site, www.computer.org/computer.
Editor: Neville Holmes; neville.holmes@utas.edu.au
INDUSTRY TRENDS
The Changing
World of
Outsourcing
Neal Leavitt
Outsourcing—a practice once considered controversial—has become widespread, not only with technology companies but also with the IT departments of firms in other industries.
The volume of tech offshore outsourcing—in which companies in
economically advanced countries
send work to businesses in developing nations—has increased since the
approach became popular during
the economic boom of the mid-1990s. During that time, the nature
of the practice stayed largely the
same.
Now, though, this appears to be
changing.
For example, companies that handle outsourcing are beginning to
consolidate, creating larger providers offering a broader range of
services. At the same time, niche
providers are emerging.
In addition, countries such as
China are beginning to compete
with India, the longtime outsourcing-services leader.
Once, large companies did most
tech outsourcing. Now, smaller and
mid-size companies are beginning to
outsource work. Also, companies
primarily used to outsource large
projects, such as basic application
development or call-center operations. Now, as outsourcing becomes
more widespread, businesses are
starting to contract out smaller
projects—including complex scientific and R&D projects—to more
and more providers.
Various technical and marketplace
developments have driven and
enabled these changes. And industry
observers expect more changes in
the long run.
INSIDE OUTSOURCING
Technology-related outsourcing
began in the early 1980s and grew
rapidly in the mid-1990s. The driving forces included the expanding
tech economy, increased pressure on
IT departments to do more with
their resources, the increasing complexity of managing IT and keeping
up with rapidly advancing technologies, and the difficulty in finding IT workers in all skill areas,
noted David Parker, vice president
of IBM Global Technology Services’
Strategic Outsourcing operations.
Since then, companies have outsourced more than just IT functions.
For example, they have used the
approach to make their production,
customer service, and other processes
efficient and inexpensive by farming
them out to businesses that have the
necessary expertise and can perform
them less expensively, Parker noted.
This work includes help desk, data
center, and network management
operations; database administration;
and server management.
There are outsourcing providers
in both economically advanced and
developing countries. The latter are
able to offer lower costs because
workers there receive lower wages.
Offshore outsourcing has become
controversial, especially in the US,
where critics say it is a way for
domestic companies to save money
by taking jobs from local workers
and moving them overseas.
Technology has helped change the
face of outsourcing. For example,
improvements in telecommunications
and Internet-based technologies such
as videoconferencing, instant messaging, and Internet telephony make
communication faster and more
widely available for outsourcing
providers and their clients, noted Alex
Golod, vice president of business
development for Intetics, a global
software development company.
Legal issues are also important.
Some businesses in developed
nations are outsourcing work to
overseas branches of domestic companies to have more legal recourse
if problems occur, said Ashish
Gupta, CEO of Evalueserve Business
Research, a market-analysis firm.
SHORT-TERM TRENDS
Consolidation
Because more companies are
farming out a greater variety of projects, there has been a proliferation
of outsourcing suppliers in recent
years to meet the demand.
Meanwhile, as outsourcing companies have grown, they have
looked for ways to reinvest profits,
expand, and acquire new capabilities, noted Gupta.
For these reasons, outsourcing is
experiencing a number of mergers
and acquisitions.
For instance, the US-based EPAM
Systems became the largest software-engineering-services provider
in Central and Eastern Europe by
acquiring Vested Development, a
Russian software-development-services firm.
India’s Wipro acquired Infocrossing, a US infrastructure-management services provider, for
$600 million. Wipro has since
opened several software-development and IT-services offices in the US.
Among other recent outsourcing
deals, IBM purchased Daksh,
Computer Sciences Corp. bought
Covansys, and Electronic Data
Systems acquired MphasiS.
According to Intetics’ Golod,
consolidation is a way for small
and midsize vendors to survive
and grow while gaining efficiencies in marketing and operations,
and for larger companies to
acquire competitors and firms
with niche skills.
Beyond India
India has been the leader in outsourcing services since the mid-1990s, mostly because it has a large,
educated, English-speaking tech
workforce; low salaries; and a technology sector that has pursued the
work for many years.
Market-research firm Gartner
warns that a shortage of tech workers and rising wages could erode as
much as 45 percent of India’s market share by next year. In addition,
Indian outsourcers can’t handle the
high volume of available projects.
According to Intetics’ Golod, competition is coming largely from places
such as China, the Philippines,
Eastern Europe, Latin America, and
even developed countries like Ireland
and Israel, as discussed in the sidebar “Developing Countries Join the
Outsourcing Marketplace.”
Market-research firm IDC predicts that China will overtake India
by 2011.
Emerging low-wage countries that
might also pull business from India
over the next few years include
Egypt, Malaysia, Pakistan, and
Thailand.

Developing Countries Join the Outsourcing Marketplace

India has been the leading offshore technology-outsourcing supplier for a decade. The country has leveraged several advantages, including low salaries and a big, English-speaking workforce with college degrees in technology fields.
However, there have been growing opportunities for other developing countries to enter the outsourcing-services market. For example, there is more outsourcing work than India can handle. In addition, wages in India are rising, and companies in other countries are actively pursuing and promoting their services.

China
This country attracts outsourcing projects largely in areas such as low-end, PC-based application development, quality-assurance testing, system integration, data processing, and product development. India is even outsourcing work to China.
Market research firm IDC's Global Delivery Index—which ranks locations according to criteria such as available skills, political risk, and labor costs—said Chinese cities have made significant investments in infrastructure, English-language instruction, and Internet-connection availability. The country has undergone a massive telecommunications expansion as a result of national economic policy. Also, China is producing 400,000 college graduates in technology fields annually, said Kenneth Wong, managing partner at SmithWong Associates, a China-focused US consulting firm.
"Language issues are no longer the major handicap to China-based outsourcing," noted Wong. "Many IT personnel in China today are US-educated. The Chinese government knows it has a way to go in reaching English-proficiency parity with India, but the signs are encouraging."
Paul Schmidt, a partner with outsourcing consultancy TPI, added that demand for the country's outsourcing services is driven largely by multinational corporations looking for access to China's domestic market and a presence in the rest of the Asia-Pacific region.

Russia
Russia's principal outsourcing competencies include Internet programming, Web design, and Web-server and Web-database application development.
The country has an educated, experienced labor force, with the world's third-largest pool of engineers and scientists per capita. English competency is good for mid- to higher-level managers and acceptable for developers who communicate directly with clients, noted Alex Golod, vice president of business development for Intetics, a global software-development company.

Eastern Europe
Countries such as Belarus, Bulgaria, the Czech Republic, Hungary, Poland, Romania, and Ukraine specialize largely in application development, particularly for complex scientific projects or commercial products. The region has a solid educational base, producing qualified scientists and engineers, explained Golod. Also, he said, the workers in these companies prefer complex, challenging projects to simple coding.
In addition, larger clients are interested in sending work to Eastern Europe because they want to diversify their outsourcing across geographic regions. Eastern Europe gets a lot of projects from Western Europe and the UK, which are in nearby time zones, as well as the US, he said.

The Philippines
This nation has a large English-speaking population and is carving out a niche for call-center operations. Evalueserve Business Research, a market-analysis firm, reports that favorable factors include a 94 percent literacy rate, a high-quality telecommunications infrastructure, familiarity with Western corporate culture, and government initiatives such as exemptions from license fees and export taxes that have stimulated outsourcing growth. Also, wages are low.

Latin America
Countries in this region—particularly Argentina, Brazil, and Mexico—specialize in outsourcing projects such as application development. They have the advantage of time-zone proximity to the US. This makes Latin America ideal for time-sensitive projects and work where the outsourcer must communicate quickly and regularly with the outsourcing company.

Developed countries
Ireland has a favorable IT services infrastructure and is strong in software development and testing. Israeli outsourcing providers specialize in commercial software development, particularly security and antivirus products. Although both nations have higher labor costs than developing countries, they continue to attract business because of their well-educated workforces and stable governments.

Small-scale and niche outsourcing
Traditionally, outsourcing has
entailed big companies and large,
long-term projects such as major
application development or the
operations of entire departments.
Bigger companies have been more
willing to pay for outsourcing than
smaller ones, have had more tasks
they could offload, and could provide larger contracts than smaller
businesses.
Now, though, more companies are
competing with large outsourcing
providers. This includes outsourcing
providers targeting smaller businesses and smaller, shorter-term
jobs—including minor testing and
business-analysis projects—either to
capture a market niche or to expand their customer base. This work can entail either parts of larger jobs or small, individual projects.
To remain competitive, the larger
providers are also looking at the
smaller end of the outsourcing market, in part by standardizing their
offerings to make them less expensive, noted IBM’s Parker.
Outsourcing consultancy TPI says
the number of commercial outsourcing contracts of more than $50
million, especially in the US, has
declined.
According to TPI, many Indian
providers are seeking smaller jobs than
in the past and thus are expected to
increase revenue from North American companies by 37 percent during
the next two years, even though the
region is outsourcing less work.
Niche outsourcing providers are
well placed for the smaller contracts—often for more focused projects—that are becoming more
popular, said Peter Allen, TPI partner and managing director of marketing development.
The growth of niche outsourcing
is contributing to multisourcing, in
which companies farm out different
parts of a project to multiple specialty service providers, said Brian
C. Daly, public relations director for
Unisys’ outsourcing business.
“This makes it possible to choose
best-of-breed outsourcing firms to
handle different tasks and to optimize costs,” noted Intetics’ Golod.
Outsourcing new
and more complex tasks
As outsourcers have gained more
skills, companies are beginning to
farm out more difficult technology
tasks. Banks and other businesses,
for instance, are moving well beyond
outsourcing low-level application-maintenance work and are increasingly relying on offshore providers
for help with full-system projects.
These include providing online banking capabilities, customizing enterprise-resource-planning systems, or conducting statistical and actuarial projects for insurance firms.
Outsourcing the modernization of legacy applications is another area with significant potential. This would help, for example, companies that no longer have the in-house expertise to work with older applications written in languages such as Cobol and that don't want to spend the time and money necessary to rewrite them in other languages.

Cost not the only factor
Companies are farming out work not only to save money but also to gain long-term access to outsourcing firms' talent, as well as their innovative, creative, and advanced approaches, said Unisys' Daly.
Many companies find recruiting to be a tedious and expensive
process, with choices often limited
by local manpower resources,
added Rob Enderle, president and
principal analyst with the Enderle
Group, a market-research firm.
Global outsourcing firms have
access to extensive pools of talent
and well-established recruiting procedures.
Industry observers expect offshore
outsourcing to continue growing.
For example, Forrester Research,
a market-analysis firm, estimates that
software outsourcing will account
for 28 percent of IT budgets in the
US and Europe by 2009, up from 20
percent in 2006.
Forrester also predicts that the
number of overseas software developers working on projects for firms
in developed countries will rise from
360,000 this year to about 1 million
by 2009.
Companies will increasingly outsource telecommunications- and
Internet-based development because
the technologies in these areas are
based on international standards
and are thus easy for offshore vendors to work on.
Security is a big concern for many
companies, with the growing use of
mobile technology and the increasing complexity of threats. More
companies will thus outsource short-term and ongoing security efforts, as
keeping up with the work themselves will require too much time
and money.
However, there may be a limit on
growth prospects, Golod said. For
example, he noted, international
political and economic problems
could reduce the amount of outsourced work.
According to Enderle, outsourcers
might face problems coordinating
their work with customers as they
take on a greater variety and number of clients and projects.
And, said Renga Rajan, technical director for Fast Pvt. Ltd. (www.fastindia.com), an Indian outsourcing supplier, providers will face challenges keeping their prices down because of rising salaries for skilled workers in developing countries. ■
Neal Leavitt is president of Leavitt
Communications (www.leavcom.com),
a Fallbrook, California-based international marketing communications company with affiliate offices in Brazil,
France, Germany, Hong Kong, India,
and the UK. He writes frequently on
technology topics and can be reached
at neal@leavcom.com.
Editor: Lee Garber, Computer,
l.garber@computer.org
TECHNOLOGY NEWS
A New Virtual
Private Network
for Today’s
Mobile World
Karen Heyman
Virtual private networks
were a critical technology
for turning the Internet
into an important business
tool. Today’s VPNs establish secure connections between a
remote user and a corporate or other
network via the encryption of packets sent through the Internet, rather
than an expensive private network.
However, they traditionally have
linked only a relatively few nodes
that a company’s IT department controls and configures. This is not adequate for the many organizations
that now must let managers, employees, partners, suppliers, consultants,
e-commerce customers, and others
access networks from their own PCs,
laptops, publicly available computers like those at airport kiosks, and
even mobile devices, many not controlled by the organization.
VPNs based on Internet Protocol
security (IPsec) technology were not
designed for and are not well-suited
for such uses. Instead of restricting
remote users who should not have
access to many parts of a company’s
network, explained Graham Titterington, principal analyst with
market-research firm Ovum, “IPsec
[generally] connects users into a
network and gives the same sort of
access they would have if they were
physically on the LAN.”
Organizations are thus increasingly adopting VPNs based on
Secure Sockets Layer technology
from vendors such as Aventail, Cisco
Systems, F5 Networks, Juniper
Networks, and Nortel Networks.
SSL VPNs enable relatively easy
deployment, added Chris Silva, an
analyst at Forrester Research, a
market-research firm. A company
can install the VPN at its headquarters and push any necessary software
to users, who then access the network
via their browsers, he explained.
Organizations thus don’t have to
manage, fix, update, or buy licenses
for multiple clients, yielding lower
costs, less maintenance and support,
and greater simplicity than IPsec
VPNs, Silva said.
“From a remote-access perspective, IPsec is turning into a legacy
technology,” said Rich Campagna,
Juniper’s SSL VPN product manager.
Nonetheless, IPsec VPNs are still
preferable for some uses, such as
linking a remote, company-controlled node, perhaps in a branch
office, with the corporate network.
Both VPN flavors are likely to
continue to flourish, with the choice
coming down to a corporation’s
physical setup and access needs.
However, Silva said, SSL VPNs
might eventually have the edge as
the world goes more mobile.
Meanwhile, SSL VPNs still face
some challenges to widespread
adoption.
VPN BACKGROUND
An early attempt to create a VPN
over the Internet used multiprotocol
label switching, which adds labels to
packets to designate their network
path. In essence, all packets in a data
set travel through designated tunnels
to their destinations. However,
MPLS VPNs don’t encrypt data.
IPsec and SSL VPNs, on the other
hand, use encrypted packets with
cryptographic keys exchanged
between sender and receiver over the
public Internet. Once encrypted, the
data can take any route over the
Internet to reach its final destination.
There is no dedicated pathway.
US Defense Department contractors began using this technique as far
back as the late 1980s, according to
Paul Hoffman, director of the VPN
Consortium (www.vpnc.org).
Introducing IPsec
Vendors initially used proprietary
and other forms of encryption with
their VPNs. However, to establish a
standard way to create interoperable VPNs, many vendors moved to
IPsec, which the Internet Engineering Task Force (IETF) adopted
in 1998.
With IPsec, a computer sends a
request for data from a server
through a gateway, acting essentially
as a router, at the edge of its network. The gateway encrypts the
data and sends it over the Internet.
The receiving gateway queries the
incoming packets, authenticates the
sender’s identity and designated network-access level, and if everything
checks out, admits and decrypts the
information.
Both the transmitter and receiver
must support IPsec and share a public encryption key for authentication.
Figure 1. In an SSL VPN, a remote user logs in to a dedicated Web site to access a company's network. The user's browser initiates the session with a corporate server or desktop computer, which downloads the necessary software to the client. The software uses SSL for encrypting the transmitted data. At the corporate site, the VPN system authenticates users, determines what level of network access they should have, and if everything checks out, decrypts the data and sends it to the desired destination.
Unlike SSL, IPsec is implemented
as a full application installed on the
client. And it doesn’t take advantage
of existing browser code.
IPsec limitations
According to Forrester’s Silva, corporate IT departments increasingly
need to let remote users connect to
enterprise networks, which is challenging with IPsec.
The normal practice of configuring IPsec VPNs to allow full access
to a network can create vulnerabilities. To avoid this, administrators
would have to configure them to
permit access only to parts of a network, according to Peter Silva, technical marketing manager for F5
Networks SSL VPNs.
IPsec VPNs also have trouble letting certain traffic traverse firewalls, he explained. This isn't usually
a problem, as most companies have
the same basic ports open both
inbound and outbound. However, it
is possible that one company would
let traffic out over a port that another
doesn’t leave open for inbound data.
By contrast, the vast majority of
companies have port 80 (dedicated
to HTTP traffic) or 443 (dedicated
to SSL or HTTPS) open inbound
and outbound, so crossing firewalls
is rarely a problem for SSL VPNs,
which are Web-based.
IPsec VPNs are full programs and
thus are large, generally 6 to 8
megabytes. This means they download more slowly and don’t always
work well on smaller devices.
ENTER THE SSL VPN
The first SSL VPN vendor was
Neoteris, purchased in 2003 by
NetScreen, which Juniper bought
the next year, according to Juniper’s
Campagna.
SSL
Netscape Communications developed SSL and released the first public version in 1994. The IETF
adopted the technology as a standard in 1999, naming it Transport
Layer Security. However, most users
still call it SSL. The technology,
which offers the same encryption
strengths as IPsec, has been used
largely to secure financial transactions on the Web.
In an SSL VPN, a user logs in to a
dedicated Web site. The browser initiates the session with the Web
server, which downloads the necessary software to the client, generally
using either ActiveX or Java controls. Administrators can configure
an SSL VPN gateway to conduct
additional checks, such as whether
the connecting device has the latest
security upgrades.
During this process, the client and
server identify common security
parameters, such as ciphers and
hash functions, and use the strongest
ones they both support.
The VPN gateway identifies itself
via a digital certificate that includes
information such as the name of the
trusted authority that issued the certificate, which the client can contact
for verification, and the server’s
public encryption key. The gateway
then sends an encrypted session
cookie to the browser to start the
communications.
To generate the encryption key
used for the session, the client
encrypts a random number with the
server’s public key and sends the
result to the server, which decrypts
it with a private key.
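The exchange just described is what a standard TLS library performs on the client's behalf. The short sketch below, written against Python's standard ssl module and pointed at a hypothetical gateway address (vpn.example.com is a placeholder, not a real product endpoint), shows such a session being established and its negotiated parameters inspected; it illustrates the mechanism rather than any vendor's implementation.

import socket
import ssl

GATEWAY = "vpn.example.com"   # hypothetical SSL VPN gateway
PORT = 443                    # HTTPS port, normally left open through firewalls

# Verify the gateway's certificate against trusted authorities,
# as described above.
context = ssl.create_default_context()

with socket.create_connection((GATEWAY, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=GATEWAY) as tls_sock:
        # By this point the handshake has negotiated the strongest
        # parameters both sides support and exchanged key material.
        print("Protocol:", tls_sock.version())   # for example, 'TLSv1.2'
        print("Cipher:", tls_sock.cipher())      # (name, protocol, secret bits)
        print("Issued by:", tls_sock.getpeercert().get("issuer"))

Everything security-critical here, from certificate verification to cipher negotiation and key exchange, happens inside the library during wrap_socket, which is one reason SSL VPN clients can remain small.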
Once the user’s identity is authenticated, an SSL VPN, like an IPsec
VPN, allows the level of access
granted by company policies for different types of users. Thus, for example, the vice president of human
resources would have access to an
employee salary database while most
other visitors wouldn’t.
All major browsers are SSL-enabled, so SSL VPNs can work with almost any browser and are thus platform- and operating-system-independent, said the VPN Consortium's Hoffman.
This makes them more convenient
to use, particularly for mobile users,
than IPsec VPNs.
SSL advantages
The mobile or stationary user connects to a company’s SSL VPN by
entering a URL in a browser and
then presenting login credentials,
usually a username and password.
This begins the process of establishing a secure connection.
Basic functionality and implementation. Once the initial connection is made, the Web server downloads controllers that work with the
browser’s own code. Thus, the downloads are small, generally between
100 kilobits and 1 megabit. Because
they’re so small, they download fast
(making them easier for ad hoc use),
take up less space on the hard drive,
and work better on smaller devices
such as cellular phones.
Most SSL VPNs use a reverse
proxy, which rewrites the content
from the Web application and presents it to the browser. Proxy servers
present a single interface to users,
accelerate the encryption process,
perform data compression, and provide an additional layer of security.
Gateway boxes are the network
element that SSL vendors sell. They
streamline both ingoing and outgoing traffic and provide proxying,
authentication, and authorization.
Authentication and authorization. SSL VPNs check usernames,
passwords, and digital certificates to
authenticate visitors. The gateways
then consult a database to determine
the level of network access the user
should have.
This gateway functionality consolidates the combination of firewalls,
extranets, and other technologies previously used to provide authentication. This not only simplifies the
process but also reduces the amount
of equipment that companies must
manage.
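As a rough illustration of that lookup, the toy policy table below shows the kind of role-to-resource check a gateway performs once a visitor is authenticated. The role and resource names are invented for this example and do not come from any particular product.

# Toy access-control lookup of the kind an SSL VPN gateway performs
# after authentication. Roles and resources are invented examples.
ACCESS_POLICY = {
    "hr_vice_president": {"employee_salary_db", "email", "intranet"},
    "accountant":        {"financial_records", "email", "intranet"},
    "outside_visitor":   {"intranet"},
}

def authorize(role: str, resource: str) -> bool:
    """Return True if company policy grants this role the resource."""
    return resource in ACCESS_POLICY.get(role, set())

# The HR vice president can reach the salary database; a visitor cannot.
assert authorize("hr_vice_president", "employee_salary_db")
assert not authorize("outside_visitor", "employee_salary_db")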
Challenges
Many businesses might not want
to change from the IPsec VPNs
they’ve spent money on, at least until
they have recovered their investments. Users of new SSL VPNs also
face a learning curve.
Companies need different sets of
rules for different users, to provide
them with varying degrees of network access. For example, corporate
accountants should have access to
financial records, but outside visitors
should not. If an enterprise doesn’t
have such rules in place already,
designing and implementing them
can require considerable work.
In addition, companies must
choose among multiple approaches
to limiting access, including mapping certain users to parts of a network or building tunnels to specific
applications, servers, ports, or filters.
SSL VPNs used to be considerably
slower than their IPsec-based counterparts. SSL works with TCP, in
which data recipients must acknowledge every incoming packet. IPsec,
on the other hand, works with the
User Datagram Protocol, which is
quicker because it doesn’t require
acknowledgments. Also, many
backbone providers give UDP traffic higher priority than TCP communications.
Now, though, SSL VPNs use network optimization and data compression to improve performance.
SSL VPNs are also faster than before
because they can use the IETF’s relatively new Datagram Transport Layer
Security protocol, which runs over
UDP, said Cisco product marketing
manager Mark Jansen.
Over time, SSL VPNs will gain
additional capabilities. For
example, Forrester’s Silva said
they are now able to manage users’
connections and preserve their sessions as, for example, they roam
from a Wi-Fi network to a public cellular network and back.
IPsec VPNs will still be sufficient
for communications between an IT-managed machine and a network, or
for hub-and-spoke communications
within a network.
Over time, though, in an increasingly mobile world, said Forrester’s
Silva, SSL will become the obvious
choice for VPNs. ■
Karen Heyman is a freelance technology writer based in Santa Monica, California. Contact her at klhscience@yahoo.com.
Editor: Lee Garber, Computer,
l.garber@computer.org
NEWS BRIEFS
Robot Adapts to
Changing Conditions
A US researcher has developed a robot that can adapt
to changes to either its environment or its own structure, even if it is damaged.
University of Vermont assistant professor Josh Bongard said his Starfish
robot doesn’t regenerate a new limb if
one is destroyed, like its biological
counterpart, but rather adapts its
behavior in response to damage and
learns to walk in new ways.
“The approach is unique in that
the robot adapts by creating a computer simulation of itself: how many
legs it has, how those legs are
arranged, whether the ground over
which it’s moving is flat or rocky,” he
explained. “By creating such a simulation, it can mentally rehearse ways
of walking before executing them.”
“This is particularly useful in dan-
gerous situations,” he said. “If the
robot is perched on the edge of a cliff,
it shouldn’t simply try to walk forward and see what happens. It should
build an understanding of its immediate environment first.”
The technology could be used in
disasters or dangerous situations.
This work is also important to the
creation of self-configuring robots that
could change locomotion, for example from crawling to walking, based
on various conditions, said Bongard,
whose work is funded largely by the
US National Science Foundation.
Working with Cornell University’s
Computational Synthesis Laboratory (CCSL), researchers used 3D
printing technology to fabricate the
battery-powered robot’s plastic body.
Starfish contains a small PC-104
computer with a Pentium 166-MHz
processor, 64 megabytes of RAM, and 512 megabytes of compact flash memory.

The Starfish robot uses sensors and simulation software to adapt to changes to either its environment or its own structure, such as the loss of a limb. Starfish rocks back and forth to enable sensors to determine the nature of the surrounding terrain or the robot's current physical structure. It then simulates possible movements to determine which will best fulfill its mission.
It gets a sense of its environment
and physical structure by rocking
back and forth. This activates joint-angle sensors, which determine how
the robot is tilting in response to the
surrounding terrain or missing parts.
The robot collects data on its flash
card and uploads the information to
an external PC for processing by the
simulation software. The external PC
sends the robot acquired information
about its actions and behavior.
Other hardware includes a
Diamond DMM-32X-AT data-acquisition board, used for collecting joint-angle sensor data; and a
Pontech SV203 servo control board,
which drives Starfish’s hinge motors.
The researchers program the robot
with basic information about its
design, such as the mass and shape of
its parts. Starfish then builds a virtual
model of itself, using Open Dynamics
Engine software, which integrates the
equations of motion for a connected
set of physical objects, such as those
that make up the robot.
Starfish can thus consider the
physical repercussions of a given set
of torques applied over time, such as
whether a specific motor program—
an application that determines how
the robot will move—will cause it to
go forward or simply shake in place.
According to Bongard, his team
layered an optimization technique
on top of the simulator that lets the
robot determine which of its large set
of motor programs will best fulfill its
mission.
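The selection step can be pictured with a short sketch: rehearse every candidate motor program against the self-model and execute only the winner. This is not Bongard's code; simulate() below is a hypothetical stand-in for the Open Dynamics Engine model, and all numbers are invented.

def simulate(motor_program, self_model):
    """Hypothetical: predicted forward distance for a torque sequence."""
    # Toy scoring rule used purely for illustration.
    return sum(torque * weight for torque, weight in zip(motor_program, self_model))

def best_motor_program(candidates, self_model):
    # Mentally "rehearse" every program before executing any of them.
    return max(candidates, key=lambda program: simulate(program, self_model))

self_model = [0.8, 1.0, 0.0, 0.9]      # e.g., per-leg effectiveness with one leg damaged
candidates = [[0.2, 0.4, 0.4, 0.1],    # candidate torque patterns
              [0.5, 0.1, 0.0, 0.5],
              [0.1, 0.1, 0.9, 0.1]]
print(best_motor_program(candidates, self_model))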
“[Other] lines of inquiry that I’m
pursuing involve looking at ways to
get teams of robots to collectively
simulate their surroundings, and
then share those simulations,” he
added. “In this way, robots could
work together to solve tasks that are
beyond any one of them.” ■
News Briefs written by Linda Dailey
Paulson, a freelance technology writer
based in Ventura, California. Contact
her at ldpaulson@yahoo.com.
New Approach Will Speed Up USB
Intel is leading an effort to create a
new, much faster version of the
Universal Serial Bus standard used
for connecting computers to peripherals and other devices. Proponents
say faster connections are necessary
to transfer the large amounts of data-intensive digital content found today
in a reasonable amount of time
between computers and devices.
To develop SuperSpeed USB technology, Intel has formed the USB 3.0
Promoters Group with members
Hewlett-Packard, Microsoft, NEC,
NXP Semiconductors, and Texas
Instruments.
Once they design the standard, the
USB Implementers Forum (www.usb.org) will handle tasks such as
product-compliance testing and certification, as well as trademark protection and user education, said Jeff
Ravencraft, Intel technology strategist and Promoters Group chair.
Proponents plan for USB 3.0 to
provide 10 times the bandwidth of
USB 2.0, which would raise speeds
from 480 megabits per second to
4.8 Gbps.
The goal is to reduce the wait that
consumers currently experience
moving rich digital video, audio,
and other content between their PCs
and portable devices, explained
Ravencraft.
USB 3.0 users could transfer a 27-gigabyte high-definition movie to a portable media player in 70 seconds, as opposed to the 14 minutes required with USB 2.0.
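A quick back-of-the-envelope check (my own arithmetic, assuming decimal gigabytes and the raw signaling rates, not figures from the Promoters Group) shows that the quoted times imply effective throughput of roughly half to two-thirds of the raw bandwidth, which is consistent with normal protocol overhead.

# Compare the quoted transfer times with the ideal times at the raw line rates.
movie_bits = 27e9 * 8   # 27-gigabyte movie, decimal gigabytes assumed

for name, raw_gbps, quoted_s in [("USB 2.0", 0.48, 14 * 60),
                                 ("USB 3.0", 4.8, 70)]:
    ideal_s = movie_bits / (raw_gbps * 1e9)   # time at the raw signaling rate
    efficiency = ideal_s / quoted_s           # fraction of raw rate actually achieved
    print(f"{name}: ideal {ideal_s:.0f} s, quoted {quoted_s} s, "
          f"about {efficiency:.0%} of raw bandwidth")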
Interface Makes a Computer Desktop Look like a Real Desktop

Anand Agrawala has something for computer users who like their work in piles rather than files.
Agrawala, cofounder and CEO of BumpTop, has created a computer desktop environment that lets users place their work in files or piles, represented by graphics that make the environment look like a real desktop. They can also manipulate and move their material and convert it from piles to files and back again, just as they would with physical items.
This enables them to function more comfortably by letting them handle their computer documents in the same way they work with their physical documents, explained Agrawala, who created the new interface during his master's degree studies at the University of Toronto.
The interface also gives the desktop a third dimension, which provides more information density than conventional environments.
To add functionality to the standard desktop, Agrawala used gaming technology. For example, the system employs the Ageia PhysX physics engine to handle the graphics in a stable manner. BumpTop uses a gaming engine to run the complex mathematics that controls the movement of documents on the desktop and renders them properly. "Say you want to move an object by throwing it into the corner. The math determines how fast an object will slow down and the friction that will come into play," Agrawala explained.
He said the BumpTop software works in the background when other applications are open and does not unduly tax the host computer's CPU. BumpTop hooks into the operating and file systems so that it accurately reflects when files are created, moved, or modified.
Although the system currently works only with Windows, Agrawala said he plans to make it support multiple platforms.
Agrawala said he is still working on a business model for BumpTop and is developing an alpha version of the system. He plans to release a finished product in the near future. ■

Anand Agrawala (right), cofounder and CEO of BumpTop, demonstrates his new computer desktop environment, which lets users place their work in files or piles, shown by graphics that make the interface look like a real desktop. Users can also manipulate and move their material, just as they would with physical items.
The new technology might even
allow a device to send high-definition
signals to a TV, noted analyst Carl
D. Howe, a principal with consultancy Blackfriars Communications.
USB 3.0 would also work with
USB drives, camcorders, external
hard drives, MP3 players, portable
telephones, and other flash-capable
portable devices, Ravencraft said.
It will be backward compatible with
earlier USB versions, using the same
connectors and programming models,
and will support multiple communications technologies, he added.
USB 3.0 will improve performance
in part by using two channels to sep-
arate data transmissions and
acknowledgements, thereby letting
devices send and receive simultaneously, rather than serially.
Unlike USB 2.0, the new technology won’t continuously poll devices
to find out if they have new data to
send, which should reduce power
consumption.
USB 3.0 will also add quality-of-service capabilities to guarantee specified levels of resources to high-priority traffic.
Ravencraft declined to provide
more detail, explaining that “the
specification is still being defined.”
Intel expects the final standard to
be finished by mid-2008 and to
appear in peripherals and devices in
2009 or 2010.
USB’s main competitors among
external bus technologies are FireWire (IEEE 1394), which has a new
standard that runs at 3.2 Gbps; and
eSATA (external serial advanced
technology attachment), which offers
up to 3 Gbps now and a planned 6
Gbps by mid-2008.
Said Howe, “My guess is that video
recorders and the like may go with
FireWire, which can run … without imposing the big computational
load that USB does. And there are
other systems, like wireless USB, that
promise similar speeds but without
the wires.” ■
Standardization Comes
to Virtualization
Several companies are cooperating to develop virtualization
standards. This would enable
products from virtualization firms to
work together, an important development for both vendors and users of
the increasingly popular technology.
Virtualization software enables a
single computer to run multiple operating systems simultaneously, via the
use of virtual machines (VMs). The
products let companies use a single
server for multiple tasks that would
normally have to run on multiple servers, each working with a different
OS. This reduces the number of servers a company needs, thereby saving
money and using resources efficiently.
Currently, different vendors’ VMs
use their own formats and won’t
necessarily run on other virtualization vendors’ platforms, explained
Simon Crosby, chief technology officer with virtualization vendor
XenSource. Different vendors’ products also use separate formats for
storing VMs on a user’s hard drive.
To overcome this, virtualization
vendors Microsoft, VMware, and
XenSource, along with server makers Dell, Hewlett-Packard, and IBM
and the Distributed Management
Task Force are working on the
Open Virtual Machine Format. The
DMTF is an international industry
organization that develops management standards.
OVF defines an XML wrapper
that encapsulates multiple virtual
machines and provides a common
interface so that the VMs can run on
any virtualization system that supports OVF. A vendor could deliver
an OVF-formatted VM to customers
who could then deploy it on the virtualization platform of their choice.
For example, a VM packaged for a
VMware system could run on a
XenSource system.
OVF utilizes existing tools to combine multiple virtual machines with
the XML wrapper, which gives the
user’s virtualization platform a package containing all required installation and configuration parameters
for the VMs. This lets the platforms
run any OVF-enabled VM.
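To make the idea of an XML wrapper concrete, the sketch below builds a simplified, OVF-inspired envelope with Python's standard library. The element and attribute names are invented for illustration; they are not the actual OVF schema, which was still being defined when this article went to press.

import xml.etree.ElementTree as ET

# Build a simplified, OVF-inspired envelope (illustrative names only).
envelope = ET.Element("Envelope")

# Describe the disk image and virtual hardware each VM needs, so any
# platform that understands the wrapper can deploy the whole package.
for vm_name, memory_mb, disk in [("web-frontend", 1024, "web.vmdk"),
                                 ("database", 4096, "db.vmdk")]:
    vm = ET.SubElement(envelope, "VirtualSystem", {"id": vm_name})
    ET.SubElement(vm, "Memory", {"megabytes": str(memory_mb)})
    ET.SubElement(vm, "Disk", {"file": disk})

print(ET.tostring(envelope, encoding="unicode"))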
Vendors are cooperating in this
effort because users demand interoperability among different products for
running multiple VMs on one computer, noted Laura DiDio, vice president and research fellow for Yankee
Group, a market research firm. More-
over, interoperability would increase
the overall use of virtualization.
DiDio said Yankee Group research
shows 96 percent of all organizations
plan to adopt virtualization, and
one-third of them will use multiple
virtualization products to get the
exact array of capabilities they need.
Standards-based interoperability will
thus be critical, she explained.
In addition to enabling interoperability, OVF provides security by
attaching digital signatures to virtual
machines. This verifies that the OVF
file actually came from the indicated
source and lets users’ systems determine whether anyone has tampered
with the VMs.
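The sketch below illustrates only the integrity half of that idea: recompute a digest of a hypothetical package file and compare it with the digest its publisher advertised. A full implementation would additionally verify a certificate-backed signature over that digest; the file name and expected value here are placeholders.

import hashlib

def digest(path: str) -> str:
    """SHA-256 digest of a file, read in 1-Mbyte chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0" * 64                        # placeholder for the published digest
actual = digest("appliance-package.xml")   # hypothetical packaged VM description
print("tampered" if actual != expected else "intact")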
There is no firm timeline for adopting and implementing the OVF yet.
“I’m optimistic that we’ll see something in the first half of 2008,” said
Winston Bumpus, DMTF president
and VMware’s director of standards
architecture.
The task force is also working on
standards for other aspects of virtualization, added Bumpus. ■
Editor: Lee Garber, Computer,
l.garber@computer.org
COMPUTING PRACTICES
Examining the
Challenges of
Scientific Workflows
Workflows have emerged as a paradigm for representing and
managing complex distributed computations and are used to
accelerate the pace of scientific progress. A recent National
Science Foundation workshop brought together domain,
computer, and social scientists to discuss requirements of future
scientific applications and the challenges they present to current
workflow technologies.
Yolanda Gil and
Ewa Deelman
University of Southern
California
Mark Ellisman
University of California,
San Diego
Thomas Fahringer
University of Innsbruck
Geoffrey Fox and
Dennis Gannon
Indiana University
Carole Goble
Manchester University
Miron Livny
University of Wisconsin-Madison
Luc Moreau
University of Southampton
Jim Myers
National Center for
Supercomputing Applications
Significant scientific advances are increasingly achieved through complex
sets of computations and data analyses. These computations may comprise thousands of steps, where each step might integrate diverse models and data sources that different groups develop. The applications and
data might also be distributed in the execution environment. The assembly and management of such complex distributed computations present many
challenges, and increasingly ambitious scientific inquiry is continuously pushing
the limits of current technology.
Workflows have recently emerged as a paradigm for representing and managing complex distributed scientific computations, accelerating the pace of scientific progress.1-6 Scientific workflows orchestrate the dataflow across the individual
data transformations and analysis steps, as well as the mechanisms to execute
them in a distributed environment.
Each step in a workflow specifies a process or computation to be executed (for
instance, a software program or Web service). The workflow links the steps
according to the data flow and dependencies among them. The representation of
these computational workflows contains many details required to carry out each
analysis step, including the use of specific execution and storage resources in distributed environments. Figure 1 shows an example of a high-level workflow
developed within the context of an earthquake science application, CyberShake
(www.scec.org), which generates shake maps of Southern California.7
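In its simplest form, such a workflow is a directed acyclic graph of steps connected by data dependencies. The sketch below, with step names loosely invented in the spirit of the CyberShake example, shows how a valid execution order falls out of those dependencies; an actual workflow system would additionally map each step onto distributed compute and storage resources.

from graphlib import TopologicalSorter  # Python 3.9+

# Each step lists the steps whose outputs it consumes (invented names).
workflow = {
    "extract_velocity_model": set(),
    "generate_ruptures":      set(),
    "simulate_ground_motion": {"extract_velocity_model", "generate_ruptures"},
    "compute_hazard_map":     {"simulate_ground_motion"},
}

# Print one dataflow-respecting execution order.
for step in TopologicalSorter(workflow).static_order():
    print("run:", step)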
Workflow systems exploit these explicit representations of complex computational processes at various levels of abstraction to manage their life cycle and
automate their execution. In addition to automation, workflows can provide the
information necessary for scientific reproducibility, result derivation, and result
sharing among collaborators. By providing automation and enabling reproducibility, they can accelerate and transform the scientific-analysis process.
Figure 1. A visual representation of a high-level workflow developed within the context of an earthquake science application. Double-lined nodes indicate computations that the system will parallelize automatically.

Workflow systems have demonstrated these capabilities in a variety of applications where workflows comprising thousands of components processed large,
distributed data sets on high-end computing resources.
Some workflow systems are deployed for routine use in
scientific collaboratories—virtual entities that allow scientists to collaborate with each other across organizations and physical locations. Figure 2 shows an image of
the Orion Nebula that the Montage8 application produced. Montage uses workflow technologies9 to generate science-grade mosaics of the sky. Researchers recently
used such mosaics to verify a bar in the M31 galaxy.10
Much research is under way to address issues of creation, reuse, provenance tracking, performance optimization, and reliability. However, to fully realize the
promise of workflow technologies, we must meet many
additional requirements and challenges. Scientific applications are driving workflow systems to examine issues
such as supporting dynamic event-driven analyses, handling streaming data, accommodating interaction with
users, intelligent assistance and collaborative support
for workflow design, and enabling result sharing across
collaborations.
As a result, we need a more comprehensive treatment
of workflows to meet the long-term requirements of scientific applications. The National Science Foundation’s
2006 Workshop on Challenges of Scientific Workflows
brought together domain, computer, and social scientists
to discuss requirements of future scientific applications
and the challenges they present to current workflow technologies. As part of the workshop, we examined application requirements, workflow representations, dynamic
workflows, and system-related challenges.
Figure 2. The Montage application uses workflow technologies
to generate science-grade mosaics of the sky.
APPLICATION REQUIREMENTS

Collaborations
Combining distributed data, computation, models, and instruments at unprecedented scales can enable transformative research. The analysis of large amounts of widely distributed data is becoming commonplace. This data, and the experimental apparatus or simulation systems that produce it, typically belong to collaborations rather than individuals. Within these collaborations, various individuals are responsible for different aspects of data acquisition, processing, and analysis, and entire projects often generate publications. Such environments demand tools that can orchestrate the steps of scientific discovery and bridge the differing expertise of collaboration members.
Many disciplines benefit from the use of workflow-management systems to automate such computational activities, including astronomy, biology, chemistry, environmental science, engineering, geosciences, medicine, physics, and social sciences.
The scientific community perceives that workflows are important in accelerating the pace of scientific discoveries. Today, complex scientific analyses increasingly require tremendous amounts of human effort and manual coordination. Thus, researchers need more effective tools to prevent being inundated by the ever-growing data and associated computational processing tasks.

Reproducibility
The NSF workshop participants identified reproducibility of scientific analyses and processes as an important application requirement. Reproducibility is at the core of the scientific method, enabling scientists to evaluate the validity of each other's hypotheses and providing the basis for establishing known truths. Reproducibility requires rich provenance information so researchers can repeat techniques and analysis methods to obtain scientifically similar results.
Today, reproducibility for complex scientific applications is virtually impossible. Many scientists are involved, and the provenance records are highly fragmented, existing in e-mails, wiki entries, database queries, journal references, codes, and other sources for communication. All this information, often stored in various locations and forms, must be appropriately indexed and made available for referencing. Without tracking and integrating these crucial bits of information with the analysis results, reproducing important discoveries involving complex computations can be impractical or even impossible.
To support reproducibility, workflow-management systems must capture and generate provenance information as a critical part of the workflow-generated data. Workflow-management systems must also consume the provenance information associated with input data and associate that information with the resulting data products. Systems must associate and store provenance with the new data products and contain enough details to enable reproducibility.
Scientists also need interoperable, persistent repositories of data and analysis definitions, with linkage to open data and publications, as well as to the algorithms and applications used to transform the data. Workflow systems must complement existing data repositories with provenance and metadata repositories that enable the discovery of the workflows and application components used to create the data. Two important concerns for scientists in these highly collaborative endeavors are credit assignment and recognition of individual contributions.

Flexible environments
Systems must be flexible in terms of supporting both common analyses that many scientists perform, as well as unique analyses. Researchers should find it easy to set up and execute routine analyses based on common cases. At the same time, individual scientists should be able to steer the system to conduct unique analyses and create novel workflows with previously unseen combinations and configurations of models.
From an operational perspective, there's a need to provide secure, reliable, and scalable solutions. Scientists must trust that their input and output data is secure and free from inappropriate data access or malicious manipulation. Current infrastructure must incorporate trust and reputation systems for data providers.
Finally, scientists need easy-to-use tools that provide intelligent assistance for such complex workflow capabilities. Automation of low-level operational aspects of workflows is a key requirement. Success will depend on interaction modalities that hide unnecessary complexities and speak the scientist's language.

SHARED WORKFLOW DESCRIPTIONS
Given the broad practice and benefits of sharing instruments, data, computing, networking, and many other science products and resources, why don't researchers widely capture and share scientific computations and processes as well? Given the exponential growth in computing, sensors, data storage, network, and other performance elements, why is the growth of scientific data analysis and understanding not proportional?
Process sharing
Scientists have always relied on technology to share information about experiments, from pen and paper to digital cameras, e-mail, the Web, and computer software. Workflow description and execution capabilities offer a new way of sharing and managing information to electronically capture full processes and share them for future reference and reuse.
This new way of sharing information—agreeing on processes' semantics and the infrastructure to support their execution—continues the historic push for making representations explicit and actionable and reducing the barriers to coordination. We should encourage scientists to bring workflow representations to their practices and share the descriptions of their scientific analyses and computations in ways that are as formal and explicit as possible. However, no commonly accepted and sufficiently rich representations exist in the scientific community.

Representations
Workflow representations must accommodate scientific process descriptions at multiple levels. For instance, domain scientists might want a sophisticated graphical interface for composing relatively high-level scientific or mathematical steps, whereas the use of a workflow language and detailed specifications of data movement and job execution steps might concern computer scientists.
To link these views and provide needed capabilities, workflow representations must include rich descriptions that span abstraction levels and include models of how to map between them. Further, to support the end-to-end description of multidisciplinary, community-scale research, we need definitions of workflow and provenance that are broad enough to describe workflows-of-workflows that are linked through reference data, the scientific literature, and manual processes in general.
Other important and necessary dimensions of abstraction are experiment-critical versus non-experiment-critical representations, where the former refers to scientific issues and the latter is more concerned with operational matters.

Abstractions
Workflow representations must incorporate rich information about analysis processes to support discovery, creation, merging, and execution. These activities will become a natural way to conduct experiments and share scientific methodology within and across scientific communities.
Automation. Wherever possible, workflow representations need to support automation of the workflow creation and management processes. This capability requires rich semantic representations of requirements and constraints on workflow models and components. With semantic descriptions of the data format and type requirements of a component, it's possible to incorporate automated reasoning and planning capabilities that could automatically add data conversion and transformation steps. Similarly, rich descriptions of the execution requirements of each workflow component would enable automated resource selection and dynamic optimizations.
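As a rough illustration of the kind of automation such semantic descriptions could enable, the sketch below checks whether one component's declared output format matches the next component's declared input format and, if not, inserts a conversion step drawn from a small registry. The format names, registry, and function are hypothetical, invented only for this example, and are not taken from any particular workflow system.

# Hypothetical sketch: insert a data-conversion step between two workflow
# components whose declared I/O formats (their semantic descriptions) differ.

CONVERTERS = {
    # (from_format, to_format) -> name of a converter component
    ("FITS", "CSV"): "fits_to_csv",
    ("CSV", "HDF5"): "csv_to_hdf5",
}

def link_components(producer, consumer, workflow):
    """Append consumer to the workflow, adding a converter step if needed."""
    out_fmt = producer["output_format"]
    in_fmt = consumer["input_format"]
    if out_fmt != in_fmt:
        converter = CONVERTERS.get((out_fmt, in_fmt))
        if converter is None:
            raise ValueError(f"no converter from {out_fmt} to {in_fmt}")
        workflow.append({"step": converter,
                         "input_format": out_fmt,
                         "output_format": in_fmt})
    workflow.append({"step": consumer["name"],
                     "input_format": in_fmt,
                     "output_format": consumer["output_format"]})

# Example: the planner silently adds fits_to_csv between the two science steps.
workflow = [{"step": "extract_sources", "input_format": "RAW", "output_format": "FITS"}]
link_components(workflow[0],
                {"name": "cluster_sources", "input_format": "CSV", "output_format": "HDF5"},
                workflow)

A real planner would reason over much richer descriptions (units, coordinate systems, semantic types), but the principle of matching declared requirements and inserting shims automatically is the same.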
Levels of description. Abstractions would let scientists identify what levels of description are useful to share in their workflows, and they could package such descriptions as a self-contained sharable object that other scientists could then refine and instantiate. We need refinement and abstraction capabilities for all first-class entities that workflow systems must manipulate: workflow scripts (regarded as specifications of future execution), provenance logs (descriptions of process and data history), data, and metadata. There's relevant work in related fields of computer science, such as refinement calculi, model-driven architectures, and semantic modeling, but researchers haven't applied these techniques widely to scientific workflows, which are potentially large scale, might involve multiple technologies, and must operate on heterogeneous systems.
The sophistication of required descriptions depends on the workflow capabilities needed. For example, a workflow that adapts dynamically to changes in environment or data values requires formal and comprehensive descriptions to enable automatic adaptation. Even for a human to make choices related to making changes to a workflow would require access to a broad variety of descriptions.

Scientific versus business workflows
Understanding the differences between scientific workflows and practices and those used in business could yield useful insights. On the one hand, scientific and business workflows aren't obviously distinguishable, since both might share common important characteristics. Indeed, the literature contains examples of workflows in both domains that are data-intensive and highly parallel. On the other hand, scientific research requires flexible design and exploration capabilities that appear to depart significantly from the more prescriptive use of workflows in business. Workflows in science are a means to support detailed scientific discourse as well as a way to ensure repeatable processes.
Another distinctive issue of scientific workflows is the variety and heterogeneity of data within a single workflow. For example, a scientific workflow might involve numeric and experimental data in proprietary formats
(such as those used for raw data that scientific instruments involved in a process produce), followed by
processed data resulting in descriptions related to
scientific elements, leading to textual, semistructured,
and structured data, and formats used for visual
representation.
To clarify the research issues in developing scientific
workflow capabilities, the community needs to identify
where there are real differences between scientific and
business activities, beyond domain-specific matters. It’s
important to balance the desire for sharing workflow
information against the dangers of premature standardization efforts that might constrain future requirements and capabilities.
Workflow variants
Most scientific activity consists of exploration of variants and experimentation with alternative settings, which would involve modifying workflows to understand their effects and provide a means for explaining those effects. Hence, an important challenge in science is representation of workflow variants, which aims at understanding the impact that a change has on the resulting data products as an aid to scientific discourse.
While acknowledging that sharing representations is important to the scientific process, the workshop group recognized that workflows must accommodate multiple collaboration and sharing practices. In some cases, it's suitable to share workflows, but not data. In other cases, scientists want to share an abstract description of the scientific protocol without actually communicating details, parameters, and configurations, which are their private expertise. In other situations, a description of a specific previous execution (provenance) is desirable, with or without providing execution details.

DYNAMIC WORKFLOWS
How can workflows support both the exploratory nature of science and the dynamic processes involved in scientific analysis?

Changing context and infrastructure
Given that both the user's experimental context and the distributed infrastructure that the workflows operate over are in flux, the notion of static workflows is an odd one. The vision of supporting dynamic, adaptive, and user-steered workflows is to enable and accelerate distributed and collaborative scientific methodology via rapid reuse and exploration accompanied by continuous adaptation and improvement. Reproducibility becomes ever more elusive in this kind of setting. The challenge is to develop mechanisms to create, manage, and capture dynamic workflows to allow for reproducibility of significant results.
Scientific practice will routinely give rise to dynamic workflows that base decisions about subsequent steps on the latest available information. Researchers might need to dynamically design a workflow to look at the initial steps' results before making a decision on carrying out later analysis steps. For example, examining the results of an image's initial preprocessing might require subsequent steps to look at specific areas that preprocessing identified.

External events
A dynamic workflow could also result from an external event changing the workflow's basic structure or semantics. For example, in severe-storm prediction, data-analysis computations might search for patterns in radar data. Depending upon the specific pattern of events, enacting different branches of a storm-prediction workflow might require significant computational resources on demand. In this case, the workflow must adapt to changes in storm intensity or resource availability.
Some experimental regimens might draw on workflows that are heuristic or employ untried activities; thus, these workflows might break down or fail during their execution, necessitating fault diagnosis and repair. Two workflows could also affect each other by sharing results, being classified as dynamic as they respond to events arising in each other's execution.
Finally, some scientific endeavors are large scale. They involve large teams of scientists and technicians, and they engage in experimental methods or procedures that take a long time to complete and require human intervention and dynamic steering throughout the process. For example, an astrophysical study of deep-space phenomena might require the use and coordination of multiple observation devices operating in different spaces, capturing data at different frequencies or modalities.
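In outline, the event-driven adaptation described above amounts to a dispatcher that maps a detected pattern to a workflow branch and a resource request. The branch names, thresholds, and the submit callback in the following sketch are invented for illustration and do not come from any real storm-prediction system.

# Hypothetical sketch of event-driven branch selection in a dynamic workflow.

def choose_branch(radar_pattern):
    """Map a detected radar pattern to a workflow branch and a resource request."""
    severity = radar_pattern.get("severity", 0.0)   # assumed score between 0.0 and 1.0
    if severity > 0.8:
        # Severe signature: run the full high-resolution ensemble on demand.
        return {"branch": "high_resolution_ensemble", "cpu_hours": 5000}
    if severity > 0.4:
        return {"branch": "regional_forecast", "cpu_hours": 500}
    return {"branch": "routine_monitoring", "cpu_hours": 10}

def on_event(radar_pattern, submit):
    """Called whenever the pattern-detection step emits a new event."""
    plan = choose_branch(radar_pattern)
    # 'submit' stands in for whatever mechanism the workflow engine uses to
    # enact a branch and negotiate resources with the infrastructure.
    submit(plan["branch"], cpu_hours=plan["cpu_hours"])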
Workflow life cycle
The management of dynamic workflows is complex
due to their evolution and life cycles. As Figure 3 shows,
there’s no beginning or end to the life-cycle process of a
workflow—scientists can start at any point and flow
through the figure in any direction. They might build or
assemble a workflow, refine one that a shared repository has previously published, run their design, evolve
it, run it again, share fragments of it as they go along,
find other fragments they need, run it a few more times,
and learn from the protocol they're developing.

Figure 3. A view of the workflow life cycle, where the processes of workflow generation (build and refine), sharing (via shared repositories), running, and learning are continuous.
They might settle on the workflow and run it many
times, learning from the results produced, or they might
run it just once because that’s all they need. While running, the workflows could adapt to external events and
user steering. The results of the whole activity feed into
the next phases of investigation. The user is ultimately
at the center, interacting with the workflows and interpreting the outcomes.
Supporting scientists in complex exploratory processes
involving dynamic workflows is an important challenge.
Researchers will need to design a human-centered decision-support system that accommodates the information
needs of a scientist tracking and understanding such complex processes. The workflow will need appropriate user
interfaces that enable scientists to browse/traverse, query,
recapitulate, and understand this information. Simplifying the exploratory process also requires novel and scalable means for scientists to manipulate the workflows,
explore slices of the parameter space, and compare the
results of different configurations.
Learning workflow patterns
An interesting direction for future research explores
the question of how to improve, redesign, or optimize
workflows through data mining of workflow life-cycle
histories to learn successful (and unsuccessful) workflow patterns and designs, and assist users in following
(or avoiding) them. Researchers can extract one kind of
pattern from successful execution trails and use the
information to build recommendation systems. For
example, if a model M is added, the system could suggest additional models that other people often use
together with M in a workflow, or suggest values commonly used for the parameters in the model. Researchers
could extract another kind of pattern from unsuccessful
trails. These patterns can, for example, help identify
incompatible parameter settings, unreliable servers or
services, or gross inefficiencies in resource usage.
Researchers can subsequently analyze, reenact (reproduce), and validate workflow patterns in order to facilitate their reuse, continuous improvement, and
redeployment into new locations or settings.
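A minimal sketch of the first kind of mining, counting which models co-occur with a given model M in successful runs and recommending the most frequent companions, might look like the following; the data layout and names are hypothetical.

from collections import Counter

def recommend_models(successful_runs, model_m, top_k=3):
    """Suggest models that most often appear alongside model_m in successful runs.

    successful_runs: list of sets, each holding the models used in one successful run.
    """
    companions = Counter()
    for run in successful_runs:
        if model_m in run:
            companions.update(run - {model_m})
    return [model for model, _ in companions.most_common(top_k)]

# Example: three successful trails mined from provenance records.
trails = [{"M", "regrid", "bias_correct"},
          {"M", "regrid", "visualize"},
          {"regrid", "visualize"}]
print(recommend_models(trails, "M"))   # ['regrid', ...]

Parameter-value suggestions and the failure patterns mentioned above could be mined from the same trails in an analogous way.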
SYSTEM-LEVEL WORKFLOW MANAGEMENT
Given the continuous evolution of infrastructure and
associated technology, how can we ensure reproducibility of computational analyses over a long period of time?
Engineering reproducibility
A key challenge in scientific workflows is ensuring
engineering reproducibility to enable the reexecution of
analyses, and the replication of results. Scientific reproducibility implies that someone can follow the general
methodology, relying on the same initial data, and
obtain equivalent results. Engineering reproducibility
requires more knowledge of the data manipulations, of the actual software and execution environment (hardware, specific libraries), to replicate the results bit-by-bit. Researchers need the former capability when they want to validate each other's hypotheses, whereas the latter is beneficial when they find unusual results or errors and need to trace and understand them.
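To make the distinction concrete, engineering reproducibility implies recording something like the following alongside the provenance log; the field names are illustrative only, not a proposed standard.

import hashlib
import platform
import sys

def environment_record(step_name, input_files, code_version, library_versions):
    """Capture enough of the execution environment to support bit-by-bit replay."""
    def sha1(path):
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()
    return {
        "step": step_name,
        "code_version": code_version,            # e.g., a repository revision
        "inputs": {path: sha1(path) for path in input_files},
        "python": sys.version,
        "platform": platform.platform(),         # OS, kernel, architecture
        "libraries": library_versions,           # e.g., {"numpy": "1.0.4"}
    }

record = environment_record("mosaic", [], "r1234", {"numpy": "1.0.4"})

Scientific reproducibility needs only the method and the input data; a record such as this is what makes the bit-by-bit replay discussed above possible.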
System stability
Providing a stable view in spite of continuous technology and platform changes at the system level will be
challenging. Researchers must design the underlying execution system to provide a stable environment for the
software layers managing the high-level scientific
process. It must be possible to reexecute workflows
many years later and obtain the same results. This
requirement poses challenges in terms of creating a stable layer of abstraction over a rapidly evolving infrastructure while providing the flexibility needed to
address evolving requirements and applications and to
support new capabilities.
To provide consistent and efficient access to resources,
resource management must consider both physical
resources (computers, networks, and data servers) and
logical resources (data repositories, programs, application components, and workflows). Uniform interfaces
should inform both. Enhancing resource descriptions
with semantic annotations can enable easier, more organized, and possibly automated provisioning, provenance, configuration, and deployment of new resources.
Extending current information services with meaningful
semantic descriptions of resources should allow for semiautomatic discovery, brokering, and negotiation.
Dynamic configuration and life-cycle management of
resources should minimize human interaction.
Researchers have made some efforts to provide semiautomatic discovery and brokering of physical resources
and management of software components that might
become part of scientific-workflow environments.
However, there’s still much opportunity for improvements, since most existing systems require manual or
semimanual deployment of software components and
force application builders to hard-code software component locations on specific resources into their workflows.
Quality of service
Workflow end users frequently need to specify quality-of-service (QoS) requirements. The underlying runtime environment should then guarantee, or at least maintain, these requirements on a best-effort basis. However, current systems are mostly restricted to best-effort optimizations for time-based criteria such as reducing overall execution time or maximizing bandwidth. Researchers must address several problems to overcome current limitations.
We must extend QoS parameters beyond time-based criteria to cover other important aspects of workflow behavior such as responsiveness, fault tolerance, security, and costs. To provide a basis for interoperable workflow environments or services, this effort will require collaborative work on the definition of QoS parameters that scientists can widely accept.
Coping with multicriteria optimization or planning might require radically changing current optimization and planning approaches. Many systems exist for single- or bi-criteria optimization, but few systems tackle multicriteria optimization problems. There's no ready-to-use methodology that can deal with this problem in an efficient and effective way; thus, many opportunities for research exist.
Reservation mechanisms will be an important tool in developing runtime environment support for QoS. Both immediate and advance reservations can make the dynamic behavior of infrastructures more predictable, an important prerequisite to guarantee QoS parameters such as responsiveness and dependability. Moreover, advance reservation can also simplify the scheduling of workflow tasks to resources.

Scaling
Challenging issues of scale arise in workflow execution. These issues will increasingly require advances over the state of the art, and they occur in multiple dimensions.
First, in many disciplines, individual workflows are becoming large as the quantities of data operated on become larger. As workflows scale from 1,000 to 10,000 and perhaps 1 million or more tasks, researchers might need new techniques to represent sets of tasks, manage those tasks, dispatch tasks efficiently to resources, monitor task execution, detect and deal with failures, and so on.
A second important scaling dimension is the number of workflows. Particularly in large communities, many users might submit many workflows at once. If these workflows compete for resources or otherwise interact, the runtime environment needs appropriate supporting mechanisms to arbitrate among competing demands.
A third scaling dimension concerns the number of resources involved. Ultimately, we can imagine workflows running on millions of data and computing resources (indeed, some systems such as SETI@home already operate at that scale). A fourth scaling dimension concerns the number of participants. In a simple case, a single user prepares and submits a workflow. In a more complex case, many participants might help define the workflow, contributing relevant data, managing its execution, and interpreting results.
We need to provide new infrastructure services to support workflow management. Some of these services are analogous to existing data management and information services, such as workflow repositories and registries. Other novel services will be concerned with workflows as active processes and the management of their execution state.

Infrastructure constraints
There's a perceived tension between workflow research challenges and the constraints that existing production-quality infrastructures impose. Shared infrastructures such as the Open Science Grid (www.opensciencegrid.org), the TeraGrid (www.teragrid.org), and NMI (www.nsf-middleware.org) provide widely used and well-tested capabilities to build on. These system-level infrastructure layers are designed to be production quality, but out of necessity haven't been designed to address workflows' specific requirements. Rather, they aim to meet a broader research community's needs. It's unlikely that we can make commitments by selecting particular architectures or implementations at the workflow layers of shared cyberinfrastructure. We must explore alternative architectures to understand design tradeoffs in different contexts. Examples include

• workflows designed and tested on a desktop and run with larger data in a cluster,
• workflows to handle streaming data,
• event-driven workflow-management engines, and
• architectures centered on interactivity.
At the same time, we could design these architectures
to be interoperable and compatible, where feasible, with
some overall end-to-end, multilevel framework. Follow-on discussions and workshops to understand and
address these issues will be extremely beneficial.
RECOMMENDATIONS
Workflows provide a formal specification of the scientific-analysis process from data collection, through
analysis, to data publication. We can view workflows
as recipes for cyberinfrastructure computations, providing a representation describing the end-to-end
processes involved in carrying out heterogeneous interdependent distributed computations.
Once scientists capture this process in declarative
workflow structures, they can use workflow-management tools to accelerate the rate of scientific progress by
creating, merging, executing, and reusing these
processes. By assisting scientists in reusing well-known
and common practices for analyses, complex computations will become a daily commodity for use in scientific
discovery. As scientists conduct experiments in neighboring disciplines, cross-disciplinary scientific analyses
will become commonplace.
The NSF workshop participants made the following
recommendations:
• Support basic research in computer science to create a science of workflows.
• Make explicit workflow representations that capture
scientific analysis processes at all levels the norm when
performing complex distributed scientific computations.
• Integrate workflow representations with other forms
of scientific record.
• Support and encourage cross-disciplinary projects
involving relevant areas of computer science as well
as domain sciences with distinct requirements and
challenges.
• Provide long-term, stable collaborations and programs.
• Define a road map to advance the research agenda of
scientific workflows while building on existing cyberinfrastructure.
• Coordinate between existing and new projects on
workflow systems and interoperation frameworks for
workflow tools.
• Hold follow-up, cross-cutting workshops and meetings and encourage discussions between subdisciplines of computer science.
Scientists view workflows as key enablers for reproducibility of experiments involving large-scale computations. Reproducibility is ingrained in the scientific
method, and there’s concern that without this ability,
scientists will reject cyberinfrastructure as a legitimate
means for conducting experiments. Representing scientific processes with enough fidelity and flexibility will be
a key challenge for the research community. Recognizing
that science has an exploratory and evolutionary nature,
workflows need to support dynamic and interactive
behavior. Thus, workflow systems need to become more
dynamic and amenable to steering by users and be more
responsive to changes in the environment.
Workflows should become first-class entities in the
cyberinfrastructure architecture. For domain
scientists, they’re important because workflows
document and manage the increasingly complex
processes involved in exploration and discovery through
computation. For computer scientists, workflows provide a formal and declarative representation of complex
distributed computations that must be managed efficiently through their life cycle from assembly, to execution, to sharing. ■
Acknowledgments
The NSF sponsored this workshop under grant #IIS-0629361. We thank Maria Zemankova, program manager of the Information and Intelligent Systems Division,
for supporting the workshop and contributing to the
discussions. We also thank the workshop attendees
(http://vtcpc.isi.edu/wiki/index.php/Participants) for
their contributions.
References
1. E. Deelman and Y. Gil, eds., Final Report of NSF Workshop on Challenges of Scientific Workflows, Nat'l Science Foundation; http://vtcpc.isi.edu/wiki/images/b/b2/NSFWorkshopFlyer-final.pdf.
2. E. Deelman and I. Taylor, eds., J. Grid Computing, special issue on scientific workflows, vol. 3, no. 3-4, Sept. 2005.
3. E. Deelman, Z. Zhao, and A. Belloum, eds., Scientific Programming J., special issue on workflows to support large-scale science, vol. 14, no. 3-4, 2006.
4. G. Fox and D. Gannon, eds., Concurrency and Computation: Practice and Experience, special issue on workflow in grid systems, vol. 18, no. 10, Aug. 2006.
5. B. Ludaescher and C. Goble, eds., SIGMOD Record, special issue on scientific workflows, vol. 34, no. 3, Sept. 2005; www.sigmod.org/record/issues/0509/index.html.
6. I.J. Taylor et al., eds., Workflows for e-Science: Scientific Workflows for Grids, Springer-Verlag, 2006.
7. Y. Gil et al., "Wings for Pegasus: Creating Large-Scale Scientific Applications Using Semantic Representations of Computational Workflows," Proc. Conf. Innovative Applications of Artificial Intelligence (IAAI), 2007, pp. 1767-1774; http://dblp.uni-trier.de/rec/bibtex/conf/aaai/GilRDMK07.
8. G.B. Berriman et al., "Montage: A Grid-Enabled Engine for Delivering Custom Science-Grade Mosaics On Demand," Proc. SPIE, vol. 5493, SPIE, 2004, pp. 221-234.
9. E. Deelman et al., "Pegasus: A Framework for Mapping Complex Scientific Workflows onto Distributed Systems," Scientific Programming J., vol. 13, 2005, pp. 219-237; http://vtcpc.isi.edu/wiki/images/f/f4/Pegasus.doc.
10. R.L. Beaton et al., "Unveiling the Boxy Bulge and Bar of the Andromeda Spiral Galaxy," Astrophysical J. Letters (in submission).
Yolanda Gil is the associate division director for research
of the Intelligent Systems Division at the University of
Southern California’s Information Sciences Institute (ISI)
and a research associate professor in the Computer Science
Department. She cochaired the NSF’s Workshop on Challenges of Scientific Workflows. Her research interests include
intelligent interfaces for knowledge-rich problem solving.
Gil received a PhD in computer science from Carnegie Mellon University. She is a member of the Association for the
Advancement of Artificial Intelligence. Contact her at gil@isi.edu.
Ewa Deelman is a project leader in the Advanced Systems
Division at USC’s ISI and cochaired the NSF’s Workshop on
Challenges of Scientific Workflows. Her main research interest is scientific workflow management in distributed environments. Deelman received a PhD in computer science
from Rensselaer Polytechnic Institute. She is a member of
the IEEE Computer Society. Contact her at deelman@isi.edu.
Mark Ellisman is the director of the Center for Research in
Biological Systems and National Center for Microscopy and
Imaging Research and a professor of neurosciences and bioengineering at the University of California, San Diego. His
research interests include the molecular and cellular basis
of nervous system function as well as the use of advanced
imaging and information technologies in brain research.
Ellisman received a PhD in molecular, cellular, and developmental biology from the University of Colorado, Boulder.
He is a founding fellow of the American Institute of Medical and Biological Engineering. Contact him at mellisman@ucsd.edu.
Thomas Fahringer is a professor and head of the Distributed and Parallel Systems Group at the Institute of Computer Science at the University of Innsbruck, Austria. His
research interests include distributed and parallel systems.
Fahringer received a PhD in computer science from the
Technical University of Vienna. He is a member of the IEEE
and the ACM. Contact him at tf@dps.uibk.ac.at.
Geoffrey Fox is a professor of physics and computer science at Indiana University and a distinguished scientist in
its Community Grids Laboratory. His research interests
include grids and parallel computing. Fox received a PhD
in theoretical physics from Cambridge University. He is a
member of the ACM and the IEEE Computer Society and
a fellow of the American Physical Society. Contact him at gcf@indiana.edu.
Dennis Gannon is a professor of computer science in the
School of Informatics at Indiana University. His research
interests include cyberinfrastructure, programming systems
and tools, and distributed computing. Gannon received a
PhD in mathematics from the University of California,
Davis, and a PhD in computer science from the University
of Illinois. Contact him at gannon@cs.indiana.edu.
Carole Goble is a professor at the University of Manchester, and director of the myGRID project. Her research interests are the Semantic Web, e-science, and grid communities.
Goble received a BSc from Manchester University. She is a
member of the IEEE, the ACM, and the British Computer
Society. Contact her at carole.goble@manchester.ac.uk.
Miron Livny is a computer science professor at the University of Wisconsin-Madison. His research interests include
high-throughput computing, visual data exploration, and
experiment-management environments. Livny received a
PhD in computer science from the Weizmann Institute of
Science, Israel. Contact him at miron@cs.wisc.edu.
Luc Moreau is a professor of computer science at the University of Southampton. His research interests include large-scale open distributed systems and provenance. He received
a PhD from the University of Liège, Belgium. He is a fellow
of the British Computer Society and a member of the ACM.
Contact him at L.Moreau@ecs.soton.ac.uk.
Jim Myers leads the Cyberenvironments and Technologies
Directorate at the National Center for Supercomputing
Applications. His research interests include open source collaborative tools. Myers received a PhD in chemistry from
the University of California, Berkeley. He is a member of
the American Chemical Society, the American Physical Society, the ACM, and the IEEE. Contact him at jimmyers@ncsa.uiuc.edu.
COVER FEATURE
The Case for Energy-Proportional Computing
Luiz André Barroso and Urs Hölzle
Google
Energy-proportional designs would enable large energy savings in servers, potentially
doubling their efficiency in real-life use. Achieving energy proportionality will require
significant improvements in the energy usage profile of every system component,
particularly the memory and disk subsystems.
Energy efficiency, a new focus for general-purpose
computing, has been a major technology driver
in the mobile and embedded areas for some time.
Earlier work emphasized extending battery life,
but it has since expanded to include peak power
reduction because thermal constraints began to limit further CPU performance improvements.
Energy management has now become a key issue for
servers and data center operations, focusing on the
reduction of all energy-related costs, including capital,
operating expenses, and environmental impacts. Many
energy-saving techniques developed for mobile devices
became natural candidates for tackling this new problem
space. Although servers clearly provide many parallels
to the mobile space, we believe that they require additional energy-efficiency innovations.
In current servers, the lowest energy-efficiency region
corresponds to their most common operating mode.
Addressing this mismatch will require significant
rethinking of components and systems. To that end, we
propose that energy proportionality should become a
primary design goal. Although our experience in the
server space motivates these observations, we believe
that energy-proportional computing also will benefit
other types of computing devices.
DOLLARS & CO2
Recent reports1,2 highlight a growing concern with
computer-energy consumption and show how current
trends could make energy a dominant factor in the total
cost of ownership.3 Besides the server electricity bill,
TCO includes other energy-dependent components such
as the cost of energy for the cooling infrastructure and
provisioning costs, specifically the data center infrastructure’s cost. To a first-order approximation, both
cooling and provisioning costs are proportional to the
average energy that servers consume, therefore energy
efficiency improvements should benefit all energy-dependent TCO components.
Efforts such as the Climate Savers Computing Initiative
(www.climatesaverscomputing.org) could help lower
worldwide computer energy consumption by promoting
widespread adoption of high-efficiency power supplies
and encouraging the use of power-savings features
already present in users’ equipment. The introduction of
more efficient CPUs based on chip multiprocessing has
also contributed positively toward more energy-efficient
servers.3 However, long-term technology trends invariably indicate that higher performance means increased
energy usage. As a result, energy efficiency must improve
as fast as computing performance to avoid a significant
growth in computers’ energy footprint.
SERVERS VERSUS LAPTOPS
Many of the low-power techniques developed for
mobile devices directly benefit general-purpose servers,
including multiple voltage planes, an array of energy-efficient circuit techniques, clock gating, and dynamic
voltage-frequency scaling. Mobile devices require high performance for short periods while the user awaits a response, followed by relatively long idle intervals of seconds or minutes. Many embedded computers, such as sensor network agents, present a similar bimodal usage model.4
This kind of activity pattern steers designers to emphasize high energy efficiency at peak performance levels and in idle mode, supporting inactive low-energy states, such as sleep or standby, that consume near-zero energy. However, the usage model for servers, especially those used in large-scale Internet services, has very different characteristics.
Figure 1 shows the distribution of CPU utilization levels for thousands of servers during a six-month interval.5 Although the actual shape of the distribution varies significantly across services, two key observations from Figure 1 can be generalized: Servers are rarely completely idle and seldom operate near their maximum utilization. Instead, servers operate most of the time at between 10 and 50 percent of their maximum utilization levels. Such behavior is not accidental, but results from observing sound service provisioning and distributed systems design principles.

Figure 1. Average CPU utilization (fraction of time versus CPU utilization) of more than 5,000 servers during a six-month period. Servers are rarely completely idle and seldom operate near their maximum utilization, instead operating most of the time at between 10 and 50 percent of their maximum utilization levels.

An Internet service provisioned such that the average load approaches 100 percent will likely have difficulty meeting throughput and latency service-level agreements because minor traffic fluctuations or any internal disruption, such as hardware or software faults, could tip it over the edge. Moreover, the lack of a reasonable amount of slack makes regular operations exceedingly complex because any maintenance task has the potential to cause serious service disruptions. Similarly, well-provisioned services are unlikely to spend significant amounts of time completely idle because doing so would represent a substantial waste of capital.
Even during periods of low service demand, servers are unlikely to be fully idle. Large-scale services usually require hundreds of servers and distribute the load over these machines. In some cases, it might be possible to completely idle a subset of servers during low-activity periods by, for example, shrinking the number of active front ends. Often, though, this is hard to accomplish because data, not just computation, is distributed among machines. For example, a common practice calls for spreading user data across many databases to eliminate the bottleneck that a central database holding all users poses.
Spreading data across multiple machines improves data availability as well because it reduces the likelihood that a crash will cause data loss. It can also help hasten recovery from crashes by spreading the recovery load across a greater number of nodes, as is done in the Google File System.6 As a result, all servers must be available, even during low-load periods. In addition, networked servers frequently perform many small background tasks that make it impossible for them to enter a sleep state.
With few windows of complete idleness, servers cannot take advantage of the existing inactive energy-savings modes that mobile devices otherwise find so effective. Although developers can sometimes restructure applications to create useful idle intervals during periods of reduced load, in practice this is often difficult and even harder to maintain. The Tickless kernel7 exemplifies some of the challenges involved in creating and maintaining idleness. Moreover, the most attractive inactive energy-savings modes tend to be those with the highest wake-up penalties, such as disk spin-up time, and thus their use complicates application deployment and greatly reduces their practicality.
ENERGY EFFICIENCY AT VARYING UTILIZATION LEVELS
Server power consumption responds differently to varying utilization levels. We loosely define utilization as a measure of the application performance—such as requests per second on a Web server—normalized to the performance at peak load levels. Figure 2 shows the power usage of a typical energy-efficient server, normalized to its maximum power, as a function of utilization. Essentially, even an energy-efficient server still consumes about half its full power when doing virtually no work. Servers designed with less attention to energy efficiency often idle at even higher power levels.

Figure 2. Server power usage and energy efficiency at varying utilization levels, from idle to peak performance (both plotted as a percentage of peak against utilization in percent; the typical operating region lies between 10 and 50 percent utilization). Even an energy-efficient server still consumes about half its full power when doing virtually no work.

Seeing the effect this narrow dynamic power range has on such a system's energy efficiency—represented by the red curve in Figure 2—is both enlightening and discouraging. To derive power efficiency, we simply divide utilization by its corresponding power value. We see that peak energy efficiency occurs at peak utilization and drops quickly as utilization decreases. Notably, energy efficiency in the 20 to 30 percent utilization range—the point at which servers spend most of their time—has dropped to less than half the energy efficiency at peak performance. Clearly, such a profile matches poorly with the usage characteristics of server-class applications.
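The shape of that efficiency curve follows directly from the division. The short calculation below assumes an idealized linear power model in which a server draws 50 percent of peak power at idle and 100 percent at full utilization, roughly the behavior Figure 2 depicts; the numbers are illustrative, not measurements.

# Energy efficiency = utilization / normalized power, for a server that draws
# 50 percent of peak power at idle (an assumed, simplified linear model).
IDLE_FRACTION = 0.5

def power(utilization):
    """Normalized power draw (fraction of peak) at a utilization between 0 and 1."""
    return IDLE_FRACTION + (1.0 - IDLE_FRACTION) * utilization

for u in (0.1, 0.2, 0.3, 0.5, 1.0):
    print(f"utilization {u:.0%}: power {power(u):.2f} of peak, "
          f"efficiency {u / power(u):.2f}")

Lowering the assumed idle fraction to 0.1 lifts the efficiency at 30 percent utilization to roughly 0.8 of peak, which is the kind of profile the article argues for below.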
TOWARD ENERGY-PROPORTIONAL MACHINES
Addressing the mismatch between the servers' energy-efficiency characteristics and the behavior of server-class workloads is primarily the responsibility of component and system designers. They should aim to develop machines that consume energy in proportion to the amount of work performed. Such energy-proportional machines would ideally consume no power when idle (easy with inactive power modes), nearly no power when very little work is performed (harder), and gradually more power as the activity level increases (also harder).
Energy-proportional machines would exhibit a wide dynamic power range—a property that might be rare today in computing equipment but is not unprecedented in other domains. Humans, for example, have an average daily energy consumption approaching that of an old personal computer: about 120 W. However, humans at rest can consume as little as 70 W,8 while being able to sustain peaks of well over 1 kW for tens of minutes, with elite athletes reportedly approaching 2 kW.9
Breaking down server power consumption into its main components can be useful in helping to better understand the key challenges for achieving energy proportionality. Figure 3 shows the fraction of total server power consumed by the CPU in two generations of Google servers built in 2005 and 2007.

Figure 3. CPU contribution to total server power (in percent) for two generations of Google servers at peak performance (the first two bars) and for the later generation at idle (the rightmost bar).

The CPU no longer dominates platform power at peak usage in modern servers, and since processors are adopting energy-efficiency techniques more aggressively than other system components, we would expect CPUs to contribute an even smaller fraction of peak power in future systems. Comparing the second and third bars in Figure 3 provides useful insights. In the same platform, the 2007 server, the CPU represents an even smaller fraction of total power when the system
is idle, suggesting that processors are closer to exhibiting the energy-proportional behavior we seek.
Two key CPU features are particularly useful for achieving energy proportionality and are worthy of imitation by other components.

Wide dynamic power range
Current desktop and server processors can consume less than one-third of their peak power at very-low activity modes, creating a dynamic range of more than 70 percent of peak power. CPUs targeted at the mobile or embedded markets can do even better, with idle power often reaching one-tenth or less of peak power.10 They achieve this even when not using any performance-impacting—or software-visible—energy-saving modes.
In our experience, the dynamic power range of all other components is much narrower: less than 50 percent for DRAM, 25 percent for disk drives, and 15 percent for networking switches.

Active low-power modes
A processor running at a lower voltage-frequency mode can still execute instructions without requiring a performance-impacting mode transition. It is still active. There are no other components in the system with active low-power modes. Networking equipment rarely offers any low-power modes, and the only low-power modes currently available in mainstream DRAM and disks are fully inactive. That is, using the device requires paying a latency and energy penalty for an inactive-to-active mode transition. Such penalties can significantly degrade the performance of systems idle only at submillisecond time scales.
Compared to today's machines, servers with a dynamic power range of 90 percent, shown in Figure 4, could cut by one-half the energy used in data center operations.5 They would also lower peak power at the facility level by more than 30 percent, based on simulations of real-world data center workloads. These are dramatic improvements, especially considering that they arise from optimizations that leave peak server power unchanged. The power efficiency curve in Figure 4 fundamentally explains these gains. This server has a power efficiency of more than 80 percent of its peak value for utilizations of 30 percent and above, with efficiency remaining above 50 percent for utilization levels as low as 10 percent.

Figure 4. Power usage and energy efficiency in a more energy-proportional server (both as a percentage of peak, versus utilization in percent). This server has a power efficiency of more than 80 percent of its peak value for utilizations of 30 percent and above, with efficiency remaining above 50 percent for utilization levels as low as 10 percent.

In addition to its energy-savings potential, energy-proportional hardware could obviate the need for power management software, or at least simplify it substantially, reducing power management to managing utilization.
Fundamentally, the latency and energy penalties incurred to transition to the active state when starting an operation make an inactive energy-savings mode less useful for servers. For example, a disk drive in a spun-down, deep-sleep state might use almost no energy, but a transition to active mode incurs a latency penalty 1,000 times higher than a regular access latency. Spinning up the platters also carries a large energy penalty. Such a huge activation penalty restricts spin-down modes to situations in which the device will be idle for several minutes; this rarely occurs in servers. On the other hand, inactive energy-savings modes with wake-up penalties of only a small fraction of the regular operations' latency are more likely to benefit the server space, even if their low-energy state operates at relatively higher energy levels than would be possible in deep-sleep modes.
Active energy-savings schemes, by contrast, are useful even when the latency and energy penalties to transition to a high-performance mode are significant. Since active modes are operational, systems can remain in low-energy states for as long as they remain below certain load thresholds. Given that periods of low activity are more common and longer than periods of full idleness, the overheads of transitioning between active energy-savings modes amortize more effectively.

Servers and desktop computers benefit from much of the energy-efficiency research and development that was initially driven by mobile devices' needs. However, unlike mobile devices, which idle for long
periods, servers spend most of their time at moderate utilizations of 10 to 50 percent and exhibit poor efficiency
at these levels. Energy-proportional computers would
enable large additional energy savings, potentially doubling the efficiency of a typical server. Some CPUs already
exhibit reasonably energy-proportional profiles, but most
other server components do not.
We need significant improvements in memory and disk
subsystems, as these components are responsible for
an increasing fraction of the system energy usage.
Developers should make better energy proportionality
a primary design objective for future components and
systems. To this end, we urge energy-efficiency benchmark developers to report measurements at nonpeak
activity levels for a more complete characterization of a
system’s energy behavior. ■
Acknowledgments
We thank Xiaobo Fan and Wolf-Dietrich Weber for
coauthoring the power provisioning study that motivated this work, and Catherine Warner for her comments on the manuscript.
References
1. US Environmental Protection Agency, "Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431"; www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf.
2. J.G. Koomey, "Estimating Total Power Consumption by Servers in the U.S. and the World"; http://enterprise.amd.com/Downloads/svrpwrusecompletefinal.pdf.
3. L.A. Barroso, "The Price of Performance: An Economic Case for Chip Multiprocessing," ACM Queue, Sept. 2005, pp. 48-53.
4. J. Hill et al., "System Architecture Directions for Networked Sensors," SIGOPS Oper. Syst. Rev., ACM Press, vol. 34, no. 5, 2000, pp. 93-104.
5. X. Fan, W.-D. Weber, and L.A. Barroso, "Power Provisioning for a Warehouse-Sized Computer"; http://research.google.com/archive/power_provisioning.pdf.
6. S. Ghemawat, H. Gobioff, and S.-T. Leung, "The Google File System"; www.cs.rochester.edu/meetings/sosp2003/papers/p125-ghemawat.pdf.
7. S. Siddha, V. Pallipadi, and A. Van De Ven, "Getting Maximum Mileage Out of Tickless," Proc. 2007 Linux Symp., 2007, pp. 201-208.
8. E. Ravussin et al., "Determinants of 24-Hour Energy Expenditure in Man: Methods and Results Using a Respiratory Chamber"; www.pubmedcentral.nih.gov/picrender.fcgi?artid=423919&blobtype=pdf.
9. E.F. Coyle, "Improved Muscular Efficiency Displayed as Tour de France Champion Matures"; http://jap.physiology.org/cgi/reprint/98/6/2191.
10. Z. Chen et al., "A 25W(max) SoC with Dual 2GHz Power Cores and Integrated Memory and I/O Subsystems"; www.pasemi.com/downloads/PA_Semi_ISSCC_2007.pdf.
Luiz André Barroso is a distinguished engineer at Google.
His interests range from distributed system software infrastructure to the design of Google’s computing platform.
Barroso received a PhD in computer engineering from the
University of Southern California. Contact him at luiz@google.com.
Urs Hölzle is the senior vice president of operations at
Google and a Google Fellow. His interests include large-scale clusters, cluster networking, Internet performance,
and data center design. Hölzle received a PhD in computer
science from Stanford University. Contact him at urs@google.com.
COVER FEATURE
Models and Metrics to Enable Energy-Efficiency Optimizations
Suzanne Rivoire, Stanford University
Mehul A. Shah and Parthasarathy Ranganathan, Hewlett-Packard Laboratories
Christos Kozyrakis, Stanford University
Justin Meza, University of California, Los Angeles
Power consumption and energy efficiency are important factors in the initial design and
day-to-day management of computer systems. Researchers and system designers need
benchmarks that characterize energy efficiency to evaluate systems and identify promising
new technologies. To predict the effects of new designs and configurations, they also need
accurate methods of modeling power consumption.
In recent years, the power consumption of servers
and data centers has become a major concern.
According to the US Environmental Protection
Agency, enterprise power consumption in the US
doubled between 2000 and 2006 (www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf), and will double again in the next five years. Server power consumption not only directly affects a data center's electricity
costs, but also necessitates the purchase and operation
of cooling equipment, which can consume from
one-half to one watt for every watt of server power
consumption.
All of these power-related costs can potentially exceed
the cost of purchasing hardware. Moreover, the environmental impact of data center power consumption is
receiving increasing attention, as is the effect of escalating power densities on the ability to pack machines into
a data center.1
The two major and complementary ways to approach
this problem involve building energy efficiency into the
initial design of components and systems, and adaptively
managing the power consumption of systems or groups
of systems in response to changing conditions in the
workload or environment. Examples of the former
approach include
• circuit techniques such as disabling the clock signal
to a processor’s unused parts;
• architectural techniques such as replacing complex
uniprocessors with multiple simple cores; and
• support for multiple low-power states in processors,
memory, and disks.
At the system level, the latter approach requires policies to intelligently exploit these low-power states for
energy savings. Across multiple systems in a cluster or
data center, these policies can involve dynamically adapting workload placement or power provisioning to meet
specific energy or thermal goals.1
To facilitate these optimizations, we need metrics to
define energy efficiency, which will help designers compare designs and identify promising energy-efficient technologies. We also need models to predict the effects of
dynamic power management policies, particularly over
many systems. Unlike the significant body of work on
power management and optimization, there has been
relatively little focus on metrics and models.
We address the challenges in defining metrics for energy
efficiency with a specific case study on JouleSort, which
provides a complete, full-system benchmark for energy
efficiency across a variety of system classes.2 The "Approaches to Power Modeling in Computer Systems" sidebar describes different approaches to modeling power consumption in components, systems, and data centers.
Approaches to Power Modeling in Computer Systems

Power models are fundamental to energy-efficiency research, whether the goal is to improve the components’ and systems’ design or to efficiently use existing hardware. Developers use these models offline to evaluate proposed designs, and online in policies to exploit component power modes within a system, or to efficiently distribute work across several systems.

An ideal power model has several properties. First, it must be accurate. It should also be portable across a variety of existing and future hardware designs, and applicable under a variety of workloads. Finally, it should be cost-effective in its hardware and software requirements and execute swiftly.

Power models used in simulators of proposed hardware trade speed and portability for increased accuracy, relying on detailed knowledge of component architecture and circuit technology. Wattch1 is a widely used CPU power model that estimates the power costs of accessing different parts of the processor and combines this information with activity counts from a performance simulator to yield power estimates. Similar models have been proposed for other components, including memory, disks, and networking, as well as complete systems.2,3 These simulators are highly accurate, but also closely tied to specific systems and simulation infrastructures that are much slower than actual hardware.

Models used in online power-management policies, for which speed is a first-class constraint, cannot rely on such detailed simulation. Using real-time system events instead of simulated activity counts addresses this drawback. Frank Bellosa proposed using processor performance counter registers to provide on-the-fly power characterization of real systems.4 His simple and portable model used the counts of instructions executed and memory accesses to drive the selection of a processor’s frequency states. More detailed and processor-specific performance-counter-based models have been developed to model both power5 and thermal6 properties. Finally, since researchers developed performance counter options with application profiling rather than power estimation in mind, Ismail Kadayif and colleagues proposed an interface based on “energy counters” that would virtualize the existing performance counters.7

Processor performance counters can be used to estimate processor and memory power consumption, but do not take other parts of the system, such as I/O, into account. Some optimizations, such as data-center-level optimizations that turn off unused machines, must consider the full-system power. In this case, OS utilization metrics can be used to model the base system components quickly, portably, and with reasonable accuracy. Taliver Heath and colleagues8 and Dimitris Economou and colleagues9 build linear models based on OS-reported utilization of each component. Both approaches require an initial calibration phase, in which developers connect the system to a power meter and run microbenchmarks to stress each component. They then fit the utilization data to the power measurements to construct a model. Figure A shows an example of one such model.9

Figure A. Accuracy of the model created by Mantis9 for a low-power blade. The model predicts the power at a given time t as

p_pred,t = 14.45 + 23.61 x u_cpu,t - 0.85 x u_mem,t + 22.32 x u_disk,t + 0.03 x u_net,t,

a function of the CPU utilization and the number of memory, disk, and network accesses sampled at that time. Each utilization input u is given as a percentage of its maximum value. The average prediction error of this coarse-grained linear model is less than 10 percent for every benchmark tested (Matrix, Stream, SPECjbb, SPECweb, SPECint, and SPECfp)—sufficient for most scheduling optimizations.

Parthasarathy Ranganathan and Phil Leech used a similar approach to predict both power and performance by constructing lookup tables based on utilization.10 Finally, researchers from Google found that an even simpler model, based solely on OS-reported processor utilization, proved sufficiently accurate to enable optimizations over a large group of homogeneous machines.11

Optimizations for energy efficiency rely on accurate, fast, cost-effective, and portable power models. While many models have been developed to address individual needs, creating systematic methods of generating widely portable and highly accurate models remains an open problem. Such methods could facilitate further innovations in energy-efficient system design and management.

References
1. D. Brooks, V. Tiwari, and M. Martonosi, “Wattch: A Framework for Architectural-Level Power Analysis and Optimizations,” Proc. Int’l Symp. Computer Architecture (ISCA), ACM Press, 2000, pp. 83-94.
2. W. Ye et al., “The Design and Use of SimplePower: A Cycle-Accurate Energy Estimation Tool,” Proc. Design Automation Conf. (DAC), IEEE CS Press, 2000, pp. 340-345.
3. S. Gurumurthi et al., “Using Complete Machine Simulation for Software Power Estimation: The SoftWatt Approach,” Proc. High-Performance Computer Architecture Symp. (HPCA), IEEE CS Press, 2002, pp. 141-150.
4. F. Bellosa, “The Benefits of Event-Driven Energy Accounting in Power-Sensitive Systems,” Proc. SIGOPS European Workshop, ACM Press, 2000, pp. 37-42.
5. G. Contreras and M. Martonosi, “Power Prediction for Intel XScale Processors Using Performance Monitoring Unit Events,” Proc. Int’l Symp. Low-Power Electronics and Design (ISLPED), ACM Press, 2005, pp. 221-226.
6. K. Skadron et al., “Temperature-Aware Microarchitecture: Modeling and Implementation,” ACM Trans. Architecture and Code Optimization, vol. 1, no. 1, 2004, pp. 94-125.
7. I. Kadayif et al., “vEC: Virtual Energy Counters,” Proc. PASTE, ACM Press, 2001, pp. 28-31.
8. T. Heath et al., “Energy Conservation in Heterogeneous Server Clusters,” Proc. ACM SIGPLAN Symp. Principles and Practice of Parallel Programming (PPoPP), ACM Press, 2005, pp. 186-195.
9. D. Economou et al., “Full-System Power Analysis and Modeling for Server Environments”; http://csl.stanford.edu/%7Echristos/publications/2006.mantis.mobs.pdf.
10. P. Ranganathan and P. Leech, “Simulating Complex Enterprise Workloads Using Utilization Traces”; www.hpl.hp.com/personal/Partha_Ranganathan/papers/2007/2007_caecw_bladesim.pdf.
11. X. Fan, W-D. Weber, and L.A. Barroso, “Power Provisioning for a Warehouse-Sized Computer,” Proc. Int’l Symp. Computer Architecture (ISCA), ACM Press, 2007, pp. 13-23.
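To make the calibration-and-fit procedure that the sidebar describes concrete, the following minimal Python sketch fits a Mantis-style linear model to hypothetical utilization and wall-power samples. The microbenchmark mix, utilization values, and power readings are invented for illustration; real models such as Mantis use their own feature sets and fitting details.

# Minimal sketch of the calibration-and-fit procedure described above: fit a
# linear full-system power model from OS-reported utilizations and wall-power
# readings collected while microbenchmarks stress each component.
# All sample values below are hypothetical.
import numpy as np

# Each calibration sample: (CPU, memory, disk, network) activity as a
# fraction of its observed maximum.
utilization = np.array([
    [0.05, 0.02, 0.01, 0.00],   # near idle
    [0.95, 0.10, 0.02, 0.01],   # CPU-bound microbenchmark
    [0.20, 0.90, 0.05, 0.01],   # memory-bound microbenchmark
    [0.15, 0.05, 0.85, 0.02],   # disk-bound microbenchmark
    [0.10, 0.05, 0.03, 0.80],   # network-bound microbenchmark
    [0.50, 0.40, 0.30, 0.10],   # mixed workload
])
measured_power_w = np.array([14.8, 37.5, 16.0, 33.5, 15.2, 25.0])  # wall power (W)

# Add a constant column so the fit also learns the idle (base) power.
design = np.hstack([np.ones((len(utilization), 1)), utilization])
coeffs, *_ = np.linalg.lstsq(design, measured_power_w, rcond=None)
base, cpu_w, mem_w, disk_w, net_w = coeffs

def predict_power(u_cpu, u_mem, u_disk, u_net):
    """Predict full-system power (W) from component utilizations."""
    return base + cpu_w * u_cpu + mem_w * u_mem + disk_w * u_disk + net_w * u_net

print(predict_power(0.5, 0.2, 0.1, 0.05))

Once calibrated against a power meter, such a model needs only OS-reported utilization at runtime, which is what makes it attractive for the online policies the sidebar discusses.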
ENERGY-EFFICIENCY METRICS
An ideal benchmark for energy efficiency would consist
of a universally relevant workload, a metric that balances
power and performance in a universally appropriate way,
and rules that provide impossible-to-circumvent, fair
comparisons. Since such a benchmark is impossible to construct, several different approaches have addressed pieces of the energy-efficiency evaluation problem, from chips to data centers. Table 1 summarizes these approaches.

Table 1. Summary of energy efficiency benchmarks and metrics.

Analysis tool — Metric: Performance^N per watt. Level: Any. Domain: Any. Workload: Unspecified. Comment: Different balances of performance and power are important in different contexts; N = 0 represents power alone, and N = 2 corresponds to the energy-delay product.
EnergyBench — Metric: Throughput per Joule. Level: Processor. Domain: Embedded. Workload: EEMBC benchmarks.
SWaP — Metric: Performance/(space x watts). Level: System(s). Domain: Enterprise. Workload: Unspecified. Comment: Addresses both space and power concerns.
Energy Star certification: workstations — Metric: Certify if “typical” power is less than 35 percent of “maximum” power. Level: System. Domain: Enterprise. Workload: Sleep, idle, and standby power (typical); Linpack and SPECviewperf (maximum).
Energy Star certification: other systems — Metric: Certify if each mode is below a predefined threshold for that system class. Level: System. Domain: Mobile, desktop, small server. Workload: Sleep, idle, and standby modes.
SPEC Power and Performance — Metric: Not yet released. Level: System. Domain: Enterprise. Workload: Server-side Java under varying loads. Comment: Expected late 2007.
JouleSort — Metric: Records sorted per Joule. Level: System. Domain: Mobile, desktop, enterprise. Workload: External sort. Comment: Has three benchmark classes with different workload size.
Green Grid DCE — Metric: Percent of facility power that reaches IT equipment. Level: Data center. Domain: Enterprise. Workload: n/a.
Green Grid DCPE — Metric: Work done/total facility power (W). Level: Data center. Domain: Enterprise. Workload: Not yet determined.

Each metric in Table 1 addresses a particular energy-related problem, from minimizing power consumption
in embedded processors (EnergyBench) to evaluating the
efficiency of data center cooling and power provisioning
(Green Grid metrics). However, only JouleSort specifies
a workload, a metric to compare two systems, and rules
for running the benchmark.
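The first row of Table 1 can be read as a family of metrics. The short sketch below, with invented performance and power numbers, shows how the choice of N shifts the comparison between a hypothetical low-power system and a hypothetical high-performance one.

# Sketch of the generalized performance^N-per-watt metric from the first row
# of Table 1, applied to two hypothetical systems. N = 0 reduces to power
# alone, N = 1 is performance per watt, and N = 2 weights performance the way
# the energy-delay product does.
def perf_n_per_watt(performance, power_w, n):
    """Return performance**n / power (higher is better)."""
    return performance ** n / power_w

systems = {"low-power": (200.0, 50.0), "high-perf": (500.0, 250.0)}  # (perf, watts)

for n in (0, 1, 2):
    scores = {name: perf_n_per_watt(p, w, n) for name, (p, w) in systems.items()}
    print(n, scores)
# With these numbers, the low-power system wins for N = 0 and N = 1, while
# the high-performance system wins for N = 2.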
At the processor level, Ricardo Gonzalez and Mark
Horowitz argued in 1996 that the energy-delay product3 provides the appropriate metric for comparing two
designs. They observed that a chip’s performance and
power consumption are both related to the clock frequency, with performance directly proportional to, and
power consumption increasing quadratically with,
clock frequency. Comparing processors based on
energy, which is the product of execution time and
power, would therefore motivate processor designers
to focus solely on lowering clock frequency at the
expense of performance. On the other hand, the
energy-delay product, which weighs power against the
square of execution time, would show the underlying
design’s energy efficiency rather than merely reflecting
the clock frequency.
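As a hypothetical illustration of this argument, the following sketch compares two made-up design points, one at full clock frequency and one at half frequency that is assumed to draw roughly one-quarter the power for twice the runtime, under both metrics. The numbers are not from the article.

# Hypothetical numbers illustrating why energy alone favors low clock
# frequencies while the energy-delay product does not.
designs = {
    "A (full clock)": {"time_s": 10.0, "power_w": 40.0},
    "B (half clock)": {"time_s": 20.0, "power_w": 10.0},
}

for name, d in designs.items():
    energy_j = d["power_w"] * d["time_s"]     # E = P * t
    energy_delay = energy_j * d["time_s"]     # E * D = P * t^2
    print(name, energy_j, energy_delay)
# Design A: energy 400 J, energy-delay 4000 J*s. Design B: energy 200 J,
# energy-delay 4000 J*s. Energy alone rewards the slower design, while the
# energy-delay product treats the two frequency points the same.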
In the embedded domain, the Embedded Microprocessor Benchmark Consortium (EEMBC) has proposed the EnergyBench processor benchmarks.4
EnergyBench provides a standardized data acquisition
infrastructure for measuring processor power when running one of EEMBC’s existing performance benchmarks.
Benchmark scores are then reported as “netmarks per
Joule” for networking benchmarks and “telemarks per
Joule” for telecommunications benchmarks.
The single-system level is the target of several recent
metrics and benchmarking efforts. Performance per watt
became a popular metric for servers once power became
an important design consideration. Performance is typically specified with either MIPS or the rating from peak
performance benchmarks like SPECint or TPC-C. Sun
Microsystems has proposed the SWaP (space, watts, and
performance) metric to include data center space efficiency as well as power consumption.5
Two evolving standards in system-level energy efficiency
are the US government’s Energy Star certification guidelines for computers and the SPEC Power and Performance
committee’s upcoming benchmark. Energy Star is a
designation given by the US government to highly energy-efficient household products, and has recently been
expanded to include computers.6 For most system classes,
systems with idle, sleep, and standby power consumptions
below a certain threshold will receive the Energy Star rating. For workstations, however, the Energy Star rating
requires that the “typical” power—a weighted function
of the idle, sleep, and standby power consumptions—not
exceed 35 percent of the “maximum power” (the power
consumed during the Linpack and SPECviewperf benchmarks, plus a factor based on the number of installed hard
disks). Energy Star certification also requires that a system’s power supply efficiency exceed 80 percent.
The SPEC power and performance benchmark
remains under development.7 The workload will be
server-side Java-based, and designed to exercise the system at a variety of usage levels, since servers tend to be
underutilized in data center environments. The committee expects to release the specific workload and metric of comparison in late 2007.
At the data center level, metrics have been proposed
to guide holistic optimizations. To optimize data center
cooling, Chandrakant Patel and others have advocated
a metric based on weighing performance against the
exergy destroyed. Roughly speaking, exergy is the
energy available for doing useful work.8
Finally, the Green Grid, an industrial consortium that
includes most major hardware vendors, recently introduced the data center efficiency metric.9 The Green Grid
proposal defines DCE as the percentage of total facility
power that goes to the “IT equipment”—primarily compute, storage, and network. In the long term, rather than
using IT equipment power as a proxy for performance, the
Green Grid advocates data center performance efficiency,
or the useful work divided by the total facility power.
Each of these metrics is useful in evaluating energy
efficiency in a particular context, from embedded
processors to underutilized servers to entire data centers. However, researchers have not methodically
addressed energy-efficiency metrics for many important
computing domains. For example, there are no full-system benchmarks that specify a workload, a metric to
compare two systems, and rules for running the benchmark. The recently proposed JouleSort benchmark
addresses this space.
JOULESORT BENCHMARK
We designed the JouleSort benchmark with several
goals in mind. First, the benchmark should evaluate the
power-performance tradeoff—that is, the benchmark score should not reward high performance or low power alone. Two reasonable metrics for the benchmark are thus energy (the product of average power consumption and execution time) and the energy-delay product, which places more emphasis on performance.

We chose energy for two reasons. First, plenty of performance benchmarks already exist, so we wanted to be sure our benchmark emphasized power. Second, the tradeoff between performance and power at the system level does not display the straightforward quadratic relationship seen at the processor level, which motivated use of the energy-delay metric.

Further, the benchmark should evaluate a system’s peak energy efficiency, which for today’s systems occurs at peak utilization. While peak utilization offers a realistic scenario in some domains, data center servers in particular are notoriously underutilized. However, benchmarking at peak utilization is justified for several reasons. First, peak utilization is simpler to define and measure, and it makes the benchmark more difficult to circumvent. Additionally, knowing the upper bound on energy efficiency for a particular system is useful. In enterprise environments, for example, this upper bound provides a target for server consolidation.

Next, the benchmark should be balanced. It should stress all core system components, and the metric should incorporate the energy that all components use. It should also be representative of important workloads and simple to implement and administer.

Finally, the benchmark should be inclusive, encompassing as many past, current, and future systems as possible. For inclusiveness, the benchmark must be meaningful and measurable on as many system classes as possible. The workload and metric should apply to a wide range of technologies.

Benchmark workload

For our benchmark’s workload, we chose to use the external sort from the sort benchmarks’ specification (http://research.microsoft.com/research/barc/SortBenchmark/default.htm). External sort has been a benchmark of interest in the database community since 1985, and researchers have used it to understand the system-level effectiveness of algorithmic and component improvements and identify promising technology trends. Previous sort benchmark winners have foreshadowed the transition from supercomputers to commodity clusters, and recently showed the promise of general-purpose computation on graphics processing units (GPUs).10

The sort benchmarks currently have three active categories, as summarized in Table 2. PennySort is a price-performance benchmark that measures the number of records a system can sort for one penny, assuming a three-year depreciation. MinuteSort and TerabyteSort measure a system’s pure performance in sorting for a fixed time of one minute and a fixed data set of one Tbyte, respectively. JouleSort, to measure the power-performance tradeoff, is thus a logical addition to the sort benchmark repertoire. The original Datamation sort benchmark compared the amount of time systems took to sort 1 million records; it is now deprecated since this task is trivial on modern systems.

Table 2. Summary of sort benchmarks.10

PennySort — Sort as many records as possible for one cent, assuming a 3-year depreciation. Status: Active.
MinuteSort — Sort as many records as possible in less than a minute. Status: Active.
TerabyteSort — Sort a Tbyte of data (10 billion records) as quickly as possible. Status: Active.
Datamation — Sort 1 million records as quickly as possible. Status: Deprecated.
JouleSort — Sort a fixed number of records (approx. 10 Gbytes, 100 Gbytes, 1 Tbyte) using as little energy as possible. Status: Proposed.

The workload can be summarized as follows: Sort a file consisting of randomly permuted 100-byte records with 10-byte keys. The input file must be read from—and the output file written to—nonvolatile storage. The output file must be newly created rather than overwriting the input file, and all intermediate files that the sort program uses must be deleted.

This workload meets our benchmark goals satisfactorily. It is balanced, stressing I/O, memory, the CPU, the OS, and the file system. It is representative and inclusive; it resembles sequential, I/O-intensive data-management workloads that are found on most platforms, from cell phones processing multimedia data to clusters performing large-scale parallel data analysis. The Sort Benchmark’s longevity testifies to its enduring applicability as technology changes.

Benchmark metric

Designing a metric that allows fair comparisons across systems and avoids loopholes that obviate the benchmark presents a major challenge in benchmark development. For JouleSort, we seek to evaluate the power-performance balance of different systems, giving power and performance equal weight. We could have defined the JouleSort benchmark score in three different ways:

• Set a fixed energy budget for the sort, and compare systems based on the number of records sorted within that budget.
• Set a fixed time budget for the sort, and compare systems based on the number of records sorted and the amount of energy consumed, expressed as records sorted per Joule.
• Set a fixed workload size for the sort, and compare systems based on the amount of energy consumed.

The fixed-energy budget and fixed workload both have the drawback that a single fixed budget will not be applicable to all classes of systems, necessitating multiple benchmark classes and updates to the class definitions as technology changes. The fixed-energy budget has the further drawback of being difficult to benchmark. Since energy is the product of power and time, it is affected by variations in both quantities. Measurement error from power meters only compounds this problem.

By contrast, using a reasonably low fixed-time budget and a metric of records sorted per Joule would avoid this problem; however, two more serious issues eliminate it from consideration. Figure 1 illustrates these, showing the records sorted per Joule for our best-performing system while running different workload sizes. From the left, the smallest data set sizes take only a few seconds and thus poorly amortize the startup overhead. As data sets grow larger, this overhead amortizes better, while efficiency increases, up to 15 million records. This is the largest data set that fits completely in memory. For larger sizes, the system must temporarily write data to disk, doubling the amount of I/O and decreasing performance dramatically. After this transition, energy efficiency stays relatively constant, with a slow trend downward.

Figure 1. Problems with using a fixed time budget and a metric of records sorted per Joule. The dramatic drop in efficiency at the transition from one-pass to two-pass sorts (here, at 15 million records) creates an incentive to sleep for some, or even most, of the time budget. The (N lg N) complexity of sort causes the slow drop-off in efficiency for large data sets at the rightmost part of the graph and creates a similar problem. (The graph plots the JouleSort score, in records per Joule, from 0 to 18,000 against the number of records sorted, from 10^5 to 10^10.)

The first problem, then, is the disincentive to continue sorting beyond the largest one-pass sort. With a budget of one minute, this particular machine would achieve its best records-sorted-per-Joule rating if it sorted 15 million records, which takes 10 seconds, and went into a low-power sleep mode for the remaining 50 seconds. In the extreme case, a system optimized for this benchmark could spend most of the benchmark’s duration in sleep mode—thus voiding the goal of measuring a utilized system’s efficiency.

The second problem is the (N lg N) algorithmic complexity of sort, which causes the downward trend in efficiency for large data sets. While constant factors initially obscure this complexity, once the sort becomes CPU-bound, the number of records sorted per Joule begins to decrease because the execution time now increases superlinearly with the number of records.

In light of these problems with a fixed time budget and fixed energy budget, we settled on using a fixed input size. This decision necessitates multiple benchmark classes, similar to the TPC-H benchmark, since different workload sizes are appropriate to different system classes. The JouleSort classes are 100 million records (about 10 Gbytes), 1 billion records (about 100 Gbytes), and 10 billion records (about 1 Tbyte). The metric of comparison then becomes the minimum energy or records sorted per Joule, which are equivalent for a fixed workload size.

We prefer the latter metric because it highlights efficiency more clearly and allows rough comparisons across different benchmark classes, with the caveats we have described. We do anticipate that the benchmark classes will change as systems become more capable. However, since sort performance is improving more slowly than Moore’s law, we expect the current classes to be relevant for at least five years. Therefore, given our criteria, the fixed input size offers the most reasonable option.

Energy measurement

While we can borrow many of the benchmark rules from the existing sort benchmarks, energy measurement requires additional guidelines. The most important areas to consider are the boundaries of the system to be measured, constraints on the ambient environment, and acceptable methods of measuring power consumption.

The energy consumed to power the physical system executing the sort is measured from the wall outlet. This approach accounts for power supply inefficiencies in converting from AC to DC power, which can be significant.1 If a component remains unused in the sort and cannot be physically removed from the system,
we include its power consumption in the measurement.

The benchmark accounts for the energy consumed by elements of the cooling infrastructure, such as fans, that physically connect to the hardware. While air conditioners, blowers, and other cooling devices consume significant amounts of energy in data centers, it would be unreasonable to include them for all but the largest sorting systems. We do specify that the ambient temperature at the system’s inlets be maintained at between 20° to 25° C—typical for data center environments.

Finally, energy consumption should be measured as the product of the wall clock time used for the sort and the average power over the sort’s execution. The execution time will be measured as the existing sort benchmarks specify. The easiest way to measure the power is to plug the system into a digital power meter, which then plugs into the wall; the SPEC Power committee and the Energy Star guidelines have jointly proposed minimum power meter requirements,6,7 which we adopt for JouleSort as well. Finally, we define two benchmark categories: Daytona, for commercially supported hardware and software, and Indy, which is unconstrained. Table 3 summarizes the final benchmark definition.

Table 3. Summary of JouleSort benchmark definitions.

Workload: External sort.
Benchmark classes: 10^8 records (10 Gbytes), 10^9 records (100 Gbytes), 10^10 records (1 Tbyte).
Benchmark categories: Daytona = commercially supported hardware and software; Indy = “no holds barred” implementations.
Metric: Energy to sort a fixed number of records (records sorted per Joule).
Energy measurement: Measure power at the wall, subject to EPA power meter guidelines.
Environment: Maintain ambient temperature of 20°-25° C.

JOULESORT BENCHMARK RESULTS

Using this benchmark, we evaluated energy efficiency for a variety of computer systems. We first estimated the energy efficiency of previous Sort Benchmark winners and then experimentally evaluated different systems with the JouleSort benchmark.

Energy efficiency of previous sort benchmark winners

First, to understand historical trends in energy efficiency, we retrospectively applied our benchmark to previous sort benchmark winners over the past decade, computing their scores in records sorted per Joule. Since there are no power measurements for these systems, we estimated the power consumption based on the benchmark winners’ posted reports on the Sort Benchmark Web site, which include both hardware configuration information and performance data. The estimation methodology relies on the fact that these historical winners have used desktop- and server-class components that should be running at or near peak power for most of the sort. Therefore, we can approximate component power consumption as constant over the sort’s length.

We validated our estimation methodology on single-node desktop- and server-class systems, for which the estimates were accurate within 5 to 25 percent—sufficiently accurate to draw high-level conclusions. The historical data, shown in Figure 2, supports a few observations.

Figure 2. Estimated energy efficiency, in records sorted per Joule, of historical Sort Benchmark winners. The Daytona category is for commercially supported sorts, while the Indy category has no such restrictions. The pink arrow shows the energy efficiency trend for cost-efficient sorts, which is improving at a rate of 25 percent per year. The blue arrow shows the trend for performance-oriented sorts, whose energy efficiency is improving at 13 percent per year. Both of these rates fall well below the rates of improvement in performance and cost performance. (The graph plots scores from 0 to 3,500 records sorted per Joule for PennySort, MinuteSort, TerabyteSort, and Datamation winners, in both the Daytona and Indy categories, by year from 1996 to 2008.)

First, the PennySort winners tend to be the most energy-efficient systems, for the simple reason that PennySort is the only benchmark to weigh performance against a resource constraint. While low cost and low power consumption do not always correlate, both metrics tend to encourage minimizing the number of components and using lower-performance components within a class.
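A minimal sketch of the estimation methodology described above might look as follows. The component power figures, record count, and elapsed time are hypothetical stand-ins, not values from the historical reports.

# Minimal sketch of the estimation methodology described above: treat each
# component of a historical sort benchmark winner as drawing roughly its
# peak power for the whole run, then divide records sorted by the inferred
# energy. All component counts and power figures below are hypothetical.
component_power_w = {"cpu": 60.0, "disk": 12.0, "memory_dimm": 5.0}

def estimated_joulesort_score(records, elapsed_s, cpus, disks, dimms):
    """Records sorted per Joule under a constant-power approximation."""
    power_w = (cpus * component_power_w["cpu"]
               + disks * component_power_w["disk"]
               + dimms * component_power_w["memory_dimm"])
    energy_j = power_w * elapsed_s           # E = P * t
    return records / energy_j

# A hypothetical entry: 30 million records sorted in 1,000 seconds on a
# single-CPU, four-disk, two-DIMM machine.
print(estimated_joulesort_score(30_000_000, 1_000.0, cpus=1, disks=4, dimms=2))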
Second, comparing this graph to the published performance records shows that the energy-efficiency scores of sort benchmark winners have not improved at nearly the same rate as performance or price-performance scores. The PennySort winners have improved in both performance and cost efficiency at rates greater than 50 percent per year. Their energy efficiency, on the other hand, has improved by just 25 percent per year, most of which came in the past two years.

The winners of the performance sorts (MinuteSort, TerabyteSort, and Datamation) have improved their performance by 38 percent per year, but have improved energy efficiency by only 13 percent per year. It remains unclear whether these sort benchmark contest winners were the most energy-efficient systems of their time, which suggests the need for a benchmark to track energy efficiency trends.

Current-system and custom-configuration energy efficiency

We ran the JouleSort benchmark on a variety of systems, including off-the-shelf machines representing major system classes, as well as specialized sorting systems we created from commodity components. Table 4 summarizes these systems. Since we focus chiefly on comparing hardware configurations, we use Ordinal Technologies’ NSort software for all our experiments.

Table 4. Systems for which the JouleSort rating was experimentally measured.

Laptop — A modern laptop with an Intel Core 2 Duo processor and 3 Gbytes of RAM.
Blade-wall — A single low-power blade plus the full wall power of its enclosure (designed for 16 blades).
Blade-amortized — A single low-power blade plus its proportionate share of the enclosure power.
Standard server — A standard server with Intel Xeon processor, 2 Gbytes of RAM, and 2 hard disks.
Fileserver — A fileserver with 2 disk trays containing 6 disks per tray.
CoolSort — A desktop with a high-end mobile processor, 2 Gbytes of RAM, and 13 SATA laptop disks.
Gumstix — An ultra-low-power system used in embedded devices.
Soekris — A board typically used for networking applications.
VIA-laptop — A VIA picoITX multimedia machine with laptop hard disks.
VIA-flash — A VIA picoITX multimedia machine with flash drives.

The commodity machines span several classes of systems: a laptop, low-power blade, standard server, and fileserver. In all systems but the fileserver, the CPU is underutilized in the sort because I/O is the bottleneck; CPU and I/O utilizations balance in the file server. We give two measurements for the blade because the wall power measures the entire enclosure, which is designed to deliver power to 15 blades and is thus both overprovisioned and inefficient at this low load. We therefore include both the wall power of the enclosure—blade-wall—and a more realistic calculation of the power consumption for the blade itself, plus a proportionate share of the enclosure overhead, which we call blade-amortized.

The laptop and the fileserver proved to be the two most efficient “off-the-shelf” systems by far—both have energy efficiency similar to the most efficient historical system. The file server’s high energy efficiency is not surprising because the CPU and I/O both operate at peak utilization, which corresponds to peak energy efficiency for today’s equipment. The laptop, however, shows high energy efficiency even though its CPU is drastically underutilized. These results suggest that a benchmark-winning JouleSort machine could be constructed by creating a balanced sorting machine out of mobile-class components.

Based on these insights, we identified two approaches to custom-assembled machines. The first builds a machine from mobile-class components and attempts to maximize performance. The second tries to minimize power while still designing a machine with reasonable performance. Both approaches lead to energy efficiencies more than 2.5 times greater than in previous systems.

The former approach led to the design of the CoolSort machine. CoolSort uses a high-end mobile CPU connected to 13 SATA laptop disks over two PCI-Express interfaces. The laptop disks use less than one-fifth of the power of server-class disks, while providing about one-half the bandwidth. At 13 disks, the CPU is fully utilized during the input pass of the sort, and the motherboard and disk controllers cannot provide any additional I/O bandwidth. In the 10-Gbyte and 100-Gbyte categories, CoolSort’s scores of approximately 11,500 records sorted per Joule are more than three times better than those of any previously measured or estimated systems.

Since these notebook-class components have proven more energy efficient than their desktop- or server-class counterparts, it makes sense to ask whether a system with even lower-power components could be more energy efficient than CoolSort. We examined three embedded-class systems: a Gumstix device; a Soekris machine, typically found in routers and networking equipment; and a Via picoITX-based machine, typically used for embedded multimedia applications. Figure 3 shows the JouleSort results of all our measured systems.

Vendors use the smallest and lowest power of these systems, the Gumstix, in a variety of embedded devices.
Our version uses a 600-MHz ARM processor, 128 Mbytes of memory, and an 8-Gbyte CompactFlash card. Its power consumption is a mere 2 W, but the I/O bandwidth is low; a 100-Mbyte in-memory sort takes 137 seconds, sorting 3,650 records per Joule at a bandwidth of 730 Kbps. For a 2-pass, 1-Gbyte sort, the energy efficiency would probably drop to about 1,820 records per Joule.

Figure 3. Measured JouleSort scores of commodity and custom machines. The commodity machines, marked with dashes, performed less well than the custom systems. (The graph plots records sorted per Joule, from 0 to 18,000, against records sorted, from 10^5 to 10^11, for the Gumstix, Soekris, VIA-laptop, VIA-flash, CoolSort, laptop, fileserver, blade-wall, blade-amortized, and standard server configurations.)

Moving up in power and performance, the Soekris board we used, designed for networking equipment, contains a 266-MHz AMD Geode processor, 256 Mbytes of memory, and an 8-Gbyte CompactFlash card. The power used during sort is 6 W, three times that of the Gumstix, but the sort bandwidth is a much higher 3.7 Mbps, yielding 5,945 records sorted per Joule for a 1-Gbyte sort. We determined that the I/O interface caused the bottleneck, not the processor or I/O device itself.

The final system, the Via picoITX, has a 1-GHz processor and 1 Gbyte of DDR2 memory. We tried both laptop disks and flash as I/O devices; the flash used less power and provided higher bandwidth. For a 1-Gbyte sort, the flash-based version consumed 15 W and sorted 10,548 records per Joule—a number close to that of CoolSort’s two-pass sorts. Although laptop disks are theoretically faster, the limitations of the board allowed for fewer laptop disks than flash disks, and thus the flash configuration gave more total I/O bandwidth.

The CoolSort and VIA machines improve upon the previous year’s efficiency by more than 250 percent, a marked departure from the 12 to 25 percent yearly improvement rates over the past decade. Thus, creating benchmarks helps to recognize and drive improvements in energy efficiency.

INSIGHTS AND FUTURE WORK

The highest-scoring JouleSort machines provide several insights into system design for energy efficiency. In the CoolSort machine, we chose components for their power-performance tradeoffs and connected them with high-performance interfaces. While CoolSort’s mobile processor combined with 13 laptop disks offers an extreme example, it does highlight the promise of designing reasonably well-performing servers from mobile components. On the other hand, the lower-power machines suffered because of the limited performance of their I/O interfaces, rather than the flash devices or CPU. Integration of flash memory closer to the CPU could create more energy-efficient systems.

Second, JouleSort continues the Sort Benchmark’s tradition of identifying promising new technologies. The VIA system demonstrates the energy efficiency advantages of flash storage over traditional disks. GPUTeraSort’s success among the historical sort benchmark winners shows that the high performance of GPUs comes at a relatively small energy cost, although it is unclear whether this will continue to hold as GPUs grow ever more power hungry. Finally, because sort is a highly parallelizable algorithm, we speculate that multicores will be excellent processors in energy-efficient sorting systems.

Although JouleSort addresses a computer system’s energy efficiency, energy is just one piece of the system’s total cost of ownership (TCO). From the system purchaser’s viewpoint, a TCO-Sort would be the most desirable sort benchmark; however, the TCO components vary widely from user to user. Combining JouleSort and PennySort to benchmark the costs of purchasing and powering a system is a possible first step. For the high-efficiency machines we studied, the VIA achieves a JouleSort score comparable to CoolSort, at a much lower price: $1,158 versus $3,032. This result highlights the potential of flash as a cost-effective technology for achieving high energy efficiency.

An emerging area of concern is the scaledown efficiency of components and systems—that is, their ability to reduce power consumption in response to low utilization.11 Traditionally, components have consumed well over half their peak power, even when underutilized or idle. Manufacturers are starting to address this inefficiency. JouleSort captures scaledown efficiency to a small extent, since CPU and I/O will not be perfectly balanced during both sort phases, but it does not necessarily assess efficiency at low utilization.

Finally, JouleSort can be extended to include metrics of importance in the data center. The benchmark’s current version does not account for losses in power delivery at
the data center or rack level. An appropriate benchmark
in that setting might be an Exergy JouleSort where the
metric of interest is records sorted per Joule of exergy8
expended.
As concerns about enterprise power consumption
continue to increase, we need metrics that assess
and improve energy efficiency. The JouleSort
benchmark can help assess improvements in end-to-end,
system-level energy efficiency. This addition to the family of sort benchmarks provides a simple and holistic
way to chart trends and identify promising new technologies. The most energy-efficient sorting systems use
a variety of emerging technologies, including low-power
mobile components, and flash-based storage. ■
References
1. C.D. Patel and P. Ranganathan, “Enterprise Power and Cooling: A Chip-to-Datacenter Perspective,” tutorial, Proc. Int’l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS 2006), ACM Press, 2006, pp. 1-101.
2. S. Rivoire et al., “JouleSort: A Balanced Energy-Efficiency Benchmark,” Proc. SIGMOD Conf., ACM Press, 2007, pp. 365-376.
3. R. Gonzalez and M. Horowitz, “Energy Dissipation in General-Purpose Microprocessors,” IEEE J. Solid-State Circuits, Sept. 1996, pp. 1277-1284.
4. Embedded Microprocessor Benchmark Consortium (EEMBC), “EnergyBench v 1.0 Power/Energy Benchmarks”; www.eembc.org/benchmark/power_sl.asp.
5. Sun Microsystems, “SWaP (Space, Watts, and Performance) Metric”; www.sun.com/servers/coolthreads/swap.
6. US Environmental Protection Agency, “ENERGY STAR Program Requirements for Computers: Version 4.0”; www.energystar.gov/ia/partners/prod_development/revisions/downloads/computer/Computer_Spec_Final.pdf.
7. Standard Performance Evaluation Corporation (SPEC), “SPEC Power and Performance Committee”; www.spec.org/specpower.
8. C.D. Patel, “A Vision of Energy-Aware Computing from Chips to Data Centers,” Proc. ISMME, Japan Soc. Mechanical Engineers, 2003, pp. 141-150.
9. The Green Grid, “Green Grid Metrics: Describing Datacenter Power Efficiency,” 2007; www.thegreengrid.org/gg_content/Green_Grid_Metrics_WP.pdf.
10. N.K. Govindaraju et al., “GPUTeraSort: High Performance Graphics Coprocessor Sorting for Large Database Management,” Proc. SIGMOD Conf., ACM Press, 2006, pp. 325-336.
11. R. Mayo and P. Ranganathan, “Energy Consumption in Mobile Devices: Why Future Systems Need Requirements-Aware Energy Scale-Down,” Special Issue on Power Management, LNCS, Springer, 2003, pp. 26-40.
Suzanne Rivoire is a PhD candidate in the Department of Electrical Engineering, Stanford University. Her research interests include energy-efficient system design, data center power management, and data-parallel architectures. Rivoire received an MS in electrical engineering from Stanford University. Contact her at rivoire@stanford.edu.

Mehul A. Shah is a research scientist in the Storage Systems Department at Hewlett-Packard Laboratories. His research interests include energy efficiency of computer systems, database systems, distributed systems, and long-term digital preservation. Shah received a PhD in computer science from the University of California, Berkeley. Contact him at mehul.shah@hp.com.

Parthasarathy Ranganathan is a principal research scientist at HP Laboratories. His research interests include low-power design, system architecture, and parallel computing. Ranganathan received a PhD in electrical and computer engineering from Rice University. Contact him at partha.ranganathan@hp.com.

Christos Kozyrakis is an assistant professor of electrical engineering and computer science at Stanford University. His research interests include transactional memory, architectural support for security, and power management techniques. Kozyrakis received a PhD in computer science from the University of California, Berkeley. Contact him at christos@ee.stanford.edu.

Justin Meza is an undergraduate in computer science at the University of California, Los Angeles. His research interests include open source software, Web standards, electronics, and programming. Contact him at justin.meza@ucla.edu.
COVER FEATURE
The Green500 List:
Encouraging Sustainable
Supercomputing
Wu-chun Feng and Kirk W. Cameron
Virginia Tech
The performance-at-any-cost design mentality ignores supercomputers’ excessive power
consumption and need for heat dissipation and will ultimately limit their performance.
Without fundamental change in the design of supercomputing systems, the performance
advances common over the past two decades won’t continue.
Although there’s now been a 10,000-fold
increase since 1992 in the performance of
supercomputers running parallel scientific
applications, performance per watt has only
improved 300-fold and performance per
square foot only 65-fold. In response to the lagging
power and space-efficiency improvements, researchers
have had to design and construct new machine rooms,
and in some cases, entirely new buildings.
Compute nodes’ exponentially increasing power
requirements are a primary driver behind this less efficient use of power and space. In fact, the top supercomputers’ peak power consumption has been on the
rise over the past 15 years, as Figure 1 shows.
Today, several of the most powerful supercomputers
on the TOP500 List (www.top500.org) require up to 10
megawatts of peak power—enough to sustain a city of
40,000. And even though IBM Blue Gene/L, the world’s
fastest machine, was custom-built with low-power components, the system still consumes several megawatts of
power. At anywhere from $200,000 to $1.2 million per
megawatt, per year, these are hardly low-cost machines.
THE ENERGY CRISIS IN SUPERCOMPUTING
Power is a disruptive technology that requires us to
rethink supercomputer design. As a supercomputer’s
nodes consume and dissipate more power, they must be
spaced out and aggressively cooled. Without exotic cooling facilities, overheating makes traditional supercomputers too unreliable for application scientists to use.
Unfortunately, building exotic cooling facilities can cost
as much as the supercomputer itself, and operating and
maintaining the facilities costs even more.
As “The Energy-Efficient Green Destiny” sidebar
details, the low-power supercomputer that we developed
was extremely reliable, with no unscheduled downtime
in its two-year lifespan, despite residing in a dusty warehouse without cooling, humidification, or air filtration.
The hourly cost of such downtime ranges from $90,000
for a catalog sales operation to nearly $6.5 million for a
brokerage operation, according to Contingency Planning
Research’s 2001 cost-of-downtime survey.
There’s still no guarantee that the supercomputer
won’t fail, as Table 1 illustrates. Total cost of ownership
now exceeds initial acquisition costs.
Performance at any cost
The performance-at-any-cost supercomputer design
paradigm is no longer feasible. Clearly, without significant change in design, the performance gains of the
past two decades won’t continue. Unfortunately, performance-only metrics don’t capture improvements in
power efficiency. Nonetheless, performance-only metrics derived from the Linpack benchmarks and the
Standard Performance Evaluation Corp. (SPEC) code
suite have significantly influenced the design of modern
high-performance systems, including servers and supercomputers.
Figure 1. Rising power requirements. Peak power consumption of the top supercomputers has steadily increased over the past 15 years. (The original figure charts the peak power of leading supercomputers from 1993 to 2009, including the TMC CM-5, Fujitsu Numerical Wind Tunnel, Intel ASCI Red, IBM SP ASCI White, the Earth Simulator, and IBM Blue Gene/L, against reference points such as a 15-kW residential air conditioner, a 1,374-kW commercial data center, a 10,000-kW high-speed electric train, and a 300,000-kW small power plant.)
The Energy-Efficient Green Destiny
As a first step toward reliable and available energy-efficient supercomputing, in 2002 we built a low-power supercomputer at Los Alamos National
Laboratory. Dubbed Green Destiny, the 240-processor
supercomputer took up 5 square feet (the size of a
standard computer rack) and had a 3.2-kilowatt power
budget (the equivalent of two hairdryers) when booted
diskless.1,2 Its 101-gigaflops Linpack rating (equivalent
to a 256-processor SGI Origin 2000 supercomputer
or a Cray T3D MC1024-8) would have placed it at
no. 393 on the TOP500 List at that time.
Garnering widespread media attention, Green
Destiny delivered reliable supercomputing with no
unscheduled downtime in its two-year lifetime. It
endured sitting in a dusty warehouse at temperatures
of 85-90 degrees Fahrenheit (29-32 degrees Celsius)
and an altitude of 7,400 feet (2,256 meters). Furthermore, it did so without air-conditioning, humidification
control, air filtration, or ventilation.
Yet despite Green Destiny’s accomplishments, not
everyone was convinced of its potential. Comments
ranged from Green Destiny’s being so low power that
it ran just as fast when it was unplugged to the notion
that no one in high-performance computing would
ever care about power and cooling.
However, in the past year, we’ve seen a dramatic
attitude shift with respect to power and energy, particularly in light of how quickly supercomputers’ thermal
power envelopes have increased in size, thus adversely
impacting the systems’ power and cooling costs, reliability, and availability.
The laboratory’s Biosciences Division bought a
Green Destiny replica about six months after Green
Destiny’s debut. In 2006, we donated Green Destiny
to the division so it could run a parallel bioinformatics
code called mpiBLAST. Both clusters are run in the
same environment, yet half of the nodes are inoperable on the replica, which uses higher-powered processors. Hence, although the original Green Destiny was
0.150 gigahertz slower in clock speed, its productivity
in answers per month was much better than the faster
but often inoperable replica.
Green Destiny is no longer used for computing, and
resides in the Computer History Museum in Mountain
View, California.
References
1. W. Feng, “Making a Case for Efficient Supercomputing,”
ACM Queue, Oct. 2003, pp. 54-64.
2. W. Feng, “The Importance of Being Low Power in High-Performance Computing,” Cyberinfrastructure Technology
Watch, Aug. 2005, pp. 12-21.
Table 1. Reliability and availability of large-scale computing systems.

ASC Q — 8,192 processors. Mean time between interrupts: 6.5 hours; 114 unplanned outages/month. Outage sources: storage, CPU, memory.
ASC White — 8,192 processors. Mean time between failures: 5 hours (2001) and 40 hours (2003). Outage sources: storage, CPU, third-party hardware.
PSC Lemieux — 3,016 processors. Mean time between interrupts: 9.7 hours. Availability: 98.33 percent.
Google (estimate) — 450,000 processors. 600 reboots/day; 2-3 percent replacement/year. Outage sources: storage and memory. Availability: ~100 percent.
Source: D.A. Reed
Developing new metrics
Performance-only metrics are likely to remain valuable for comparing existing systems prior to acquisition
and helping drive system design. Nonetheless, we need
new metrics that capture design differences in energy
efficiency. For example, two hypothetical high-performance machines could both achieve 100 teraflops running Linpack and secure a high equivalent ranking on
the TOP500 List. But enable smart-power-management
hardware or software1,2 on one machine that can sustain performance and reduce energy consumption by 10
percent, and the TOP500 rankings remain the same.
Unfortunately, metric development is fraught with
technical and political challenges. On the technical side,
operators must perceive the metric and its associated
benchmarks as representative of the workloads typically
running on the production system. On the political side,
metrics and benchmarks need strong community buy-in.
THE GREEN500 LIST
We’ve been working to improve awareness of energy-efficient supercomputer (and data center) design since
the turn of the century. After interaction with government agencies, vendors, and academics, we identified a
need for metrics to fairly evaluate large systems that run
scientific production codes. We considered a number of
methodologies for use in ranking supercomputer efficiency. To promote community buy-in, the initial
Green500 List used a single metric and a widely accepted workload, with the intent of eventually extending the Green500 methodology to include rankings for a suite of parallel scientific applications.
The Green500 List ranks supercomputers based on
the amount of power needed to complete a fixed amount
of work. This effort is focused on data-center-sized
deployments used primarily for scientific production
codes. In contrast, the SPECPower subcommittee of
SPEC is developing power-performance efficiency benchmarks for servers running commercial production codes. The diverse types of evaluations that efforts like the Green500 and
SPECPower (www.spec.org/specpower)
provide will give users more choice in
determining efficiency metrics for their systems and applications.
Measuring efficiency
In the Green500 effort, we treat both
performance (speed) and power consumption as first-class design constraints for
supercomputer deployments.
Speed and workload. The supercomputing community already accepts the flops
metric for the Linpack benchmark, which
the TOP500 List uses. Although TOP500
principals acknowledge that Linpack isn’t
the be-all or end-all benchmark for high-performance
computing (HPC), it continues to prevail despite the
emergence of other benchmarks. As other benchmark
suites gain acceptance, most notably the SPEChpc3 and
HPC Challenge benchmarks,4 we plan to extend our
Green500 List methodology as mentioned. For now,
since the HPC community seems to identify with the
notion of a clearly articulated and easily understood single number that indicates a machine’s prowess, we opt
to use floating-point operations per second as a speed
metric for supercomputer performance and the Linpack
benchmark as a scalable workload.
ED^n metric. There are many possibilities for performance-efficiency metrics, including circuit design’s ED^n metric—with E standing for the energy a system uses while running a benchmark, D for the time to complete that same benchmark,5-8 and n a weight for the delay term. However, the ED^n metrics are biased when applied to supercomputers, particularly as n increases. For
example, with large values for n, the delay term dominates so that very small changes in execution time
impact the metric dramatically and render changes in E
undetectable in comparisons.
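A small numeric illustration of this bias, with invented energy and delay values, shows how the delay term takes over as n grows.

# Hypothetical illustration of the bias described above: as n grows, the
# delay term in E*D^n dominates, so modest execution-time differences hide
# even large energy differences (lower E*D^n is better).
def ed_n(energy_j, delay_s, n):
    return energy_j * delay_s ** n

for n in (1, 3, 10):
    reference = ed_n(1.0e9, 100.0, n)
    # Candidate uses 10 percent less energy but takes 3 percent longer.
    candidate = ed_n(0.9e9, 103.0, n)
    print(n, candidate / reference)
# n = 1: ~0.93 (candidate looks better); n = 3: ~0.98; n = 10: ~1.21, where
# the 3 percent slowdown now outweighs the 10 percent energy saving.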
Flops per watt. For the Green500 List, we opted to
use flops per watt for power efficiency. However, this
metric might be biased toward smaller supercomputing
systems. A supercomputer’s wattage will scale (at least)
linearly with the number of compute nodes while the
flops performance will scale (at most) linearly for embarrassingly parallel problems and sublinearly for all other
problems. This implies smaller systems would have better ratings on such a scale.
Nonetheless, flops per watt is easy to measure and has
traction in the scientific community. Furthermore, we
can reduce the bias toward small systems by ranking
systems that first achieve a minimum performance
rating. We simply set a minimum flops threshold for
entry into the Green500 List and allow bigger super-
computers to rerun their Linpack benchmark, if desired,
to meet this minimum threshold and obtain the corresponding power consumption during a benchmark’s
rerun. That is, the Green500 List ranks supercomputers based on the amount of power needed to complete
a fixed amount of work at a rate greater than or equal
to the minimum flops threshold.
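A minimal sketch of this ranking rule, using invented machines and an invented threshold, might look as follows.

# Sketch of the ranking rule just described: keep only machines whose
# Linpack rating meets the minimum flops threshold, then order them by
# flops per watt. All entries below are hypothetical.
MIN_GFLOPS = 4_000.0   # e.g., the no. 500 entry on the latest TOP500 List

machines = [
    {"name": "cluster-a", "linpack_gflops": 12_000.0, "power_kw": 350.0},
    {"name": "cluster-b", "linpack_gflops": 45_000.0, "power_kw": 1_900.0},
    {"name": "cluster-c", "linpack_gflops": 3_500.0,  "power_kw": 60.0},  # below threshold
]

def mflops_per_watt(m):
    # Gflops divided by kW equals Mflops per watt.
    return m["linpack_gflops"] / m["power_kw"]

eligible = [m for m in machines if m["linpack_gflops"] >= MIN_GFLOPS]
green500 = sorted(eligible, key=mflops_per_watt, reverse=True)
for rank, m in enumerate(green500, start=1):
    print(rank, m["name"], round(mflops_per_watt(m), 2))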
Measuring power consumption
Even after choosing the benchmark and power-efficiency metric, issues surrounding the selection of the flops-per-watt metric for a given supercomputer remained
unresolved. First, starting with the metric’s numerator,
what will be the minimum flops threshold for entry into
the Green500 List? We intend to use the flops rating that
the no. 500 supercomputer achieves on the latest TOP500
List as the Green500 List’s minimum flops threshold.
With the numerator addressed, this leaves wattage as
the denominator. Surprisingly to some, the denominator
for flops per watt might be more difficult to determine,
since there are many permutations for what we could
measure and report. For example, we could
• measure the entire supercomputer’s power consumption,
• measure a single supercomputer node and extrapolate it to the entire supercomputer, or
• use the manufacturers’ advertised peak power numbers (as we used in Figure 1).
Measuring wattage for a supercomputer the size of a
basketball court is difficult. However, using advertised
peak power numbers could result in overinflation of the
power numbers. We suggest measuring a single compute
node’s power consumption and multiplying by the number of compute nodes (loosely defining a node as an
encased chassis, whether the chassis has the form factor
of a standard 1U server or an entire rack).
Power meters. To measure power consumption, we
propose using a power meter that can sample consumption at granularities of one second or less. The digital
meters range in capability from the commodity Watts
Up? Pro (www.wattsupmeters.com) to the industrial-strength Yokogawa WT210/WT230 (http://yokogawa.com/tm/wtpz/wt210/tm-wt210_01.htm). Figure 2 shows
a high-level diagram of how a digital power meter measures a given system under test (single compute node)
via a common power strip and logs the measurements to
a profiling computer.
Duration. We also need to address how long to measure power and what we should record and report.
Given the meters’ recording capabilities, we suggest
measuring and recording power consumption for the
duration of the Linpack run and using the average power
consumption over the entire run. Coupling average
power consumption with the Linpack run’s execution
time adds inferred overall energy consumption, where energy is average power multiplied by time.
Figure 2. Power-measurement infrastructure. A digital power meter measures a system under test via a common power strip and logs the measurements to a profiling computer.
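As a small illustration of that procedure, the sketch below averages hypothetical one-second meter samples over a run and infers energy as average power times elapsed time; the sample values are invented.

# Average one-second power samples over a (very short) hypothetical
# Linpack run and infer energy as average power multiplied by time.
samples_w = [212.0, 230.5, 228.9, 231.2, 229.7]   # one reading per second
avg_power_w = sum(samples_w) / len(samples_w)
elapsed_s = len(samples_w)                        # one-second sampling
energy_j = avg_power_w * elapsed_s
print(round(avg_power_w, 1), "W average,", round(energy_j, 1), "J")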
Cooling. Finally, we considered whether to include
cooling-facility power consumption in the measurement.
We decided against inclusion because the Green500 List
is intended to measure the supercomputer's power efficiency rather than that of its cooling system (cooling systems vary widely in efficiency). Even if we considered cooling for the
supercomputer under test, it would be difficult to break
out and measure cooling’s specific contribution for one
supercomputer, given that cooling facilities are designed
to support all the machines in a given machine room.
TOP500 VERSUS GREEN500
Table 2 presents the Green500 and TOP500 rankings
of eight supercomputers, as well as their flops ratings
and their peak power usage. This list also shows the
results of using the flops-per-watt metric for these supercomputers with their peak performance number (for
peak power efficiency) and their Linpack performance
number (for actual power efficiency).
As mentioned, using peak power numbers for comparisons isn't optimal. Nonetheless, relative comparisons based on peak power are useful for gauging
power-efficiency progress. Beginning with the November
2007 Green500 List, we’ll use metered measurements
in rankings whenever available. As the list matures, we
anticipate metering and verifying all measurements.
Various presentations, Web sites, and magazine and
newspaper articles provide the source for these peak
power numbers. For the IBM Blue Gene/L supercomputer at Lawrence Livermore National Laboratory
(LLNL), the TOP500 wiki reports 1.5 MW as its peak
power consumption. LLNL’s Web site reports that 7.5
MW is needed to power and cool ASC Purple, while
Eurekalert estimates it uses 8 MW.
According to LLNL, for every watt of power the system consumes, 0.7 watts of power is required to cool it.
Hence, the power required to merely run ASC Purple
would be between 4.4 and 4.7 MW, which matches the
4.5 MW number provided in a presentation at a Blue
Gene/L workshop.
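A quick back-of-the-envelope check of those figures, assuming the 0.7 W of cooling per watt of compute power cited above:

# Split the reported power-plus-cooling totals for ASC Purple into the
# compute portion, assuming 0.7 W of cooling per watt consumed.
for total_mw in (7.5, 8.0):
    compute_mw = total_mw / 1.7        # total = compute * (1 + 0.7)
    print(round(compute_mw, 1), "MW")  # roughly 4.4 and 4.7 MW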
Table 2. June 2007 Green500 and TOP500 rankings.
Green500 rank (power efficiency) | Supercomputer | Peak performance (Gflops) | Linpack performance (Gflops) | Peak power (kW) | Peak power efficiency (Mflops/W) | Green500 rank (peak power efficiency) | Power efficiency (Mflops/W) | TOP500 rank
1 | Blue Gene/L | 367,000 | 280,600 | 1,200 | 305.83 | 1 | 233.83 | 1
2 | MareNostrum | 94,208 | 62,630 | 1,344 | 70.10 | 2 | 46.60 | 9
3 | Jaguar | 119,350 | 101,700 | 2,087 | 57.19 | 4 | 48.73 | 2
4 | System X | 20,240 | 12,250 | 310 | 65.29 | 3 | 39.52 | 71
5 | Columbia | 60,960 | 51,870 | 2,000 | 30.48 | 5 | 25.94 | 13
6 | ASC Purple | 92,781 | 75,760 | 4,500 | 20.62 | 6 | 16.84 | 6
7 | ASC Q | 20,480 | 13,880 | 2,000 | 10.24 | 7 | 6.94 | 62
8 | Earth Simulator | 40,960 | 35,860 | 7,680 | 5.33 | 8 | 4.67 | 20
Table adapted from a figure provided by NXP Semiconductors.
Jaguar at Oak Ridge National Laboratory is a hybrid system consisting of 56 XT3 cabinets and 68 XT4 cabinets. The peak power consumption of an XT3 cabinet is 14.5 kW while the XT4 cabinet is 18.75 kW, as per Cray datasheets. Thus, the aggregate peak power of Jaguar is about 2 MW.
The 4,800-processor MareNostrum debuted fifth on
the June 2005 TOP500 List with an estimated 630-kW
power budget to run the machine. More recently,
Barcelona Supercomputing Center’s MareNostrum
was upgraded and expanded into a 10,240-processor
BladeCenter JS21 cluster. If we extrapolate from the original MareNostrum's 630-kW power budget, the 10,240-processor MareNostrum would have a power budget of
1.3 MW.
For the Columbia supercomputer at NASA Ames
Research Center, the reported power usage just to run
the system is 2 MW. The thermal design power of
Itanium-2 processors is 130 watts, so it takes 1.33 MW
to run the 10,240 processors in the Columbia.
Therefore, 2 MW seems reasonable if Columbia’s other
components use only 700 kW of power, consistent with
our Itanium-based server’s power profile.
Powering and cooling Japan’s 5,120-processor Earth
Simulator requires 11.9 MW, enough to power a city of
40,000 and a 27,000-student university. The Earth
Simulator configures the 5,120 processors into 640
eight-way nodes, where each eight-way node uses 20
kilovolt-amperes. Assuming a typical power-factor conversion of 0.6, each node then consumes 20 kVA × 0.6 = 12 kW. Thus, power consumption for the entire 640-node Simulator is 640 × 12 kW = 7,680 kW, leaving
4,220 kW for cooling.
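The same estimate in code form, with the 0.6 power factor treated as an assumption:

# Earth Simulator power estimate from the node count, per-node kVA
# rating, and an assumed 0.6 power factor.
nodes = 640
kva_per_node = 20
power_factor = 0.6
kw_per_node = kva_per_node * power_factor     # 12 kW
total_kw = nodes * kw_per_node                # 7,680 kW
cooling_kw = 11_900 - total_kw                # 4,220 kW left for cooling
print(kw_per_node, total_kw, cooling_kw)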
The power budgets for ASC Q and ASC White run at
approximately 2 MW, while System X at Virginia Tech
consumes a paltry 310 kW, as measured directly from
System X’s power distribution units. As Table 2 shows,
despite its large size, Blue Gene/L is the only custom low-power supercomputer among the TOP500. It's routinely
the highest-ranking supercomputer on both the TOP500
and Green500 lists, with a performance-power ratio
that’s up to two orders of magnitude better than the
other supercomputers in Table 2.
The power efficiencies of MareNostrum (semi-commodity) and System X (commodity) are 2.5 times better
than the other supercomputers, and this ranked them
second and fourth on the June 2007 Green500 List, as
shown in Table 2. Interestingly, Apple, IBM, and
Motorola’s commodity PowerPC processor drives both
of these power-efficient supercomputers. On the other
hand, ASC Purple, which ranked sixth on that TOP500
list, is also based on the PowerPC processor, albeit the
Power5, its higher-powered relative. Power5 ultimately
contributes to ASC Purple’s lower power efficiency and
its sixth-place ranking on the 2007 Green500.
OPERATIONAL COSTS AND RELIABILITY
Power consumption has become an increasingly
important issue in HPC. Ignoring power consumption
as a design constraint results in an HPC system with
high operational costs and diminished reliability, which
often translates into lost productivity.
With respect to high operational costs, ASC Purple has
a 7.5-MW appetite (approximately 4.5 MW to power
the system and 3 MW for cooling). With a utility rate of
12 cents per kilowatt-hour, the annual electric bill for this system would run nearly $8 million. If we scaled this architecture to a petaflops machine, powering and cooling the machine would require approximately 75 MW.
The system’s annual power bill could run to $80 million,
assuming energy costs remained the same.
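A rough version of that cost estimate, using the quoted 12-cent utility rate and assuming continuous, full-power operation:

# Annual electricity cost at a flat utility rate, assuming the machine
# draws its full power 24 hours a day, 365 days a year.
def annual_cost_musd(power_mw, usd_per_kwh=0.12):
    return power_mw * 1_000 * 24 * 365 * usd_per_kwh / 1e6

print(round(annual_cost_musd(7.5), 1))   # ~7.9 million dollars per year
print(round(annual_cost_musd(75), 1))    # ~78.8 million dollars per year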
Table 1 shows that the reliability and availability of
large-scale systems, ranging from supercomputers to a
large-scale server farm, is often measured in hours.
Further scaling of such supercomputers and data centers would result in several failures per minute.9 This
diminished reliability results in millions of dollars per
hour in lost productivity.
In light of the above, the HPC community could use an
EnergyGuide sticker, such as the Green Destiny sticker
shown in Figure 3. The community could also use studies showing that annual server power and cooling costs
are approaching annual spending on new machines.
The HPC community needs a Green500 List to rank
supercomputers on speed and power requirements and
to supplement the TOP500 List. Vendors and system
architects worldwide take substantial pride and invest
tremendous effort toward making the biannual TOP500
List. We anticipate that the Green500 List effort will do
the same and encourage the HPC community and operators of Internet data centers to design more power-efficient supercomputers and large-scale data centers. For
the latest Green500 List, visit www.green500.org. ■
Acknowledgments
We thank David Bailey, Jack Dongarra, John Shalf,
and Horst Simon for their support. Intel, IBM, Virginia
Tech, the US Department of Energy, and the NSF (CNS
0720750 and 0615276; CCF 0709025 and 0634165)
sponsored portions of this work. We also acknowledge
colleagues who suggested a Green500 List after an April
2005 keynote at the IEEE International Parallel &
Distributed Processing Symposium and a follow-up talk
at North Carolina State University. This article is dedicated to those who lost their lives in the 16 April 2007
tragedy at Virginia Tech.
Figure 3. EnergyGuide sticker for Green Destiny. Such a sticker
could remind those in the HPC community of a computer’s
energy use and hourly operating costs.
References
1. C. Hsu and W. Feng, “A Power-Aware Run-Time System for
High-Performance Computing,” Proc. ACM/IEEE SC Conf.
(SC05), IEEE CS Press, 2005, p. 1.
2. K.W. Cameron et al., “High-Performance, Power-Aware Distributed Computing for Scientific Applications,” Computer,
Nov. 2005, pp. 40-47.
3. M. Mueller, “Overview of SPEC HPC Benchmarks,” BOF presentation, ACM/IEEE SC Conf. (SC06), 2006.
4. J. Dongarra and P. Luszczek, Introduction to the HPC Challenge Benchmark Suite, tech. report, Univ. Tennessee, 2004;
www.cs.utk.edu/~luszczek/pubs/hpcc-challenge-intro.pdf.
5. A. Martin, “Towards an Energy Complexity of Computation,” Information Processing Letters, vol. 77, no. 2-4, 2001,
pp. 181-187.
6. D. Brooks and M. Martonosi, “Dynamically Exploiting Narrow
Width Operands to Improve Processor Power and Performance,”
Proc. 5th Int’l Symp. High-Performance Computer Architecture,
IEEE CS Press, 1999, p. 13.
7. R. Gonzalez and M. Horowitz, “Energy Dissipation in General-Purpose Microprocessors,” IEEE J. Solid-State Circuits,
Sept. 1996, pp. 1277-1284.
8. A. Martin, M. Nyström, and P. Penzes, ET2: A Metric for Time
and Energy Efficiency of Computation, Kluwer Academic
Publishers, 2002.
9. S. Graham, M. Snir, and C. Patterson, eds., Getting Up to
Speed: The Future of Supercomputing, Nat’l Academies Press,
2005.
Wu-chun Feng is an associate professor of computer science and electrical and computer engineering at Virginia
Tech. His research interests are high-performance networking and computing. Feng received a PhD in computer
science from the University of Illinois at Urbana-Champaign. He is a senior member of the IEEE Computer Society. Contact him at feng@cs.vt.edu.
Kirk W. Cameron is an associate professor of computer science at Virginia Tech and director of its Scalable Performance Lab. His research interests are power and
performance in high-performance applications and systems.
Cameron received a PhD in computer science from
Louisiana State University. He is a member of the IEEE
Computer Society. Contact him at cameron@cs.vt.edu.
COVER FEATURE
Life Cycle Aware
Computing: Reusing
Silicon Technology
John Y. Oliver, Cal Poly San Luis Obispo
Rajeevan Amirtharajah and Venkatesh Akella, University of California, Davis
Roland Geyer and Frederic T. Chong, University of California, Santa Barbara
Despite the high costs associated with processor manufacturing, the typical chip is used for
only a fraction of its expected lifetime. Reusing processors would create a "food chain" of
electronic devices that amortizes the energy required to build chips over several computing
generations.
The past decade has seen unprecedented growth
in the number of electronic devices available to
consumers. Many of these devices, from computers to set-top boxes to cell phones, require
sophisticated semiconductors such as CPUs and
memory chips. The economic and environmental costs
of producing these processors for new and continually
upgraded devices are enormous.
Because the semiconductor manufacturing process
uses highly purified silicon, the energy required is quite
high—about 41 megajoules (MJ) for a dynamic random
access memory (DRAM) die with a die size of 1.2 cm2.1
To illustrate the macroeconomic impact of this energy
cost, Japan’s semiconductor industry is expected to consume 1.7 percent of the country’s electricity budget by
2015.2 Approximately 600 kilograms of fossil fuels are
needed to generate enough energy to create a 1-kilogram
semiconductor.3 Furthermore, according to chip consortium Sematech, foundry energy consumption also
continues to increase.4
In terms of environmental impact, 72 grams of toxic
chemicals are used to create a 1.2 cm2 DRAM die. The
semiconductor industry manufactured 28.4 million cm2
of such dies in 2000, which translates to 1.7 million kilograms of hazardous material.2 Due to the increasing number of semiconductor devices manufactured each year,
semiconductor disposal costs are likewise increasing.
Despite these costs, the typical processor is used for
only a fraction of its expected lifetime. While rapid technological advances are quickly making silicon obsolete,
chips could be removed from recycled electronics and
reused for less demanding computing tasks. A processor reuse strategy would create a “food chain” of computing devices that amortizes the energy required to
build processors—particularly low-power, embedded
processors—over several computing generations.
PROCESSOR LIFETIME ENERGY CONSUMPTION
The lifetime energy consumption of a processor or
memory chip can be expressed as the sum of the
• manufacturing energy cost, including the creation of
silicon wafers, the chemical and lithography
processes, and chip assembly and packaging; and
• utilization energy cost.
A comparative analysis of these two components
reveals that the energy required to manufacture a
processor can dominate the energy consumed over
the processor’s lifetime.
Manufacturing energy cost
Semiconductor manufacturing involves many steps,
from crystal growth to dicing to packaging. Total energy
cost can be expressed as Emanufacturing = Edie + Eassembly. Edie
is the energy required to manufacture the die of the
processor or memory chip and includes wafer growth,
epitaxial layering, applying photo resists, etching,
implantation/diffusion, and managing these procedures. Eassembly represents the cost to assemble the chip and includes wafer testing, dicing, bonding, encapsulation, and burn-in testing.
Figure 1. Semiconductor yield and manufacturing energy costs over time. (a) Shrinking the processor increases yield, which (b) decreases manufacturing energy costs over subsequent generations.
Based on this simple formula, the authors of a recent
study1 made several assumptions about the manufacturing energy cost for any CMOS-based semiconductor. First, they assumed that the energy required to
manufacture a 1.2 cm2 processor at any lithographical
level is the same—thus, the energy costs of manufacturing a 1.2 cm2 DRAM die and 1.2 cm2 processor die
are identical. Another assumption is that the manufacturing energy required is proportional to the semiconductor die area (Edie ∝ area/yield), so that a 0.6 cm2
processor requires half as much energy for die manufacture as a 1.2 cm2 die, adjusted for yield. Finally, they
assumed that the assembly energy cost is a constant 5.9
MJ, regardless of the die size. For a 1.2 cm2 DRAM
chip, Emanufacturing = 41 MJ, Edie = 35.1 MJ, and Eassembly =
5.9 MJ.
As part of their manufacturing energy analysis, the
researchers employed the SUSPENS (Stanford University
System Performance Simulator) yield model.5 According
to this model, yield = e^(-D0 × area), with the D0 constant
taken from the International Technology Roadmap for
Semiconductors.6
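The following sketch strings the model's pieces together; the defect-density constant and per-area die energy are placeholders chosen only so the 1.2 cm2 case lands near the 41 MJ figure quoted above, not values taken from the study or the ITRS.

import math

E_ASSEMBLY_MJ = 5.9     # assembly/packaging energy, assumed fixed
E_PER_CM2_MJ = 23.0     # hypothetical die-processing energy per good cm2
D0_PER_CM2 = 0.2        # hypothetical defect-density constant

def die_yield(area_cm2):
    # SUSPENS-style yield model: yield = e^(-D0 * area)
    return math.exp(-D0_PER_CM2 * area_cm2)

def manufacturing_energy_mj(area_cm2):
    # E_manufacturing = E_die + E_assembly, with E_die scaled by die
    # area and divided by yield.
    e_die = E_PER_CM2_MJ * area_cm2 / die_yield(area_cm2)
    return e_die + E_ASSEMBLY_MJ

print(round(manufacturing_energy_mj(1.2), 1))   # ~41 MJ with these constants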
Figure 1a shows the yield curves for four hypothetical
processors over time. Shrinking the processor clearly
increases yield. For example, the yield for a 200 mm2
processor in 2006 is 60 percent; the same processor,
shrunk using 2012 technology, has a yield over 80 percent. The manufacturing cost for subsequent generations of processors thus has the potential to decrease due
to shrinking transistor geometry.
Figure 1b demonstrates the energy required to manufacture a processor with fixed functionality over time.
The energy savings in subsequent years is due to shrinking transistor geometries and yield improvements.
Processors with larger dies have a higher percentage of
energy savings because packaging costs are a smaller
portion of the overall manufacturing cost. Also, in the
extreme case, shrinking a processor might make it pad
limited. The physical dimensions of a pad are unlikely
to shrink far below 60 μm on a side.7
We believe that the projected figures shown in Figure
1b are on the conservative side. The data the researchers
used is from a 4-inch wafer fab, and modern 12-inch
wafers require more energy per unit area to process.4 In
addition, many modern semiconductor processes have
more layers than the process used in the study.
The amount of energy required to manufacture a
processor die is clearly considerable. A 300-mm wafer
uses 2 gigajoules of energy, which is roughly the amount
contained within 200 gallons of gasoline. The good
news is that the total manufacturing energy cost diminishes with every process shrink. Unfortunately, packaging and assembly costs are relatively fixed.
Utilization energy cost
A processor’s utilization energy cost can be determined
by simply multiplying its power consumption by the
time it is operational. For example, the Intel XScale
PX273 consumes 0.77 watts of power in full operation.8
Assuming that an XScale-based PDA is used two hours
per day 365 days per year, the PX273 consumes just over
2 MJ of energy annually.
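That calculation, spelled out:

# Utilization energy for the XScale example above: 0.77 W for two
# hours per day over a year.
power_w = 0.77
hours_per_day = 2
energy_mj_per_year = power_w * hours_per_day * 3600 * 365 / 1e6
print(round(energy_mj_per_year, 2))   # about 2.02 MJ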
One factor that can impact a processor’s power consumption is the manufacturing process technology. A
benefit of shrinking transistor geometry is that circuits’
switching capacitance decreases with each shrink. For
low-end cell phones and other devices with relatively
fixed performance, processor power consumption may
benefit from process shrinks unless leakage current
becomes problematic. Higher amounts of leakage make
processor reuse a more attractive solution than upgrading to a new process technology, as the processors manufactured with older process technologies will have
lower leakage current.
PROCESSOR REUSE
Figure 2a illustrates how processor reuse minimizes the lifetime energy consumption of a processor that uses 1 W of power. The two- and four-year upgrade curves increase every two and four years, depicting the high energy cost of manufacturing the processors. These results are based on the assumption that the processor has a die area of 1.2 cm2, is operated three hours every day, and is dormant (but still leaking) when not in use.
Figure 2. Potential benefits of processor reuse. (a) Upgrading a 1-W processor does not improve the lifetime energy consumption for at least 10 years, making processor reuse an attractive alternative. (b) For processors that use more power—in this case, 20 W—upgrading with newer, more efficient technology makes sense.
Processors with a 1-W rating or less clearly should not be upgraded with new processors to reduce their lifetime energy consumption. On the other hand, as Figure 2b shows, upgrading is a viable option to minimize lifetime energy consumption for a higher-power processor—in this case, a 20-W processor.
To minimize lifetime energy consumption, it makes sense to reuse a processor when it uses 100 kJ of energy per day or less. Assuming that upgrading occurs in three-year cycles and the device containing the processor is used three hours per day, this is roughly equivalent to the energy a 10-W processor consumes. For perspective, 100 kJ of energy is a bit less energy than is contained within a fully charged laptop battery, or about the same amount in 10 cell-phone batteries.
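A quick check of that break-even figure:

# Daily energy of a 10-W processor used three hours per day, for
# comparison with the 100-kJ/day reuse threshold cited above.
power_w = 10
hours_per_day = 3
energy_kj_per_day = power_w * hours_per_day * 3600 / 1_000
print(energy_kj_per_day)   # 108 kJ, roughly the 100-kJ threshold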
PROCESSOR FOOD CHAIN
Mobile device processors are typically used for only a fraction of their designed lifetime. "Computer chips can operate for 80,000 hours, and usually machines are thrown out after 20,000 hours," observed Guardian columnist John Keeble. "However, at the moment, 60 percent of chips cannot be reused because of their specialized functions."9
To facilitate reuse, researchers could standardize embedded processor footprints for a wide range of embedded devices. In addition, instead of reusing a processor in the same device, it could serve a next-generation device with lower performance requirements. Researchers also could apply power-savings techniques like voltage scaling, given the secondary device's lower computational demand and corresponding operational frequency and voltage.
Example: ARM9
To illustrate how a food chain of electronic devices could reuse a processor, consider the ARM9 processor, which is featured in the Alpine Blackbird PMD-B100 and Sell GPS-350A automotive navigation systems. The ARM9 implementation in these systems runs at 266 MHz. Once the navigation system is recycled, the processor can be removed and placed into a mobile phone like the Sony Ericsson P800, which uses a similar ARM9 processor running at 156 MHz. When this phone is recycled, the processor can in turn be put into a Nintendo DS portable game system, which uses an ARM9 running at 77 MHz.
Table 1 compares the lifetime energy consumption of a processor reuse strategy with a strategy that uses new processors in this chain of devices. These results assume that the automotive navigation system is used one hour per day, the mobile phone three hours per day, and the Nintendo DS game system two hours per day, every day for three years, before being recycled.
Note that manufacturing energy constitutes a large portion of the processors' lifetime energy consumption. In addition, the manufacturing energy cost of chips in 2009 and 2012 for the new-processor chain decreases only slightly. Some decrease is expected, as the die size shrinks in each generation, but the decrease is limited by the fixed amount of energy required to assemble the processors and the fact that pad size is unlikely to scale with technology.7 Also noteworthy is that reused processors have a higher utilization cost than new ones. The increase is small, but it could be important for severely power-constrained devices.
This study neglects the energy required to reclaim a processor, but processor reuse has other benefits that counterbalance this, including reduced disposal costs and decreased toxic chemical use. Also, a processor reclamation infrastructure already exists, albeit in a black market fashion.10
Table 1. Lifetime energy consumption: processor reuse versus using new processors.
Year | New processor every 3 years: manufacturing energy cost (MJ) | New processor: utilization energy cost (kJ) | Processor reused every 3 years: manufacturing energy cost (MJ) | Reused processor: utilization energy cost (kJ)
2006 | 6.88 | 36.92 | 6.88 | 36.92
2007 | 0 | 36.92 | 0 | 36.92
2008 | 0 | 36.92 | 0 | 36.92
2009 | 6.40 | 28.87 | 0 | 153.74
2010 | 0 | 28.87 | 0 | 153.74
2011 | 0 | 28.87 | 0 | 153.74
2012 | 6.29 | 4.55 | 0 | 50.59
2013 | 0 | 4.55 | 0 | 50.59
2014 | 0 | 4.55 | 0 | 50.59
Total | 19.57 | 211.02 | 6.88 | 723.75
Lifetime | 19.78 MJ | | 7.60 MJ |
Figure 3 shows the BDTImark performance of a variety of electronic devices. The blue bars indicate devices that commonly use specialized hardware to accelerate processing and thus may have considerably higher requirements than indicated. The opportunities for processor reuse are evident: A processor used in a particular device should be capable of handling the processing required by all devices to the right of it in Figure 3. For example, the processor from a PDA could be reused in an automobile navigation system.
Over time, the range of performance requirements should continue to grow as the functionality of these devices expands. However, given the ever-present need for low-end processing, a food chain of applications will always exist in some form.
Figure 3. BDTImark performance of various electronic devices. A processor used in a particular device should be capable of handling the processing required by all devices to the right of it.
Battery-constrained devices
Because reused processors are manufactured with process technology that is potentially several years older than state of the art, reused processors have higher utilization energy requirements than new ones. Voltage scaling can mitigate this disadvantage. A reused processor that is higher up on the food chain will have a higher peak performance than what is required by a device that is lower on the food chain. Scaling back the frequency, and therefore the voltage of the reused processor, significantly reduces its energy requirements.
In addition, many mobile devices already have adequate battery life. For example, the Nintendo DS game system can run up to 10 hours on a single charge. If the system is used two hours per day, it would have to be recharged once every five days with a new processor but potentially once every four days with a reused processor.
REUSABLE PROCESSOR CHALLENGES
Despite its potential benefits, processor reuse poses both technical and economic obstacles.
Technical challenges
In order to facilitate processor reuse, it will be necessary to support some circuit flexibility on the die of a reusable processor. To ascertain how much circuit area overhead a reusable processor can tolerate, we compared the manufacturing and utilization energy costs for a strategy that uses new processors every three years with one that uses a single processor every three years for a total lifetime of nine years. Subtracting the energy for the latter strategy from that for the former, we then converted this energy differential to an amount of allowable "additional area" on a reusable processor—that is, we assumed this extra circuitry consumes the same
amount of active energy per mm2 as the processor core.
Figure 4 shows the additional circuit area that can support processor reuse while reducing the processor's lifetime energy consumption. This allowable area budget clearly depends on the processor's utilization. Processors used less frequently utilize less power and therefore have a higher allowable area budget.
Figure 4. Chip area available for additional circuitry on three processor reuse chains while still maintaining lifetime energy efficiency.
The top line in Figure 4 represents a chain of three processors with capabilities similar to those of an ARM920T. The higher the processor's utilization, the less processor area that can be used for reuse support. For higher-power chips, such as the Intel XScale series illustrated by the bottom line, the reuse-support area decreases significantly. For reuse chains that involve devices with subsequently smaller computational requirements, the area available for reuse support is quite high due to low utilization energy. This is shown by the middle line, which is a reuse strategy based on an XScale in the first generation, ARM9 in the second generation, and ARM7 processor in the third generation. Overall, the additional area for supporting reuse is quite large: An XScale processor core is about 20 mm2 in 130-nm technology.
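The conversion from an energy differential to an area budget can be sketched as follows; the energy saving, core power, core area, and usage pattern below are hypothetical inputs, not the values behind Figure 4.

def allowable_extra_area_mm2(energy_saved_j, core_power_w, core_area_mm2,
                             hours_per_day, lifetime_years):
    # Assume the extra circuitry has the same active power density as
    # the processor core (W per mm2).
    power_density = core_power_w / core_area_mm2
    active_seconds = hours_per_day * 3600 * 365 * lifetime_years
    # The extra area may consume up to the energy saved by reuse before
    # the reuse strategy loses its lifetime-energy advantage.
    return energy_saved_j / (power_density * active_seconds)

# Hypothetical example: 10 MJ saved, a 0.5-W, 20-mm2 core used three
# hours per day over a nine-year reuse chain.
print(round(allowable_extra_area_mm2(10e6, 0.5, 20.0, 3, 9), 1))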
Economic challenges
A major obstacle to processor reuse is that chipmakers would not profit from this strategy unless they
become actively involved in salvaging and reselling operations. On the other hand, they would suffer financially
only if third parties sold reused chips that competed with
the manufacturer’s new offerings. Conceptually, the easiest solution would be for chipmakers to charge a premium price for reusable processors that owners of the
product containing the chip could recover when returning the product for recycling. Another option would be
to credit the chipmaker when one of its processors
is reused.
Free-market economic incentives, however, might be insufficient. Environmental protection is often within the purview of public policy. European Union directives to reduce hazardous waste in electronic devices, such as the Restriction of Hazardous Substances (RoHS 2002/95/EC),11 have effectively led all major chipmakers to adopt plans such as moving to lead-free solder. More relevant to processor reuse, the Kyoto Protocol to the United Nations Framework Convention on Climate Change establishes a market economy for greenhouse gas emissions that creates an added financial incentive to reduce energy usage and create carbon-neutral products.12
Moore's law has led to a disposable-chip economy
with increasingly severe economic and environmental costs. The energy required to manufacture low-power, embedded processors is so high that
reusing them can save orders of magnitude of lifetime
energy per chip. Processor reuse will require innovative
techniques in reconfigurable computing and hardware-software codesign as well as governmental policies that
encourage silicon reuse, but the potential benefits to society will be well worth the effort. ■
References
1. E.D. Williams, R.U. Ayres, and M. Heller, “The 1.7 Kilogram
Microchip: Energy and Material Use in the Production of
Semiconductor Devices,” Environmental Science and Technology, vol. 36, no. 24, 2002, pp. 5504-5510.
2. R. Kuehr and E. Williams, eds., Computers and the Environment: Understanding and Managing Their Impacts, Kluwer
Academic Publishers, 2003.
3. E.D. Williams, “Environmental Impacts of Microchip Manufacture,” Thin Solid Films, vol. 461, no. 1, 2004, pp. 2-6.
4. “ISMI Study Finds Significant Cost Savings Potential in Fab
Energy Reduction,” Sematech news release, 22 Dec. 2005;
www.sematech.org/corporate/news/releases/20051222a.htm.
5. H.B. Bakoglu, Circuits, Interconnections, and Packaging for
VLSI, Addison-Wesley, 1990.
6. International Technology Roadmap for Semiconductors: 2005
Edition—System Drivers, ITRS, 2005; www.itrs.net/Links/
2005ITRS/SysDrivers2005.pdf.
7. J. Courtney, N. Aldahhan, and M. Engloff, "The Probe-Centric Future of Test," presentation, 2006 Southwest Test
Workshop; www.swtest.org/swtw_library/2006proc/PDF/
S06_01_IMSI-SEMATECH.pdf.
8. L.T. Clark et al., "Standby Power Management for a 0.18 μm
Microprocessor,” Proc. 2002 Int’l Symp. Low Power Electronics and Design, ACM Press, 2002, pp. 7-12.
9. J. Keeble, “From Hackers to Knackers,” The Guardian, supplement online, 21 May 1998; http://online.guardian.co.uk.
10. M. Pecht and S. Tiku, “Bogus: Electronic Manufacturing and
Consumers Confront a Rising Tide of Counterfeit Electronics,” IEEE Spectrum, vol. 43, no. 5, 2006, pp. 37-46.
11. “Directive 2002/95/EC of the European Parliament and of the
Council of 27 January 2003 on the Restriction of the Use of
Certain Hazardous Substances in Electrical and Electronic
Equipment,” Official J. European Union, vol. 37, 2003, pp.
19-23; www.interwritelearning.com/rohs_compliance.pdf.
12. “Kyoto Protocol to the United Nations Framework Convention on Climate Change,” United Nations, 1998; http://unfccc.
int/resource/docs/convkp/kpeng.pdf.
______________________
John Y. Oliver is an assistant professor in the Department
of Computer Engineering at California Polytechnic State
University, San Luis Obispo. His research interests include
computer architecture, reliability in computing, and sustainable computing. Oliver received a PhD in computer
engineering from the University of California, Davis. He is
a member of the IEEE and the ACM. Contact him at jyoliver@calpoly.edu.
Rajeevan Amirtharajah is an assistant professor in the
Department of Electrical and Computer Engineering at the
University of California, Davis. His research interests
include digital and mixed-signal circuit design, low-power
signal processing, and system architectures. Amirtharajah
received a PhD in electrical engineering and computer science from the Massachusetts Institute of Technology. He is
a member of the IEEE and the American Association for
the Advancement of Science. Contact him at amirtharajah@ece.ucdavis.edu.
Venkatesh Akella is a professor in the Department of Electrical and Computer Engineering at the University of California, Davis. His current research interests include FPGAs,
computer architectures, and embedded systems with an
emphasis on low power and reconfigurability. Akella
received a PhD in computer science from the University of
Utah. He is a member of the ACM. Contact him at akella@ucdavis.edu.
Roland Geyer is an assistant professor in the Donald Bren
School of Environmental Science and Management at the
University of California, Santa Barbara. His research
focuses on the life cycle of manufactured goods, the environmental and economic potential of reuse and recycling
activities, and the evolution of green business plans. Geyer
received a PhD in engineering from the University of Surrey, Guildford, UK. Contact him at geyer@bren.ucsb.edu.
Frederic T. Chong is a professor in the Department of Computer Science as well as director of the Computer Engineering Program at the University of California, Santa
Barbara. His research interests include next-generation
embedded architectures, quantum computing architectures,
and hardware support for system security. Chong received
a PhD in electrical engineering and computer science from
the Massachusetts Institute of Technology. He is a member
of the ACM. Contact him at chong@cs.ucsb.edu.
How to Reach
Computer
Writers
We welcome submissions. For detailed information,
visit www.computer.org/computer/author.htm.
News Ideas
Contact Lee Garber at lgarber@computer.org with
ideas for news features or news briefs.
Products and Books
Send product announcements to developertools@computer.org. Contact computer-ma@computer.org
with book announcements.
Letters to the Editor
Please provide an e-mail address with your letter.
Send letters to computer@computer.org.
On the Web
Explore www.computer.org/computer/ for free
articles and general information about Computer
magazine.
Magazine Change of Address
Send change-of-address requests for magazine
subscriptions to address.change@ieee.org.
Make
sure to specify Computer.
Missing or Damaged Copies
If you are missing an issue or received a damaged
copy, contact membership@computer.org.
Reprint Permission
To obtain permission to reprint an article, contact
William Hagen, IEEE Copyrights and Trademarks
Manager, at whagen@ieee.org.
To buy a reprint, send a query to computer@computer.org.
REPORT TO MEMBERS
Land Voted 2008 Computer
Society President-Elect
New vice presidents and Board of Governors
members also chosen
IEEE Computer Society members recently selected Susan (Kathy) Land, CSDP, of Northrop Grumman Information Technology, to serve as the Society's president-elect for 2008.
Land is currently the IEEE Computer Society's second vice president for standards activities. She is chair of the IEEE Computer Society Software Engineering Portfolio oversight committee, a member of the IEEE Computer Society International Design Competition Committee, and chair of the Computer Society Technical Achievement Award subcommittee. Land is the author of Jumpstart CMM/CMMI Software Process Improvement: Using IEEE Software Engineering Standards (John Wiley & Sons, 2005). She is coauthor of Practical Support for CMMI-SW Software Project Documentation: Using IEEE Software Engineering Standards (John Wiley & Sons, 2005) and Practical Support for ISO 9001 Software Project Documentation: Using IEEE Software Engineering Standards (John Wiley & Sons, 2006).
2008 IEEE Computer Society President-Elect Susan (Kathy) Land, CSDP, will introduce efforts to ensure that IEEE Computer Society products remain relevant to the marketplace.
Candidates elected to the Computer Society presidency serve a three-year term in a leadership role. After serving a year as president-elect under 2008 president Rangachar Kasturi, Land will assume the duties of Society president in 2009. Following her term as president, Land will continue to be an active Society leader in 2010 as past president.
2008 President Rangachar Kasturi is working to build a stronger and more agile organization.
NEW VICE PRESIDENTS ELECTED
George Cybenko was elected 2008 first vice president, while Michel Israel topped the balloting for 2008 second vice president. Each will serve as chair of one of the several Computer Society boards. The sitting president also appoints vice presidents to complement the two elected VPs as leaders of individual Society activities boards: the Publications Board, the Educational Activities Board, the Conferences and Tutorials Board, the Standards Activities Board, the Technical Activities Board, the Chapter Activities Board, and the Student Activities Board.
All appointed Society vice presidents also serve as nonvoting members of the Board of Governors. Holding voting positions on the Board are the president, past president, president-elect, and the first and second vice presidents. Additional nonvoting members are the Society's staff executive director, the editor in chief of Computer, and the IEEE directors for divisions V and VIII—the Computer Society's elected representatives on the IEEE Board of Governors.
BOARD OF GOVERNORS ADDS SEVEN NEW MEMBERS
In the 2007 Society election, which closed in early October, voters also cast ballots to fill seven openings on the IEEE Computer Society Board of Governors. The full Board consists of 21 members. Each year, seven new or returning members are elected to serve three-year terms. Members chosen for 2008-2010 terms are André Ivanov, Phillip Laplante, Itaru Mimura, Jon Rokne, Christina Schober, Ann Sobel, and Jeffrey Voas. Many of the successful candidates have had recent Board of Governors experience.
Elected officers volunteer their time and talents to further the Society's goals and to elevate the profile of the computing profession in general. Society officers take a lead role in promoting new publications, educational efforts, technical focus groups, and international standards that help Computer Society members attain career goals.
The Computer Society mailed 73,571 ballots to members in the 2007 election. Of the 7,106 ballots cast—a return rate of 9.66 percent—4,264 were submitted via the Web, 2,824 were mailed in, and 18 were cast by fax. Table 1 shows the breakdown of votes cast for each officer. The full ballot for the 2007 election also included the candidates listed in Table 2.
Table 1. These new officers will begin serving the IEEE Computer Society on 1 January 2008.
Office | Officer | Number of votes | Percent
2008 president-elect/2009 president | Susan (Kathy) Land, CSDP | 3,808 | 53.59*
2008 first vice president | George Cybenko | 3,721 | 52.36*
2008 second vice president | Michel Israel | 3,723 | 52.39*
2008-2010 terms on the Board of Governors | André Ivanov | 3,379 | 6.79
 | Phillip Laplante | 3,511 | 7.06
 | Itaru Mimura | 3,116 | 6.26
 | Jon Rokne | 3,243 | 6.52
 | Christina Schober | 4,275 | 8.59
 | Ann Sobel | 5,143 | 10.34
 | Jeffrey Voas | 3,605 | 7.25

Table 2. The full ballot for the 2007 election also included the following candidates.
Office | Officer | Number of votes | Percent
2007 president-elect/2008 president | James Isaak | 3,163 | 44.51*
2007 first vice president | Sorel Reisman | 3,176 | 44.69*
2007 second vice president | Antonio Doria | 3,089 | 43.47*
2008-2010 terms on the Board of Governors | Alfredo Benso | 3,023 | 6.08
 | Fernando Bouche | 2,533 | 5.09
 | Joseph Bumblis | 2,924 | 5.88
 | Hai Jin | 2,913 | 5.86
 | Gerard Medioni | 2,825 | 5.68
 | Raghavan Muralidharan | 2,854 | 5.06
*Percentage reflects only ballots cast for this office.
LEADERS SERVE MEMBERS
Each year, Society members vote for the
next year’s president-elect, first and second vice presidents, and seven members
of the IEEE Computer Society Board of
Governors. The Society president and
vice presidents each serve a one-year
active term, while the 21 Board of
Governors members serve three-year
terms, rotating in three groups of seven.
The three presidents—incoming, active,
and outgoing—work together in setting
policy and making operational decisions.
The active Society president is responsible
for heading the annual Board of Governors
meetings and for addressing major issues
that affect the Computer Society during
the year.
NOMINATE A CANDIDATE
Any Computer Society member can
nominate candidates for Society offices.
Most members are also eligible to run for
a seat on the Board of Governors. Candidates for other offices must be full members of the IEEE and must have been
members of the Computer Society for at
least the preceding three years.
See www.computer.org/election for
more details on the 2007 IEEE Computer
Society elections. ■
John Vig Named IEEE President-Elect for 2008
IEEE members recently selected John Vig as their president-elect for 2008. Vig is an IEEE Fellow and the recipient of
the IEEE’s W.G. Cady Award and C.B. Sawyer Memorial
Award, which recognize outstanding contributions in frequency control.
Vig will serve one year as IEEE president-elect, participating in Board of Directors activities. He will then assume the
role of president in the following year. After his term in 2009,
Vig will serve as past president in 2010.
In the same election, IEEE members chose 2003
Computer Society president Stephen Diamond as division
VIII director-elect for 2008. Diamond, a managing director
at Picosoft, served as a member of the IEEE Board of
Directors in 2005 and 2006. He currently serves as the
chair of the IEEE Marketing and
Sales Committee.
Division directors represent IEEE
societies on the IEEE Board of
Directors and Technical Activities
Board. Division directors V and VIII
are elected to represent the
Computer Society membership.
Diamond will act as director-elect
in 2008 and as division director for
2009-2010. The division directors
also serve as ex officio members of
the Computer Society’s Board of
Governors and Executive Committee.
John Vig is a technical
advisor for the US
Defense Advanced
Research Projects
Agency.
COMPUTER SOCIETY CONNECTION
Computer Recognizes
Expert Reviewers
Organized activities require
teamwork to succeed. Since
becoming Computer’s editor
in chief in January 2007, I
have witnessed the great
synergy between the professional editorial staff, volunteer editors, and an
impressive group of reviewers whose
collective wisdom and effort make
Computer a great publication.
First, I extend my heartfelt thanks to all the reviewers
who gave their time to review submitted manuscripts,
offering comments on organization and clarity, questions of accuracy, disputed definitions, and the effectiveness of visual aids, figures, or other ancillary
materials. By relying on such dedicated professionals,
we ensure the high quality of the peer-review process
that serves as a cornerstone of any first-rate professional
association publication. I encourage reviewers to continue to support the magazine and make themselves
available to review for Computer in the years to come.
Due to the magnitude of the workload, the editors at
Computer work in a highly structured manner. Reviewers
for Computer mostly work under the direction of
Associate Editors in Chief Kathleen
Swigger and Bill Schilit. Kathy and
Bill, as volunteer leaders in the
Computer Society, have contributed
extensively to Computer throughout
the years.
I offer my gratitude to the editors
who solicit or contribute Computer’s
columns and departments. There will
continue to be transitions as veteran
editors move on and new faces take charge as I begin
implementing my editorial vision and plan. Finally,
thanks to the area editors and advisory panel members
who have offered me support and advice through the
past year.
Current reviewers, please visit http://cs-ieee.manuscriptcentral.com to update the profile of your
areas of expertise, if applicable. Also, please ask your
colleagues to join the team by registering themselves and
their areas of expertise at the same Web site.
—Carl Chang, Editor in Chief
A list of Computer’s expert reviewers is available on the
Web at www.computer.org/reviewers2007.
Task Force Becomes Technical Committee on Autonomous
and Autonomic Systems
The IEEE Computer Society Technical Activities Board
recently voted to raise the status of the growing Task
Force on Autonomous and Autonomic Systems to that
of a formal Technical Committee. Roy Sterritt, of the
University of Ulster in Northern Ireland, serves as chair
of the TC-AAS, while Mike Hinchey, of Loyola College
in Maryland and the NASA Goddard Space Flight
Center, serves as its vice chair.
The new technical committee will continue the task
force’s work promoting interest in self-managing,
-governing, and -organizing systems, including autonomic networking and communications; autonomous,
self-organizing and ubiquitous systems; and autonomic, grid, organic, and pervasive computing.
The TC-AAS publishes an electronic newsletter and
a Letters series, which includes written versions of
keynote speeches given at several relevant conferences, symposia, and workshops. These letters are
archived in the proceedings of the International
Workshop on Engineering of Autonomic Systems
(EASe) in the IEEE Computer Society Digital Library
(www.computer.org/csdl).
Members of the TC-AAS have organized several
events, including EASe in April 2008 (www.ulster.ac.uk/ease) and the IEEE International Conference on
Self-Adaptive and Self-Organizing Systems, which will
take place in Venice, Italy, from 20-24 October 2008
(www.saso-conference.org). Several workshops and
other events will take place in conjunction with the
Fifth IEEE International Conference on Autonomic
Computing, which convenes in Chicago 2-6 June
2008.
Membership in the TC-AAS is open to Computer
Society members and nonmembers alike. To sign up,
learn about upcoming activities, or view archived
newsletters, visit http://tab.computer.org/aas.
Computer Society Launches
New Certification Program
The IEEE Computer Society will debut the Certified
Software Development Associate program in April
2008. A volunteer panel of expert software engineers, developers, and educators created the new certification in response to industry leaders’ requests for
a way to assess the skill and knowledge of individuals
who are just embarking on their careers as software
professionals.
The CSDA certification encompasses the entire field
of software development and validates knowledge of the
foundations of computer science, computer engineering, and mathematics. The exam covers core software
engineering principles, including software construction,
software design, software testing, software requirements, and software methods. CSDA certification standards are based on The Guide to the Software Engineering Body of Knowledge (SWEBOK) and Software
Engineering 2004: Curriculum Guidelines for Undergraduate Degree Programs in Software Engineering
(SE2004).
CSDA BETA TEST
The IEEE Computer Society will administer a beta test
for the CSDA exam from 7 December 2007 to 18
January 2008. Its purpose is to validate the exam questions and provide statistical data. The CSDA beta exam
is open to recent graduates and students in their final
year of a baccalaureate or equivalent degree program.
The exam will be given at Prometric Testing Centers at
locations around the world.
Applications to take the test are due by 7 December.
Fees for the beta exam total $110, rising to $250 when
testing opens to the public in April. Candidates who pass
the beta exam will be awarded the CSDA certificate and
mailed the results in early March.
To learn more about CSDA certification and other
IEEE Computer Society professional development programs, visit www.computer.org/csda. ■
Questions on the Certified Software Development
Associate exam address the following 15 areas of
expertise:
I. Software requirements
II. Software design
III. Software construction
IV. Software testing
V. Software maintenance
VI. Software configuration management
VII. Software engineering management
VIII. Software engineering process
IX. Software engineering methods
X. Software quality
XI. Software engineering professional practice
XII. Software engineering economics
XIII. Computing foundations
XIV. Mathematical foundations
XV. Engineering foundations
Editor: Bob Ward, Computer; bnward@computer.org
REACH HIGHER
Advancing in the IEEE Computer Society can elevate
your standing in the profession.
Application to Senior-grade membership recognizes
✔ ten years or more of professional expertise
Nomination to Fellow-grade membership recognizes
✔ exemplary accomplishments in computer engineering
GIVE YOUR CAREER A BOOST
■
UPGRADE YOUR MEMBERSHIP
www.computer.org/join/grades.htm
CALL AND CALENDAR
CALLS FOR ARTICLES FOR IEEE CS PUBLICATIONS

CALLS FOR PAPERS

LICS 2008, IEEE Logic in Computer Science Symp., 24-27 June, Pittsburgh; Submissions due 7 Jan. 2008; www2.informatik.hu-berlin.de/lics/lics08/cfp08-1.pdf

SCC 2008, IEEE Int'l Conf. on Services Computing, 8-11 July, Hawai'i; Submissions due 28 Jan. 2008; http://conferences.computer.org/scc/2008/cfp.html

ICWS 2008, IEEE Conf. on Web Services, 23-26 Sept., Beijing; Submissions due 7 Apr. 2008; http://conferences.computer.org/icws/2008/call-for-papers.html

CALENDAR

JANUARY 2008

7-10 Jan: HICSS 2008, Hawai'i Int'l Conf. on System Sciences, Waikoloa, Hawai'i; www.hicss.hawaii.edu/hicss_41/apahome41.html

FEBRUARY 2008

18-21 Feb: WICSA 2008, Working IEEE/IFIP Conf. on Software Architecture, Vancouver, Canada; www.wicsa.net

MARCH 2008

3-7 Mar: SimuTools 2008, 1st Int'l Conf. on Simulation Tools and Techniques for Communications, Networks, and Systems, Vancouver, Canada; www.simutools.org

25-28 Mar: AINA 2008, 22nd IEEE Int'l Conf. on Advanced Information Networking and Applications, Okinawa, Japan; www.aina-conference.org/2008

25-28 Mar: SOCNE 2008, 3rd IEEE Workshop on Service-Oriented Architectures in Converging Networked Environments (with AINA), Okinawa, Japan; www.c-lab.de/RLS/SOCNE08

APRIL 2008

7-12 Apr: ICDE 2008, 24th IEEE Int'l Conf. on Data Engineering, Cancun, Mexico; www.icde2008.org

8-12 Apr: MCN 2008, 2nd IEEE Workshop on Mission-Critical Networking (with InfoCom), Phoenix; www.criticalnet.org

14-15 Apr: RAW 2008, 15th Reconfigurable Architectures Workshop (with IPDPS), Miami; www.ece.lsu.edu/vaidy/raw

14-18 Apr: IPDPS 2008, 22nd IEEE Int'l Parallel and Distributed Processing Symp., Miami; www.ipdps.org

15-17 Apr: InfoCom 2008, 27th IEEE Conf. on Computer Communications, Phoenix; www.ieee-infocom.org

18 Apr: Hot-P2P 2008, 5th Int'l Workshop on Hot Topics in Peer-to-Peer Systems (with IPDPS), Miami; www.disi.unige.it/hotp2p/2008/index.php

18 Apr: PCGrid 2008, 2nd Workshop on Desktop Grids and Volunteer Computing Systems (with IPDPS), Miami; http://pcgrid.lri.fr

18 Apr: SSN 2008, 4th Int'l Workshop on Security in Systems and Networks (with IPDPS), Miami; www.cse.buffalo.edu/~fwu2/ssn08
Submission Instructions
The Call and Calendar section lists conferences,
symposia, and workshops that the IEEE Computer
Society sponsors or cooperates in presenting.
Visit www.computer.org/conferences for instructions
on how to submit conference or call listings as well as a
more complete listing of upcoming computer-related
conferences.
LICS 2008
The 23rd IEEE Logic in Computer Science symposium is an annual international forum on theoretical
and practical topics in computer science that relate
to logic.
Organizers have invited submissions on topics that
include automata theory, categorical models and logics, concurrency, distributed computation, logical
frameworks, and constraint programming.
The symposium is sponsored by the IEEE Computer
Society Technical Committee on Mathematical
Foundations of Computing, in cooperation with the
Association for Symbolic Logic and the European
Association for Theoretical Computer Science.
LICS will take place 24-27 June 2008 in Pittsburgh.
Abstracts are due by 7 January 2008. Visit www2.informatik.hu-berlin.de/lics/lics08 for more details on LICS 2008, including a complete call for papers.
MAY 2008

4-8 May: VLSI 2008, 28th IEEE VLSI Test Symp., San Diego; www.tttc-vts.org

5-7 May: ISORC 2008, 11th IEEE Int'l Symp. on Object/Component/Service-Oriented Real-Time Distributed Computing, Orlando, Florida; http://ise.gmu.edu/isorc08

7-9 May: EDCC 2008, 7th European Dependable Computing Conf., Kaunas, Lithuania; http://edcc.dependability.org

18-22 May: CCGrid 2008, 8th IEEE Int'l Symp. on Cluster Computing and the Grid, Lyon, France; http://ccgrid2008.ens-lyon.fr

22-24 May: ISMVL 2008, 38th Int'l Symp. on Multiple-Valued Logic, Dallas; http://engr.smu.edu/ismvl08

24 May: ULSI 2008, 17th Int'l Workshop on Post-Binary ULSI Systems (with ISMVL), Dallas; http://engr.smu.edu/ismvl08

JUNE 2008

23-25 June: CSF 2008, 21st IEEE Computer Security Foundations Symp. (with LICS), Pittsburgh; www.cylab.cmu.edu/CSF2008

23-25 June: WETICE 2008, 17th IEEE Int'l Workshop on Enabling Technologies: Infrastructures for Collaborative Enterprises, Rome; www.sel.uniroma2.it/wetice08/venue.htm

23-26 June: ICITA 2008, 5th Int'l Conf. on Information Technology and Applications, Cairns, Australia; www.icita.org

24-27 June: LICS 2008, IEEE Symp. on Logic in Computer Science, Pittsburgh; www2.informatik.hu-berlin.de/lics/lics08

JULY 2008

7-11 July: Services 2008, IEEE Congress on Services, Hawai'i; http://conferences.computer.org/services/2008

8-11 July: CIT 2008, IEEE Int'l Conf. on Computer and Information Technology, Sydney, Australia; http://attend.it.uts.edu.au/cit2008

8-11 July: SCC 2008, IEEE Int'l Conf. on Services Computing, Hawai'i; http://conferences.computer.org/scc/2008

Events in 2007-2008

JANUARY
7-10: HICSS 2008

FEBRUARY
18-21: WICSA 2008

MARCH
3-7: SimuTools 2008
25-28: AINA 2008
25-28: SOCNE 2008
Call for Articles for Computer
Computer seeks articles for a July 2008 special issue on high-assurance service-oriented architectures. The guest editors are Jing Dong from the University of Texas at Dallas, Raymond Paul from the US Department of Defense, and Liang-Jie Zhang from IBM’s T.J. Watson Research Center.
Recent advances in services computing technology make it possible to register, request, discover, and supply software services online. Such loosely coupled software services form a service-oriented architecture with the support of network resources. Service-oriented architectures have been applied in many mission-critical environments, including medical, traffic control, and defense systems. These systems are required to be highly reliable, secure, available, timely, fault-tolerant, and dependable. Recently, architects have come to face new challenges in developing service-oriented systems with high-assurance requirements.
Computer invites papers that describe techniques, tools, or experiences related to the design, development, or assessment of practical high-assurance systems. Editors are particularly interested in submissions that address applications of service-oriented techniques. Examples of suitable topics include high-assurance service compositions; service specifications for security, reliability, dependability, availability, and QoS properties; service discoveries with high-assurance system requirements; and service security, trust, and privacy.
The deadline for papers is 14 December. Detailed author instructions are available at www.computer.org/portal/pages/computer/mc/author.html. Send inquiries to the guest editors at jdong@utdallas.edu, raymond.paul@osd.mil, or zhanglj@us.ibm.com.
2008 MEMBERSHIP APPLICATION
IEEE Computer Society

FIND THE RIGHT SOLUTION!
Solve problems, learn new skills, and grow your career with the cutting-edge resources of the IEEE Computer Society.
www.computer.org/join
2008 RATES for IEEE COMPUTER SOCIETY
Membership Dues and Subscriptions
Membership and periodical subscriptions are annualized to and expire on 31 December 2008.
Pay full or half-year rate depending upon the date of receipt by the IEEE Computer Society as noted below.
Membership Options*
All prices are quoted in U.S. dollars. FULL YEAR rates apply to applications received 17 Aug 07 – 29 Feb 08; HALF YEAR rates apply to applications received 1 Mar 08 – 15 Aug 08.

I do not belong to the IEEE and I want to join only the Computer Society:
  Full year $113.00 / Half year $57.00

I want to join both the Computer Society and the IEEE:
  I reside in the USA: $220.00 / $110.00
  I reside in Canada: $195.00 / $98.00
  I reside in Africa/Europe/Middle East: $187.00 / $94.00
  I reside in Latin America: $180.00 / $90.00
  I reside in Asia/Pacific: $181.00 / $91.00

I already belong to the IEEE, and I want to join the Computer Society:
  Full year $50.00 / Half year $25.00

Are you now or were you ever a member of the IEEE?  Yes / No
If yes, please provide member # if known: ______________________

Payment Information
Payment required with application.
  Membership fee           $
  Periodicals total        $
  Applicable sales tax***  $
  TOTAL                    $
Add Periodicals**
Rates are for print + online (online only where marked †). FULL YEAR rates apply to applications received 16 Aug 07 – 29 Feb 08; HALF YEAR rates apply to applications received 1 Mar 08 – 15 Aug 08.

BEST VALUE!
IEEE Computer Society Digital Library (online only): $121 full year / $61 half year

ARTIFICIAL INTELLIGENCE
IEEE Intelligent Systems: 6 issues per year; $43 / $22
IEEE Transactions on Learning Technologies†: 4 issues per year; $39 / $18
IEEE Transactions on Pattern Analysis and Machine Intelligence: 12 issues per year; $52 / $26

BIOTECHNOLOGY
IEEE/ACM Transactions on Computational Biology and Bioinformatics: 4 issues per year; $36 / $18

COMPUTATION
Computing in Science & Engineering: 6 issues per year; $45 / $23

COMPUTER HARDWARE
IEEE Computer Architecture Letters: 4 issues per year; $29 / $15
IEEE Micro: 6 issues per year; $41 / $21
IEEE Design & Test of Computers: 6 issues per year; $40 / $20
IEEE Transactions on Computers: 12 issues per year; $47 / $24

GRAPHICS & MULTIMEDIA
IEEE Computer Graphics and Applications: 6 issues per year; $43 / $22
IEEE MultiMedia: 4 issues per year; $38 / $19
IEEE Transactions on Haptics†: 2 issues per year; $31 / $16
IEEE Transactions on Visualization and Computer Graphics: 6 issues per year; $43 / $22

HISTORY OF COMPUTING
IEEE Annals of the History of Computing: 4 issues per year; $34 / $17

INTERNET & DATA TECHNOLOGIES
IEEE Internet Computing: 6 issues per year; $43 / $22
IEEE Transactions on Knowledge and Data Engineering: 12 issues per year; $49 / $25
IEEE Transactions on Services Computing†: 4 issues per year; $39 / $18

IT & SECURITY
IT Professional: 6 issues per year; $42 / $21
IEEE Security & Privacy: 6 issues per year; $24 / $12
IEEE Transactions on Dependable and Secure Computing: 4 issues per year; $33 / $17

MOBILE COMPUTING
IEEE Pervasive Computing: 4 issues per year; $43 / $22
IEEE Transactions on Mobile Computing: 12 issues per year; $43 / $22

NETWORKING
IEEE Transactions on Parallel and Distributed Systems: 12 issues per year; $47 / $24

SOFTWARE
IEEE Software: 6 issues per year; $49 / $25
IEEE Transactions on Software Engineering: 12 issues per year; $38 / $19

Enclosed: Check/Money Order****
Charge my: MasterCard / VISA / American Express / Diner’s Club
Card Number
Exp Date (month/year)
Signature
USA only: include 5-digit billing zip code

Allow up to 8 weeks for application processing. Allow a minimum of 6 to 10 weeks for delivery of print periodicals.
Please complete both sides of this form.
For fastest service, apply online at www.computer.org/join
* Member dues include $25 for a 12-month subscription to Computer.
** Periodicals purchased at member prices are for the member’s
personal use only.
*** Canadian residents add 14% HST or 6% GST to total. AL,
AZ, CO, DC, GA, IN, KY, MD, MO, NM, and WV add sales tax to
periodical subscriptions. European Union residents add VAT tax to
IEEE Computer Society Digital Library subscription.
**** Payable to the IEEE in U.S. dollars drawn on a U.S. bank
account. Please include member name and number (if known)
on your check.
† Online issues only.
Personal Information

Enter your name as you want it to appear on correspondence. As a key identifier in our database, circle your last/surname.
Title
First name
Middle
Last/Surname
Male / Female
Date of birth (Day/Month/Year)
Home address
City
State/Province
Postal code
Country
Home telephone
Home facsimile
Preferred e-mail
Send mail to:  Home address / Business address

Educational Information

First professional degree completed
Month/Year degree received
Program major/course of study
College/University
State/Province
Country

Highest technical degree received
Month/Year received
Program/Course of study
College/University
State/Province
Country

Business/Professional Information

Title/Position
Years in current position
Years of practice since graduation
Employer name
Department/Division
Street address
City
State/Province
Postal code
Country
Office phone
Office facsimile

I hereby make application for Computer Society and/or IEEE membership and agree to be governed by IEEE’s Constitution, Bylaws, Statements of Policies and Procedures, and Code of Ethics. I authorize release of information related to this application to determine my qualifications for membership.

Signature
Date
IF8L

NOTE: In order for us to process your application, you must complete and return BOTH sides of this form to the office nearest you:

Asia/Pacific Office
IEEE Computer Society
Watanabe Bldg.
1-4-2 Minami-Aoyama
Minato-ku, Tokyo 107-0062 Japan
Phone: +81 3 3408 3118
Fax: +81 3 3408 3553
E-mail: tokyo.ofc@computer.org

Publications Office
IEEE Computer Society
10662 Los Vaqueros Circle
P.O. Box 3014
Los Alamitos, CA 90720-1314 USA
Phone: +1 800 272 6657 (USA and Canada)
Phone: +1 714 821 8380 (worldwide)
Fax: +1 714 821 4641
E-mail: help@computer.org

BPA Information

This information is used by society magazines to verify their annual circulation. Please refer to the audit codes and indicate your selections in the box provided.

A. Primary line of business
1. Computers
2. Computer peripheral equipment
3. Software
4. Office and business machines
5. Test, measurement, and instrumentation equipment
6. Communications systems and equipment
7. Navigation and guidance systems and equipment
8. Consumer electronics/appliances
9. Industrial equipment, controls, and systems
10. ICs and microprocessors
11. Semiconductors, components, sub-assemblies, materials, and supplies
12. Aircraft, missiles, space, and ground support equipment
13. Oceanography and support equipment
14. Medical electronic equipment
15. OEM incorporating electronics in their end product (not elsewhere classified)
16. Independent and university research, test and design laboratories, and consultants (not connected with a manufacturing company)
17. Government agencies and armed forces
18. Companies using and/or incorporating any electronic products in their manufacturing, processing, research, or development activities
19. Telecommunications services, and telephone (including cellular)
20. Broadcast services (TV, cable, radio)
21. Transportation services (airlines, railroads, etc.)
22. Computer and communications and data processing services
23. Power production, generation, transmission, and distribution
24. Other commercial users of electrical, electronic equipment, and services (not elsewhere classified)
25. Distributor (reseller, wholesaler, retailer)
26. University, college/other education institutions, libraries
27. Retired
28. Others (allied to this field) _______________________________

B. Principal job function
1. General and corporate management
2. Engineering management
3. Project engineering management
4. Research and development management
5. Design engineering management — analog
6. Design engineering management — digital
7. Research and development engineering
8. Design/development engineering — analog
9. Design/development engineering — digital
10. Hardware engineering
11. Software design/development
12. Computer science
13. Science/physics/mathematics
14. Engineering (not elsewhere classified)
15. Marketing/sales/purchasing
16. Consulting
17. Education/teaching
18. Retired
19. Other _______________________________________________

C. Principal responsibility
1. Engineering or scientific management
2. Management other than engineering
3. Engineering design
4. Engineering
5. Software: science/management/engineering
6. Education/teaching
7. Consulting
8. Retired
9. Other _______________________________________________

D. Title
1. Chairman of the Board/President/CEO
2. Owner/Partner
3. General Manager
4. V.P. Operations
5. V.P. Engineering/Director Engineering
6. Chief Engineer/Chief Scientist
7. Engineering Manager
8. Scientific Manager
9. Member of Technical Staff
10. Design Engineering Manager
11. Design Engineer
12. Hardware Engineer
13. Software Engineer
14. Computer Scientist
15. Dean/Professor/Instructor
16. Consultant
17. Retired
18. Other Professional/Technical _____________________________
FEATURED TITLE FROM WILEY AND CS PRESS

Software Engineering: Barry W. Boehm’s Lifetime Contributions to Software Development, Management, and Research
edited by Richard W. Selby
978-0-470-14873-0
June 2007 • 832 pages
Hardcover • $79.95
A Wiley-IEEE CS Press Publication

To Order:
North America: 1-877-762-2974
Rest of the World: +44 (0) 1243 843294

This is the most authoritative archive of Barry Boehm’s contributions to software engineering. Featuring 42 reprinted articles, along with an introduction and chapter summaries to provide context, it serves as a “how-to” reference manual for software engineering best practices. It provides convenient access to Boehm’s landmark work on product development and management processes. The book concludes with an insightful look to the future by Dr. Boehm.

20% off with Promotion Code CSCH7
ANNUAL INDEX
Volume 40, 2007
SUBJECT INDEX
A
3D body scanning
3D Body Scanning and Healthcare Applications, P. Treleaven
and J. Wells, July, pp. 28-34.
3D graphics systems
How GPUs Work [How Things Work], D. Luebke and G.
Humphreys, Feb., pp. 96-100.
3D Internet
Generation 3D: Living in Virtual Worlds [Entertainment
Computing], M. Macedonia, Oct., pp. 99-101.
3D visualization
3D Display Using Passive Optical Scatterers, S.K. Nayar and
V.N. Anand, July, pp. 54-63.
3D Vision: Developing an Embedded Stereo-Vision System
[Embedded Computing], J.I. Woodfill et al.,
May, pp. 106-108.
Immersidata Analysis: Four Case Studies, C.
Shahabi et al., July, pp. 45-52.
Virtual Reality: How Much Immersion Is
Enough?, D.A. Bowman and R.P. McMahan,
July, pp. 36-43.
Agile software
Standards, Agility, and Engineering, F. Coallier,
Sept., pp. 100-102.
Analytical models
The Discipline of Embedded Systems Design, T.A.
Henzinger and J. Sifakis, Oct., pp. 32-40.
AQUA
AQUA: An Amphibious Autonomous Robot, G.
Dudek et al., Jan., pp. 46-53.
Artificial intelligence
Automated Killers and the Computing Profession
[The Profession], N. Sharkey, Nov., pp. 124,
122-123.
Game Smarts [Entertainment Computing], M. van Lent, Apr.,
pp. 99-101.
The Strangest Thing About Software, T. Menzies et al., Jan.,
pp. 54-60.
Automated traceability
Best Practices for Automated Traceability, J. Cleland-Huang
et al., June, pp. 27-35.
Automotive electronics system design
Embedded System Design for Automotive Applications, A.
Sangiovanni-Vincentelli and M. Di Natale, Oct., pp. 42-51.
AUTOSAR
Embedded System Design for Automotive Applications, A.
Sangiovanni-Vincentelli and M. Di Natale, Oct., pp. 42-51.
Avionics systems
The Glass Cockpit [How Things Work], J. Knight, Oct., pp.
92-95.
B
Binary arithmetic
Binary Arithmetic [How Things Work], N. Holmes, June, pp.
90-93.
Biomolecular simulations
Using FPGA Devices to Accelerate Biomolecular Simulations,
S.R. Alam et al., Mar., pp. 66-73.
BioTracking
Swarms and Swarm Intelligence [Software Technologies],
M.G. Hinchey et al., Apr., pp. 111-113.
BlueFS
Consumer Electronics Meets Distributed Storage [Invisible
Computing], D. Peek and J. Flinn, Feb., pp. 93-95.
Bohrbugs
Fighting Bugs: Remove, Retry, Replicate, and Rejuvenate
[Software Technologies], M. Grottke and K.S. Trivedi, Feb.,
pp. 107-109.
Broadening participation in computing
Increasing the Participation of People with Disabilities in Computing Fields [Broadening Participation in Computing], S.E.
Burgstahler and R.E. Ladner, May, pp. 94-97.
Propagating Diversity through Active Dissemination
[Broadening Participation in Computing], K.A. Siek et al.,
Feb., pp. 89-92.
Business intelligence
The Current State of Business Intelligence, H.J. Watson and
B.H. Wixom, Sept., pp. 96-99.
Business service stack
Service Is in the Eyes of the Beholder, T. Margaria, Nov., pp.
33-37.
C
Cache consistency protocol
Data Consistency for Cooperative Caching in Mobile
Environments, J. Cao et al., Apr., pp. 60-66.
Caravela environment
Caravela: A Novel Stream-Based Distributed Computing
Environment, S. Yamagiwa and L. Sousa, May, pp. 70-77.
Cell Broadband Engine
An Open Source Environment for Cell Broadband Engine
System Software, M. Gschwind et al., June, pp. 37-47.
Change-tolerant systems
An Era of Change-Tolerant Systems [Software Technologies],
S. Bohner, June, pp. 100-102.
Chip multiprocessors
Isolation in Commodity Multicore Processors, N. Aggarwal
et al., June, pp. 49-59.
An Open Source Environment for Cell Broadband Engine
System Software, M. Gschwind et al., June, pp. 37-47.
CIRCA system
A Communication Support System for Older People with
Dementia, N. Alm et al., May, pp. 35-41.
Classroom Presenter
Classroom Presenter: Enhancing Interactive Education with
Digital Ink, R. Anderson et al., Sept., pp. 56-61.
Ink, Improvisation, and Interactive Engagement: Learning
with Tablets, J. Roschelle et al., Sept., pp. 42-48.
Click fraud
Click Fraud [Web Technologies], B.J. Jansen, July, pp. 85-86.
New Technology Prevents Click Fraud [News], L.D. Paulson,
Mar., pp. 20-22.
Client-server systems
Empirical Test Observations in Client-Server Systems, L.
Hatton, May, pp. 24-29.
Cognitive assistance
A Communication Support System for Older People with
Dementia, N. Alm et al., May, pp. 35-41.
Collaboration environments
Supporting Resource-Constrained Collaboration Environments [The Profession], S.M. Price, June, pp. 108, 106-107.
Collaborative Web search
A Community-Based Approach to Personalizing Web Search,
B. Smyth, Aug., pp. 42-50.
Collaboratories
Examining the Challenges of Scientific Workflows [Computing
Practices], Y. Gil et al., Dec., pp. 24-32.
Competisoft project
Software Process Improvement: The Competisoft Project,
H. Oktaba et al., Oct., pp. 21-28.
Computational models
The Discipline of Embedded Systems Design, T.A. Henzinger
and J. Sifakis, Oct., pp. 32-40.
Computer architectures
Architectures for Silicon Nanoelectronics and Beyond, R.I.
Bahar et al., Jan., pp. 25-33.
The Embedded Systems Landscape [Guest Editor’s
Introduction], W. Wolf, Oct., pp. 29-31.
Computer performance evaluation
A New Era of Performance Evaluation [Perspectives], S.M.
Pieper et al., Sept., pp. 23-30.
Computers and society
Annie and the Boys [In Our Time], D.A. Grier, Aug., pp. 6-9.
Automated Killers and the Computing Profession [The
Profession], N. Sharkey, Nov., pp. 124, 122-123.
The Best Deal in Town [In Our Time], D.A. Grier, Apr., pp.
8-11.
The Boundaries of Time [In Our Time], D.A. Grier, July, pp.
5-7.
The Camino Real [In Our Time], D.A. Grier, June, pp. 6-8.
The Chimera of Software Quality [The Profession], L. Hatton,
Aug., pp. 104, 102-103.
Computing as an Evolving Discipline: 10 Observations [The
Profession], J. Liu, May, pp. 112, 110-111.
The Computing Profession and Higher Education [The
Profession], N. Holmes, Jan., pp. 116, 114-115.
Consciousness and Computers [The Profession], N. Holmes,
July, pp. 100, 98-99.
Controlling the Conversation, [In Our Time], D.A. Grier,
Sept., pp. 7-9.
Counting Beans [In Our Time], D.A. Grier, Nov., pp. 8-10.
Digital Technology and the Skills Shortage [The Profession],
N. Holmes, Mar., pp. 100, 98-99.
Dirty Electricity [In Our Time], D.A. Grier, Feb., pp. 6-8.
E-Mailing from Armenia [In Our Time], D.A. Grier, Oct., pp.
8-10.
A Force of Nature, [In Our Time], D.A. Grier, Dec., pp. 8-9.
Getting Real in the Classroom [The Profession], M. van
Genuchten and D. Vogel, Oct., pp. 108, 109-110.
Incorporating a Variable-Expertise-Level System in IT Course
Modules [The Profession], N. Harkiolakis, Apr., pp. 116,
114-115.
Making Computers Do More with Less [The Profession], S.
Santini, Dec., pp. 124, 122-123.
Outposts [In Our Time], D.A. Grier, Mar., pp. 8-10.
The Profession as a Culture Killer, [The Profession], N.
Holmes, Sept., pp. 112, 110-111.
Supporting Resource-Constrained Collaboration Environments [The Profession], S.M. Price, June, pp. 108, 106-107.
The Wave of the Future [In Our Time], D.A. Grier, Jan., pp.
12-14.
Working Class Hero [In Our Time], D.A. Grier, May, pp.
8-10.
Computers in education
Classroom Presenter: Enhancing Interactive Education with
Digital Ink, R. Anderson et al., Sept., pp. 56-61.
Computing as an Evolving Discipline: 10 Observations [The
Profession], J. Liu, May, pp. 112, 110-111.
Facilitating Pedagogical Practices through a Large-Scale Tablet
PC Deployment, J.G. Tront, Sept., pp. 62-68.
Getting Real in the Classroom [The Profession], M. van
Genuchten and D. Vogel, Oct., pp. 108, 109-110.
Handwriting Recognition: Tablet PC Text Input, J.A. Pittman,
Sept., pp. 49-54.
Incorporating a Variable-Expertise-Level System in IT Course
Modules [The Profession], N. Harkiolakis, Apr., pp. 116,
114-115.
Ink, Improvisation, and Interactive Engagement: Learning
with Tablets, J. Roschelle et al., Sept., pp. 42-48.
Magic Paper: Sketch-Understanding Research, R. Davis, Sept.,
pp. 34-41.
Tablet PC Technology: The Next Generation [Guest Editors’
Introduction], J. Prey and A. Weaver, Sept., pp. 32-33.
Computing profession
Automated Killers and the Computing Profession [The
Profession], N. Sharkey, Nov., pp. 124, 122-123.
The Chimera of Software Quality [The Profession], L. Hatton,
Aug., pp. 104, 102-103
Computing as an Evolving Discipline: 10 Observations [The
Profession], J. Liu, May, pp. 112, 110-111.
The Computing Profession and Higher Education [The
Profession], N. Holmes, Jan., pp. 116, 114-115.
Consciousness and Computers [The Profession], N. Holmes,
July, pp. 100, 98-99.
Digital Technology and the Skills Shortage [The Profession],
N. Holmes, Mar., pp. 100, 98-99.
Getting Real in the Classroom [The Profession], M. van
Genuchten and D. Vogel, Oct., pp. 108, 109-110.
Incorporating a Variable-Expertise-Level System in IT Course
Modules [The Profession], N. Harkiolakis, Apr., pp. 116,
114-115.
Increasing the Participation of People with Disabilities in
Computing Fields [Broadening Participation in
Computing], S.E. Burgstahler and R.E. Ladner, May, pp.
94-97.
Making Computers Do More with Less [The Profession], S.
Santini, Dec., pp. 124, 122-123.
The Profession as a Culture Killer, [The Profession], N.
Holmes, Sept., pp. 112, 110-111.
Software Development: What Is the Problem? [The
Profession], R.R. Loka, Feb., pp. 112, 110-111.
Supporting Resource-Constrained Collaboration Environments [The Profession], S.M. Price, June, pp. 108, 106-107.
Concurrent versioning system
Teaching Software Evolution in Open Source [Computing
Practices], M. Petrenko et al., Nov., pp. 25-31.
Consumer electronics
Consumer Electronics Meets Distributed Storage [Invisible
Computing], D. Peek and J. Flinn, Feb., pp. 93-95.
The Future Arrives? Finally [Entertainment Computing], M.
Macedonia, Feb., pp. 101-103.
Cryptography
Cryptography on a Speck of Dust, J.-P. Kaps et al., Feb., pp.
38-44.
Resolving the Micropayment Problem [Security], M.
Tripunitara and T. Messerges, Feb., pp. 104-106.
Cybersecurity
The Case for Flexible NIST Security Standards, F. Keblawi
and D. Sullivan, June, pp. 19-26.
D
Database conceptual schemas
Database Conceptual Schema Matching [Software
Technologies], M.A. Casanova et al., Oct., pp.
102-104.
Data management
Data Consistency for Cooperative Caching in
Mobile Environments, J. Cao et al., Apr., pp.
60-66.
A Data Integration Broker for Healthcare Systems,
D. Budgen et al., Apr., pp. 34-41.
Immersidata Analysis: Four Case Studies, C.
Shahabi et al., July, pp. 45-52.
Improving Data Accessibility with File Area
Networks [News], D. Geer, Nov., pp. 14-17.
Measuring Data Management Practice Maturity:
A Community’s Self-Assessment, P. Aiken et al.,
Apr., pp. 42-50.
Privacy-Preserving Data Mining Systems, N.
Zhang and W. Zhao, Apr., pp. 52-58.
Taking a Hard-Line Approach to Encryption [News], C.
Laird, Mar., pp. 13-15.
Data mining
Privacy-Preserving Data Mining Systems, N. Zhang and W.
Zhao, Apr., pp. 52-58.
Process Query Systems, G. Cybenko and V.H. Berk, Jan., pp.
62-70.
The Strangest Thing About Software, T. Menzies et al., Jan.,
pp. 54-60.
Data storage
Discryption: Internal Hard-Disk Encryption for Secure
Storage [Security], L. Hars, June, pp. 103-105.
Supporting Resource-Constrained Collaboration Environments [The Profession], S.M. Price, June, pp. 108, 106-107.
DaVinci technology
Using DaVinci Technology for Digital Video Devices, D. Talla
and J. Golston, Oct., pp. 53-61.
Debugging
Boosting Debugging Support for Complex Systems on Chip,
A. Mayer et al., Apr., pp. 76-81.
Dirty Electricity [In Our Time], D.A. Grier, Feb., pp. 6-8.
Fighting Bugs: Remove, Retry, Replicate, and Rejuvenate
[Software Technologies], M. Grottke and K.S. Trivedi, Feb.,
pp. 107-109.
Denial-of-service attacks
Marking Technique to Isolate Boundary Router and Attacker,
V. Vijairaghavan et al., Feb., pp. 54-58.
Dictionary attacks
Password-Based Authentication: Preventing Dictionary
Attacks, S. Chakrabarti and M. Singhal, June, pp. 68-74.
Digital ink
Classroom Presenter: Enhancing Interactive Education with
Digital Ink, R. Anderson et al., Sept., pp. 56-61.
Digital technology
Consciousness and Computers [The Profession], N. Holmes,
July, pp. 100, 98-99.
The Future Arrives? Finally [Entertainment Computing], M.
Macedonia, Feb., pp. 101-103.
Researchers Develop Efficient Digital-Cameras Design
[News], L.D. Paulson, Mar., pp. 20-22.
Using DaVinci Technology for Digital Video Devices, D. Talla
and J. Golston, Oct., pp. 53-61.
Discryption
Discryption: Internal Hard-Disk Encryption for Secure
Storage [Security], L. Hars, June, pp. 103-105.
Distributed computing
Caravela: A Novel Stream-Based Distributed Computing
Environment, S. Yamagiwa and L. Sousa, May, pp. 70-77.
Consumer Electronics Meets Distributed Storage [Invisible
Computing], D. Peek and J. Flinn, Feb., pp. 93-95.
A Data Integration Broker for Healthcare Systems, D. Budgen
et al., Apr., pp. 34-41.
Trust Management in Distributed Systems, H. Li and M.
Singhal, Feb., pp. 45-53.
E
E-government
New Paradigms for Next-Generation E-Government Projects, S. Friedrichs and S. Jung, Nov., pp. 53-55.
E-mail management
Managing E-Mail Overload: Solutions and Future Challenges, D. Schuff et al., Feb., pp. 31-36.
E-passports
Replacing Lost or Stolen E-Passports [Security], J. Yong and E. Bertino, Oct., pp. 89-91.
Ecosystem integration
Toward Mobile Services: Three Approaches, J. Bosch, Nov., pp. 51-53.
Electronic voting
Electronic Voting [How Things Work], J. Epstein, Aug., pp. 92-95.
Embedded computing
3D Vision: Developing an Embedded Stereo-Vision System [Embedded Computing], J.I. Woodfill et al., May, pp. 106-108.
Boosting Debugging Support for Complex Systems on Chip, A. Mayer et al., Apr., pp. 76-81.
The Discipline of Embedded Systems Design, T.A. Henzinger and J. Sifakis, Oct., pp. 32-40.
Embedded System Design for Automotive Applications, A. Sangiovanni-Vincentelli and M. Di Natale, Oct., pp. 42-51.
The Embedded Systems Landscape [Guest Editor’s Introduction], W. Wolf, Oct., pp. 29-31.
Escher: A New Technology Transitioning Model [Embedded Computing], J. Sztipanovits et al., Mar., pp. 90-92.
The Good News and the Bad News [Embedded Computing], W. Wolf, Nov., pp. 104-105.
It’s Time to Stop Calling Circuits “Hardware” [Embedded Computing], F. Vahid, Sept., pp. 106-108.
SensorMap for Wide-Area Sensor Webs [Embedded Computing], S. Nath et al., July, pp. 90-93.
Software-Defined Radio Prospects for Multistandard Mobile Phones, U. Ramacher, Oct., pp. 62-69.
Using DaVinci Technology for Digital Video Devices, D. Talla and J. Golston, Oct., pp. 53-61.
Energy-efficiency optimizations
The Case for Energy-Proportional Computing, L.A. Barroso and U. Hölzle, Dec., pp. 33-37.
Models and Metrics to Enable Energy-Efficiency Optimizations, S. Rivoire et al., Dec., pp. 39-48.
Enterprise engineering
Enterprise, Systems, and Software Engineering—The Need for Integration [Standards], P. Joannou, May, pp. 103-105.
Enterprise services
Enterprise Security for Web 2.0 [IT Systems Perspectives], M.A. Davidson and E. Yoran, Nov., pp. 117-119.
Getting on Board the Enterprise Service Bus [News], S. Ortiz Jr., Apr., pp. 15-17.
Toward the Realization of Policy-Oriented Enterprise Management, M. Kaiser, Nov., pp. 57-63.
Service Is in the Eyes of the Beholder, T. Margaria, Nov., pp. 33-37.
Service-Oriented Computing: State of the Art and Research Challenges, M.P. Papazoglou et al., Nov., pp. 38-45.
Entertainment computing
Enhancing the User Experience in Mobile Phones [Entertainment Computing], S.R. Subramanya and B.K. Yi, Dec., pp. 114-117.
The Future Arrives? Finally [Entertainment Computing], M. Macedonia, Feb., pp. 101-103.
Game Smarts [Entertainment Computing], M. van Lent, Apr., pp. 99-101.
Games: Once More, with Feeling [Entertainment Computing], M. van Lent and W. Swartout, Aug., pp. 98-100.
Generation 3D: Living in Virtual Worlds [Entertainment Computing], M. Macedonia, Oct., pp. 99-101.
iPhones Target the Tech Elite [Entertainment Computing], M. Macedonia, June, pp. 94-95.
Environmental monitoring
Process Query Systems, G. Cybenko and V.H. Berk, Jan., pp. 62-70.
Escher model
Escher: A New Technology Transitioning Model [Embedded Computing], J. Sztipanovits et al., Mar., pp. 90-92.
F
Fault isolation
Isolation in Commodity Multicore Processors, N. Aggarwal et al., June, pp. 49-59.
FPGAs
Achieving High Performance with FPGA-Based Computing, M.C. Herbordt et al., Mar., pp. 50-57.
High-Performance Reconfigurable Computing [Guest Editors’ Introduction], D. Buell et al., Mar., pp. 23-27.
It’s Time to Stop Calling Circuits “Hardware” [Embedded Computing], F. Vahid, Sept., pp. 106-108.
Sparse Matrix Computations on Reconfigurable Hardware, V.K. Prasanna and G.R. Morris, Mar., pp. 58-64.
Trident: From High-Level Language to Hardware Circuitry, J.L. Tripp et al., Mar., pp. 28-37.
Using FPGA Devices to Accelerate Biomolecular Simulations, S.R. Alam et al., Mar., pp. 66-73.
Flow-model-based computation
Caravela: A Novel Stream-Based Distributed Computing Environment, S. Yamagiwa and L. Sousa, May, pp. 70-77.
Formal methods
Designing for Software’s Social Complexity, J.L. Fiadeiro, Jan., pp. 34-39.
FT-CORBA
Fault-Tolerant CORBA: From Specification to Reality [Standards], P. Narasimhan, Jan., pp. 110-112.
G
Game technology
Game Smarts [Entertainment Computing], M. van Lent, Apr.,
pp. 99-101.
Game-theoretic analysis
Reengineering the Internet for Better Security, M.
Parameswaran et al., Jan., pp. 40-44.
GeForce 3
How GPUs Work [How Things Work], D. Luebke and G.
Humphreys, Feb., pp. 96-100.
GPUs
How GPUs Work [How Things Work], D. Luebke and G.
Humphreys, Feb., pp. 96-100.
Graphical tools
The Inevitable Cycle: Graphical Tools and Programming
Paradigms, J. Soukup and M. Soukup, Aug., pp. 24-30.
Graphics processing units
Caravela: A Novel Stream-Based Distributed Computing
Environment, S. Yamagiwa and L. Sousa, May, pp. 70-77.
Green Computing
The Case for Energy-Proportional Computing, L.A. Barroso
and U. Hölzle, Dec., pp. 33-37.
The Green500 List: Encouraging Sustainable Supercomputing,
W. C. Feng and K. Cameron, Dec., pp. 50-55.
Life Cycle Aware Computing: Reusing Silicon Technology,
J.Y. Oliver et al., Dec., pp. 56-61.
Models and Metrics to Enable Energy-Efficiency
Optimizations, S. Rivoire et al., Dec., pp. 39-48.
Green Destiny
The Green500 List: Encouraging Sustainable Supercomputing,
W. C. Feng and K. Cameron, Dec., pp. 50-55.
Group Scribbles
Ink, Improvisation, and Interactive Engagement: Learning
with Tablets, J. Roschelle et al., Sept., pp. 42-48.
H
Handel-C
Trident: From High-Level Language to Hardware Circuitry,
J.L. Tripp et al., Mar., pp. 28-37.
Handwriting recognition
Handwriting Recognition: Tablet PC Text Input, J.A. Pittman,
Sept., pp. 49-54.
Hard-disk encryption
Discryption: Internal Hard-Disk Encryption for Secure
Storage [Security], L. Hars, June, pp. 103-105.
Heads-up display
Holistic Sensing and Active Displays for Intelligent Driver
Support Systems, M.M. Trivedi and S.Y. Cheng, May, pp.
60-68.
Healthcare technology
3D Body Scanning and Healthcare Applications, P. Treleaven
and J. Wells, July, pp. 28-34.
A Data Integration Broker for Healthcare Systems, D. Budgen
et al., Apr., pp. 34-41.
Immersidata Analysis: Four Case Studies, C. Shahabi et al.,
July, pp. 45-52.
How Things Work
Binary Arithmetic [How Things Work], N. Holmes, June, pp.
90-93.
Electronic Voting [How Things Work], J. Epstein, Aug., pp.
92-95.
The Glass Cockpit [How Things Work], J. Knight, Oct., pp.
92-95.
How GPUs Work [How Things Work], D. Luebke and G.
Humphreys, Feb., pp. 96-100.
SMS: The Short Message Service [How Things Work], J.
Brown et al., Dec., pp. 106-110.
Wi-Fi—The Nimble Musician in Your Laptop [How Things
Work], D.G. Leeper, Apr., pp. 108-110.
Human activity language
A Language for Human Action, G. Guerra-Filho and Y.
Aloimonos, May, pp. 42-51.
Human-centered computing
A Communication Support System for Older People with
Dementia, N. Alm et al., May, pp. 35-41.
Holistic Sensing and Active Displays for Intelligent Driver
Support Systems, M.M. Trivedi and S.Y. Cheng, May, pp.
60-68.
Human-Centered Computing: Toward a Human Revolution
[Guest Editors’ Introduction], A. Jaimes et al., May, pp. 30-34.
An Interactive Multimedia Diary for the Home, G.C. de Silva
et al., May, pp. 52-59.
A Language for Human Action, G. Guerra-Filho and Y.
Aloimonos, May, pp. 42-51.
I
IEEE 1667
Authentication in Transient Storage Device Attachments
[Security], D. Rich, Apr., pp. 102-104.
IEEE Computer Society
CHC61 Sites Highlight “Unsung Heroes” in 2007 [CS
Connection], B. Ward, Mar., pp. 77-79.
Computer Recognizes Expert Reviewers [CS Connection],
C.K. Chang, Dec., pp. 64-65.
Computer Society Announces Larson, UPE, and OCA Student
Winners [CS Connection], B. Ward, Apr., pp. 89-91.
Computer Society and IEEE Foundation Offer Cash Prizes at
Intel Science Fair [CS Connection], B. Ward, Mar., pp. 77-79.
Computer Society Launches New Certification Program [CS
Connection], B. Ward, Dec., pp. 64-65.
Computer Society Recognizes Outstanding Professionals, [CS
Connection], B. Ward, May, pp. 85-88.
Computer Society Summer and Fall Conferences, [CS
Connection], B. Ward, May, pp. 85-88.
Edward Seidel Honored with Sidney Fernbach Award [CS
Connection], B. Ward, Apr., pp. 89-91.
Hosaka and Spielberg Named Winners of 2006 Computer
Pioneer Award [CS Connection], B. Ward, Feb., pp. 73-77.
IEEE Computer Society Elections, Sept., pp. 69-76.
An Interesting Year [President’s Message], M.R. Williams,
Dec., pp. 6-7.
James Pomerene Garners Joint IEEE/ACM Award [CS
Connection], B. Ward, Apr., pp. 89-91.
My Vision for Computer [EIC’s Message], C.K. Chang, Jan.,
pp. 7-8.
Tadashi Watanabe Receives 2006 Seymour Cray Award [CS
Connection], B. Ward, Apr., pp. 89-91.
A Year of Decision [President’s Message], M.R. Williams, Jan.,
pp. 9-11.
Image retrieval systems
From Pixels to Semantic Spaces: Advances in Content-Based
Image Retrieval, N. Vasconcelos, July, pp. 20-26.
Image wall
The Shannon Portal Installation: Interaction Design for Public
Places, L. Ciolfi et al., July, pp. 64-71.
Immersidata
Immersidata Analysis: Four Case Studies, C. Shahabi et al.,
July, pp. 45-52.
iMouse
iMouse: An Integrated Mobile Surveillance and Wireless
Sensor System, Y.-C. Tseng et al., June, pp. 60-66.
Impulse C
Trident: From High-Level Language to Hardware Circuitry,
J.L. Tripp et al., Mar., pp. 28-37.
In situ computing
Five Enablers for Mobile 2.0 [Invisible Computing], W.G.
Griswold, Oct., pp. 96-98.
Information overload
An Information Avalanche [Web Technologies], V.G. Cerf,
Jan., pp. 104-105.
Managing E-Mail Overload: Solutions and Future Challenges,
D. Schuff et al., Feb., pp. 31-36.
Information storage and retrieval
Measuring Data Management Practice Maturity: A Community’s Self-Assessment, P. Aiken et al., Apr.,
pp. 42-50.
Privacy-Preserving Data Mining Systems, N.
Zhang and W. Zhao, Apr., pp. 52-58.
Instruction set architectures
Embracing and Extending 20th-Century
Instruction Set Architectures, J. Gebis and D.
Patterson, Apr., pp. 68-75.
Intelligent Network
Evolution of SOA Concepts in Telecommunications, T. Magedanz, et al., Nov., pp. 46-50.
Service Is in the Eyes of the Beholder, T. Margaria,
Nov., pp. 33-37.
Interactive displays
The Shannon Portal Installation: Interaction
Design for Public Places, L. Ciolfi et al., July,
pp. 64-71.
Internet
An Information Avalanche [Web Technologies],
V.G. Cerf, Jan., pp. 104-105.
Reengineering the Internet for Better Security, M.
Parameswaran et al., Jan., pp. 40-44.
These Are Not Your Father’s Widgets [News], G. Lawton,
July, pp. 10-13.
Internet communities
Social Scripting for the Web [Invisible Computing], T. Lau,
June, pp. 96-98.
Internet Protocol traceback
Marking Technique to Isolate Boundary Router and Attacker,
V. Vijairaghavan et al., Feb., pp. 54-58.
Intrusion detection
Natural-Language Processing for Intrusion Detection
[Security], A. Stone, Dec., pp. 103-105.
Invisible computing
Consumer Electronics Meets Distributed Storage [Invisible
Computing], D. Peek and J. Flinn, Feb., pp. 93-95.
Five Enablers for Mobile 2.0 [Invisible Computing], W.G.
Griswold, Oct., pp. 96-98.
How-To Web Pages [Invisible Computing], C. Torrey and
D.W. McDonald, Aug., pp. 96-97.
Predestination: Where Do You Want to Go Today? [Invisible
Computing], J. Krumm and E. Horvitz, Apr., pp. 105-107.
Social Scripting for the Web [Invisible Computing], T. Lau,
June, pp. 96-98.
iPhone
iPhones Target the Tech Elite [Entertainment Computing], M.
Macedonia, June, pp. 94-95.
IP Multimedia Subsystem
Evolution of SOA Concepts in Telecommunications, T.
Magedanz et al., Nov., pp. 46-50.
IP protection
An Information Avalanche [Web Technologies], V.G. Cerf,
Jan., pp. 104-105.
IT architecture
Creating Business Value through Flexible IT Architecture, J.
Helbig and A. Scherdin, Nov., pp. 55-56.
IT education
Incorporating a Variable-Expertise-Level System in IT Course
Modules [The Profession], N. Harkiolakis, Apr., pp. 116,
114-115.
IT systems perspectives
The Current State of Business Intelligence, H.J. Watson and
B.H. Wixom, Sept., pp. 96-99.
Enterprise Security for Web 2.0 [IT Systems Perspectives],
M.A. Davidson, and E. Yoran, Nov., pp. 117-119.
The Impact of Software Growth on the Electronics Industry
[IT Systems Perspectives], M. van Genuchten, Jan., pp. 106-108.
IT Audit: A Critical Business Process [IT Systems Perspectives],
A. Carlin and F. Gallegos, July, pp. 87-89.
Replacing Proprietary Software on the Desktop [IT Systems
Perspectives], D. Hardaway, Mar., pp. 96-97.
Service Management: Driving the Future of IT [IT Systems
Perspectives], B. Clacy and B. Jennings, May, pp. 98-100.
J
J2ME technology
Challenges in Securing Networked J2ME Applications, A.N.
Klingsheim et al., Feb., pp. 24-30.
Jacobi method
Sparse Matrix Computations on Reconfigurable Hardware,
V.K. Prasanna and G.R. Morris, Mar., pp. 58-64.
Java Application Building Center (jABC)
Full Life-Cycle Support for End-to-End Processes,
B. Steffen and P. Narayan, Nov., pp. 64-73.
JouleSort benchmark
Models and Metrics to Enable Energy-Efficiency
Optimizations, S. Rivoire et al., Dec., pp. 39-48.
K
Knowledge management
Managing E-Mail Overload: Solutions and Future
Challenges, D. Schuff et al., Feb., pp. 31-36.
Koala
Social Scripting for the Web [Invisible Computing], T. Lau, June, pp. 96-98.
L
Life cycle aware computing
Life Cycle Aware Computing: Reusing Silicon
Technology, J.Y. Oliver et al., Dec., pp. 56-61.
Linguistics
A Language for Human Action, G. Guerra-Filho and Y.
Aloimonos, May, pp. 42-51.
Location-based services
Predestination: Where Do You Want to Go Today? [Invisible
Computing], J. Krumm and E. Horvitz, Apr., pp. 105-107.
LURCH
The Strangest Thing About Software, T. Menzies et al., Jan.,
pp. 54-60.
M
Machine learning
Search Engines that Learn from Implicit Feedback, T.
Joachims and F. Radlinski, Aug., pp. 34-40.
Magic paper
Magic Paper: Sketch-Understanding Research, R. Davis, Sept.,
pp. 34-41.
Malware
News Briefs, L.D. Paulson, July, pp. 17-19.
Reengineering the Internet for Better Security, M.
Parameswaran et al., Jan., pp. 40-44.
Mandelbugs
Fighting Bugs: Remove, Retry, Replicate, and Rejuvenate
[Software Technologies], M. Grottke and K.S. Trivedi, Feb.,
pp. 107-109.
Metadata
Measuring Data Management Practice Maturity: A
Community’s Self-Assessment, P. Aiken et al., Apr., pp. 42-50.
Micropayments
Resolving the Micropayment Problem [Security], M.
Tripunitara and T. Messerges, Feb., pp. 104-106.
Mitrion-C
Trident: From High-Level Language to Hardware Circuitry,
J.L. Tripp et al., Mar., pp. 28-37.
MoProSoft model
Software Process Improvement: The Competisoft Project,
H. Oktaba et al., Oct., pp. 21-28.
Mobile computing
4G Wireless Begins to Take Shape [News], S. Ortiz Jr., Nov.,
pp. 18-21.
Adaptive QoS for Mobile Web Services through Cross-Layer
Communication, M. Tian et al., Feb., pp. 59-63.
Company Develops Handheld with Flexible Screen [News],
L.D. Paulson, Apr., pp. 21-23.
Data Consistency for Cooperative Caching in Mobile
Environments, J. Cao et al., Apr., pp. 60-66.
Enhancing the User Experience in Mobile Phones
[Entertainment Computing], S.R. Subramanya and B.K.
Yi, Dec., pp. 114-117.
Five Enablers for Mobile 2.0 [Invisible Computing], W.G.
Griswold, Oct., pp. 96-98.
iMouse: An Integrated Mobile Surveillance and Wireless
Sensor System, Y.-C. Tseng et al., June, pp. 60-66.
iPhones Target the Tech Elite [Entertainment Computing], M.
Macedonia, June, pp. 94-95.
New Interfaces at the Touch of a Fingertip [News], S.J.
Vaughan-Nichols, Aug., pp. 12-15.
A New Virtual Private Network for Today’s Mobile World,
[Technology News], K. Heyman, Dec., pp. 17-19.
Predestination: Where Do You Want to Go Today? [Invisible
Computing], J. Krumm and E. Horvitz, Apr., pp. 105-107.
Software-Defined Radio Prospects for Multistandard Mobile
Phones, U. Ramacher, Oct., pp. 62-69.
Toward Mobile Services: Three Approaches, J. Bosch, Nov.,
pp. 51-53.
Mobile search
Deciphering Trends in Mobile Search, M. Kamvar and S.
Baluja, Aug., pp. 58-62.
Model-based design
Embedded System Design for Automotive Applications, A.
Sangiovanni-Vincentelli and M. Di Natale, Oct., pp. 42-51.
Full Life-Cycle Support for End-to-End Processes, B. Steffen
and P. Narayan, Nov., pp. 64-73.
MuSIC-1 chip
Software-Defined Radio Prospects for Multistandard Mobile
Phones, U. Ramacher, Oct., pp. 62-69.
Multicore processors
For Programmers, Multicore Chips Mean Multiple Challenges
[News], D. Geer, Sept., pp. 17-19.
Isolation in Commodity Multicore Processors, N. Aggarwal
et al., June, pp. 49-59.
An Open Source Environment for Cell Broadband Engine
System Software, M. Gschwind et al., June, pp. 37-47.
Multimedia data
An Interactive Multimedia Diary for the Home, G.C. de Silva
et al., May, pp. 52-59.
Multiprocessing
The Good News and the Bad News [Embedded Computing],
W. Wolf, Nov., pp. 104-105.
N
NIST Standards
Managing Enterprise Security Risk with NIST Standards
[Security], R. Ross, Aug., pp. 88-91.
Nanotechnology
Architectures for Silicon Nanoelectronics and Beyond, R.I.
Bahar et al., Jan., pp. 25-33.
Nanodevice Increases Optical Networks’ Bandwidth [News],
L.D. Paulson, Apr., pp. 21-23.
Natural-language processing
Natural-Language Processing for Intrusion Detection
[Security], A. Stone, Dec., pp. 103-105.
NetBeans IDE
Full Life-Cycle Support for End-to-End Processes, B. Steffen
and P. Narayan, Nov., pp. 64-73.
Network security
Cryptography on a Speck of Dust, J.-P. Kaps et al., Feb., pp.
38-44.
Marking Technique to Isolate Boundary Router and Attacker,
V. Vijairaghavan et al., Feb., pp. 54-58.
Trust Management in Distributed Systems, H. Li and M.
Singhal, Feb., pp. 45-53.
Neural networks
Handwriting Recognition: Tablet PC Text Input, J.A. Pittman,
Sept., pp. 49-54.
O
Online communities
Online Experiments: Lessons Learned, R. Kohavi and R.
Longbotham, Sept., pp. 103-105.
Toward a PeopleWeb, R. Ramakrishnan and A. Tomkins,
Aug., pp. 63-72.
Online maps
Taking Online Maps Down to Street Level [Invisible
Computing], L. Vincent, Dec., pp. 118-120.
Open source software
The Economic Motivation of Open Source Software:
Stakeholder Perspectives [News], D. Riehle, Apr., pp. 25-32.
Group Works on Open Source Interoperability [News], L.D.
Paulson and G. Lawton , June, pp. 15-18.
An Open Source Environment for Cell Broadband Engine
System Software, M. Gschwind et al., June, pp. 37-47.
Replacing Proprietary Software on the Desktop [IT Systems
Perspectives], D. Hardaway, Mar., pp. 96-97.
Teaching Software Evolution in Open Source [Computing
Practices], M. Petrenko et al., Nov., pp. 25-31.
Working Class Hero [In Our Time], D.A. Grier, May, pp. 8-10.
Osmot engine
Search Engines that Learn from Implicit Feedback, T.
Joachims and F. Radlinski, Aug., pp. 34-40.
Outsourcing
The Changing World of Outsourcing, [Industry Trends], N.
Leavitt, Dec., pp. 13-16.
P
POEM
Toward the Realization of Policy-Oriented Enterprise
Management, M. Kaiser, Nov., pp. 57-63.
Processor reuse
Life Cycle Aware Computing: Reusing Silicon Technology,
J.Y. Oliver et al., Dec., pp. 56-61.
PQS modeling framework
Process Query Systems, G. Cybenko and V.H. Berk, Jan., pp.
62-70.
Pairwise preferences
Search Engines that Learn from Implicit Feedback, T.
Joachims and F. Radlinski, Aug., pp. 34-40.
Passive optical scatterers
3D Display Using Passive Optical Scatterers, S.K. Nayar and
V.N. Anand, July, pp. 54-63.
Password-based authentication
Password-Based Authentication: Preventing Dictionary
Attacks, S. Chakrabarti and M. Singhal, June, pp. 68-74.
PeopleWeb
Toward a PeopleWeb, R. Ramakrishnan and A. Tomkins,
Aug., pp. 63-72.
Perrow-class failures
Conquering Complexity [Software Technologies] G.J.
Holzmann, Dec., pp. 111-113.
Personalized Web search
A Community-Based Approach to Personalizing Web Search,
B. Smyth, Aug., pp. 42-50.
Pervasive computing
Cryptography on a Speck of Dust, J.-P. Kaps et al., Feb., pp.
38-44.
PowerPC
Embracing and Extending 20th-Century Instruction Set
Architectures, J. Gebis and D. Patterson, Apr., pp. 68-75.
Privacy
Privacy-Preserving Data Mining Systems, N. Zhang and W.
Zhao, Apr., pp. 52-58.
Process detection
Process Query Systems, G. Cybenko and V.H. Berk, Jan., pp.
62-70.
Programming paradigms
The Inevitable Cycle: Graphical Tools and Programming
Paradigms, J. Soukup and M. Soukup, Aug., pp. 24-30.
Public-key cryptosystems
Cryptography on a Speck of Dust, J.-P. Kaps et al., Feb., pp.
38-44.
R
Reconfigurable computing
Achieving High Performance with FPGA-Based Computing,
M.C. Herbordt et al., Mar., pp. 50-57.
High-Performance Reconfigurable Computing [Guest Editors’
Introduction], D. Buell et al., Mar., pp. 23-27.
Software-Defined Radio Prospects for Multistandard Mobile
Phones, U. Ramacher, Oct., pp. 62-69.
Sparse Matrix Computations on Reconfigurable Hardware,
V.K. Prasanna and G.R. Morris, Mar., pp. 58-64.
Trident: From High-Level Language to Hardware Circuitry,
J.L. Tripp et al., Mar., pp. 28-37.
Using FPGA Devices to Accelerate Biomolecular Simulations,
S.R. Alam et al., Mar., pp. 66-73.
Vforce: An Extensible Framework for Reconfigurable
Supercomputing, N. Moore et al., Mar., pp. 39-49.
Robotics
AQUA: An Amphibious Autonomous Robot, G. Dudek et al.,
Jan., pp. 46-53.
Automated Killers and the Computing Profession [The
Profession], N. Sharkey, Nov., pp. 124, 122-123.
S
Safety-critical software
Unsafe Standardization [Standards], M. Thomas, Nov., pp.
109-111.
SAP enterprise SOA
Toward the Realization of Policy-Oriented Enterprise
Management, M. Kaiser, Nov., pp. 57-63.
Scanning technologies
3D Body Scanning and Healthcare Applications, P. Treleaven
and J. Wells, July, pp. 28-34.
Scenario-oriented computing
A New Era of Performance Evaluation [Perspectives], S.M.
Pieper et al., Sept., pp. 23-30.
Schema matching
Database Conceptual Schema Matching [Software Technologies], M.A. Casanova, Oct., pp. 102-104.
Scientific workflows
Examining the Challenges of Scientific Workflows
[Computing Practices], Y. Gil et al., Dec., pp.
24-32.
SDR baseband processors
Software-Defined Radio Prospects for Multistandard Mobile Phones, U. Ramacher, Oct.,
pp. 62-69.
Search
A Community-Based Approach to Personalizing
Web Search, B. Smyth, Aug., pp. 42-50.
Deciphering Trends in Mobile Search, M. Kamvar
and S. Baluja, Aug., pp. 58-62.
Enterprise Security for Web 2.0 [IT Systems
Perspectives], M.A. Davidson, and E. Yoran,
Nov., pp. 117-119.
Search Engines that Learn from Implicit Feedback,
T. Joachims and F. Radlinski, Aug., pp. 34-40.
Search: The New Incarnations [From the Area
Editor], N. Ramakrishnan, Aug., pp. 31-32.
Sponsored Search: Is Money a Motivator for Providing
Relevant Results?, B.J. Jansen and A. Spink, Aug., pp. 52-57.
Toward a PeopleWeb, R. Ramakrishnan and A. Tomkins,
Aug., pp. 63-72.
Security
Authentication in Transient Storage Device Attachments
[Security], D. Rich, Apr., pp. 102-104.
The Case for Flexible NIST Security Standards, F. Keblawi
and D. Sullivan, June, pp. 19-26.
Challenges in Securing Networked J2ME Applications, A.N.
Klingsheim et al., Feb., pp. 24-30.
Discryption: Internal Hard-Disk Encryption for Secure
Storage [Security], L. Hars, June, pp. 103-105.
IT Audit: A Critical Business Process [IT Systems Perspectives],
A. Carlin and F. Gallegos, July, pp. 87-89.
Managing Enterprise Security Risk with NIST Standards
[Security], R. Ross, Aug., pp. 88-91.
Natural-Language Processing for Intrusion Detection
[Security], A. Stone, Dec., pp. 103-105.
Password-Based Authentication: Preventing Dictionary
Attacks, S. Chakrabarti and M. Singhal, June, pp. 68-74.
Process Query Systems, G. Cybenko and V.H. Berk, Jan., pp.
62-70.
Protecting Networks by Controlling Access [News], S. Ortiz
Jr., Aug., pp. 16-19.
Reengineering the Internet for Better Security, M.
Parameswaran et al., Jan., pp. 40-44.
Replacing Lost or Stolen E-Passports [Security], J. Yong and
E. Bertino, Oct., pp. 89-91.
Resolving the Micropayment Problem [Security], M.
Tripunitara and T. Messerges, Feb., pp. 104-106.
Stronger Domain Name System Thwarts Root-Server Attacks
[News], G. Lawton, May, pp. 14-17.
Web 2.0 Creates Security Challenges [News], G. Lawton, Oct.,
pp. 13-16.
Semantic Web
Toward a Social Semantic Web [Web Technologies], A.
Mikroyannidis, Nov., pp. 113-115.
SensorMap
SensorMap for Wide-Area Sensor Webs [Embedded
Computing], S. Nath et al., July, pp. 90-93.
Service orientation
Component Contracts in Service-Oriented Architectures, F.
Curbera, Nov., pp. 74-80.
Creating Business Value through Flexible IT Architecture, J.
Helbig and A. Scherdin, Nov., pp. 55-56.
An Era of Change-Tolerant Systems [Software Technologies],
S. Bohner, June, pp. 100-102.
Evolution of SOA Concepts in Telecommunications, T.
Magedanz, et al., Nov., pp. 46-50.
The Fractal Nature of Web Services [Web Technologies], C.
Bussler, Mar., pp. 93-95.
Full Life-Cycle Support for End-to-End Processes, B. Steffen
and P. Narayan, Nov., pp. 64-73.
Getting on Board the Enterprise Service Bus [News], S. Ortiz
Jr., Apr., pp. 15-17.
New Paradigms for Next-Generation E-Government Projects,
S. Friedrichs and S. Jung, Nov., pp. 53-55.
Service Is in the Eyes of the Beholder, T. Margaria, Nov., pp.
33-37.
Service-Oriented Computing: State of the Art and Research
Challenges, M.P. Papazoglou et al., Nov., pp. 38-45.
Steps Toward a Science of Service Systems, J.
Spohrer et al., Jan., pp. 71-77.
Toward Mobile Services: Three Approaches, J.
Bosch, Nov., pp. 51-53.
Toward the Realization of Policy-Oriented
Enterprise Management, M. Kaiser, Nov., pp.
57-63.
Service provider certification
Reengineering the Internet for Better Security, M.
Parameswaran et al., Jan., pp. 40-44.
Silicon devices
Architectures for Silicon Nanoelectronics and
Beyond, R.I. Bahar et al., Jan., pp. 25-33.
SIMD processors
Embracing and Extending 20th-Century
Instruction Set Architectures, J. Gebis and D.
Patterson, Apr., pp. 68-75.
SIMNET
Generation 3D: Living in Virtual Worlds
[Entertainment Computing], M. Macedonia,
Oct., pp. 99-101.
Situational awareness systems
Holistic Sensing and Active Displays for Intelligent Driver
Support Systems, M.M. Trivedi and S.Y. Cheng, May, pp.
60-68.
Sketch-understanding systems
Magic Paper: Sketch-Understanding Research, R. Davis, Sept.,
pp. 34-41.
Smart home
An Interactive Multimedia Diary for the Home, G.C. de Silva
et al., May, pp. 52-59.
SMS
SMS: The Short Message Service [How Things Work], J.
Brown et al., Dec., pp. 106-110.
SOAs
Component Contracts in Service-Oriented Architectures, F.
Curbera, Nov., pp. 74-80.
SOC paradigm
Service-Oriented Computing: State of the Art and Research
Challenges, M.P. Papazoglou et al., Nov., pp. 38-45.
SoCs
Boosting Debugging Support for Complex Systems on Chip,
A. Mayer et al., Apr., pp. 76-81.
Social Web
Social Scripting for the Web [Invisible Computing], T. Lau,
June, pp. 96-98.
Toward a Social Semantic Web [Web Technologies], A.
Mikroyannidis, Nov., pp. 113-115.
Software-defined radio technologies
Software-Defined Radio Prospects for Multistandard Mobile
Phones, U. Ramacher, Oct., pp. 62-69.
Software development
The Chimera of Software Quality [The Profession], L. Hatton,
Aug., pp. 104, 102-103.
A Data Integration Broker for Healthcare Systems, D. Budgen
et al., Apr., pp. 34-41.
The Economic Motivation of Open Source Software:
Stakeholder Perspectives [News], D. Riehle, Apr., pp. 25-32.
Empirical Test Observations in Client-Server Systems, L.
Hatton, May, pp. 24-29.
Fighting Bugs: Remove, Retry, Replicate, and Rejuvenate
[Software Technologies], M. Grottke and K.S. Trivedi, Feb.,
pp. 107-109.
How Accurately Do Engineers Predict Software Maintenance
Tasks?, L. Hatton, Feb., pp. 64-69.
The Inevitable Cycle: Graphical Tools and Programming
Paradigms, J. Soukup and M. Soukup, Aug., pp. 24-30.
Software Development: What Is the Problem? [The
Profession], R.R. Loka, Feb., pp. 112, 110-111.
Software Process Improvement: The Competisoft Project,
H. Oktaba et al., Oct., pp. 21-28.
The Strangest Thing About Software, T. Menzies et al., Jan.,
pp. 54-60.
Teaching Software Evolution in Open Source [Computing
Practices], M. Petrenko et al., Nov., pp. 25-31.
Unsafe Standardization [Standards], M. Thomas, Nov., pp.
109-111.
Software engineering
Designing for Software’s Social Complexity, J.L. Fiadeiro, Jan.,
pp. 34-39.
The Impact of Software Growth on the Electronics Industry
[IT Systems Perspectives], M. van Genuchten, Jan., pp. 106-108.
Software technologies
Conquering Complexity [Software Technologies] G.J.
Holzmann, Dec., pp. 111-113.
Database Conceptual Schema Matching [Software
Technologies], M.A. Casanova et al., Oct., pp. 102-104.
An Era of Change-Tolerant Systems [Software Technologies],
S. Bohner, June, pp. 100-102.
Fighting Bugs: Remove, Retry, Replicate, and Rejuvenate
[Software Technologies], M. Grottke and K.S. Trivedi, Feb.,
pp. 107-109.
How Business Goals Drive Architectural Design [Software
Technologies], R.S. Sangwan and C.J. Neill, Aug., pp. 85-87.
Swarms and Swarm Intelligence [Software Technologies],
M.G. Hinchey et al., Apr., pp. 111-113.
Sponsored search
Click Fraud [Web Technologies], B.J. Jansen, July, pp. 85-86.
Sponsored Search: Is Money a Motivator for Providing
Relevant Results?, B.J. Jansen and A. Spink, Aug., pp. 52-57.
Squirrel system
Five Enablers for Mobile 2.0 [Invisible Computing], W.G.
Griswold, Oct., pp. 96-98.
SRC Carte
Trident: From High-Level Language to Hardware Circuitry,
J.L. Tripp et al., Mar., pp. 28-37.
Standards
Authentication in Transient Storage Device Attachments
[Security], D. Rich, Apr., pp. 102-104.
The Case for Flexible NIST Security Standards, F. Keblawi
and D. Sullivan, June, pp. 19-26.
Enterprise, Systems, and Software Engineering—The Need
for Integration [Standards], P. Joannou, May, pp. 103-105.
Fault-Tolerant CORBA: From Specification to Reality
[Standards], P. Narasimhan, Jan., pp. 110-112.
Software Process Improvement: The Competisoft Project,
H. Oktaba et al., Oct., pp. 21-28.
Standards, Agility, and Engineering, F. Coallier, Sept., pp. 100-102.
Standards Confusion and Harmonization [Standards], J.M.
Voas and P.A. Laplante, July, pp. 94-96.
Unsafe Standardization [Standards], M. Thomas, Nov., pp.
109-111.
Stereo-vision systems
3D Vision: Developing an Embedded Stereo-Vision System
[Embedded Computing], J.I. Woodfill et al., May, pp. 106-108.
Supercomputing
The Green500 List: Encouraging Sustainable Supercomputing,
W. C. Feng and K. Cameron, Dec., pp. 50-55.
Trident: From High-Level Language to Hardware Circuitry,
J.L. Tripp et al., Mar., pp. 28-37.
Vforce: An Extensible Framework for Reconfigurable
Supercomputing, N. Moore et al., Mar., pp. 39-49.
Swarm technologies
Swarms and Swarm Intelligence [Software Technologies],
M.G. Hinchey et al., Apr., pp. 111-113.
System design and development
A Communication Support System for Older People with
Dementia, N. Alm et al., May, pp. 35-41.
The Discipline of Embedded Systems Design, T.A. Henzinger
and J. Sifakis, Oct., pp. 32-40.
Holistic Sensing and Active Displays for Intelligent Driver
Support Systems, M.M. Trivedi and S.Y. Cheng, May, pp.
60-68.
How Business Goals Drive Architectural Design [Software
Technologies], R.S. Sangwan and C.J. Neill, Aug., pp. 85-87.
Human-Centered Computing: Toward a Human Revolution
[Guest Editors’ Introduction], A. Jaimes et al., May, pp. 30-34.
An Interactive Multimedia Diary for the Home, G.C. de Silva
et al., May, pp. 52-59.
T
Tablet PCs
Classroom Presenter: Enhancing Interactive Education with
Digital Ink, R. Anderson et al., Sept., pp. 56-61.
Facilitating Pedagogical Practices through a Large-Scale Tablet
PC Deployment, J.G. Tront, Sept., pp. 62-68.
Handwriting Recognition: Tablet PC Text Input, J.A. Pittman,
Sept., pp. 49-54.
Ink, Improvisation, and Interactive Engagement: Learning
with Tablets, J. Roschelle et al., Sept., pp. 42-48.
Magic Paper: Sketch-Understanding Research, R. Davis, Sept.,
pp. 34-41.
Tablet PC Technology: The Next Generation [Guest Editors’
Introduction], J. Prey and A. Weaver, Sept., pp. 32-33.
Telecommunications industry
Evolution of SOA Concepts in Telecommunications, T.
Magedanz et al., Nov., pp. 46-50.
Transient storage devices
Authentication in Transient Storage Device Attachments
[Security], D. Rich, Apr., pp. 102-104.
Trident compiler
Trident: From High-Level Language to Hardware Circuitry,
J.L. Tripp et al., Mar., pp. 28-37.
Trust management
Trust Management in Distributed Systems, H. Li and M.
Singhal, Feb., pp. 45-53.
U
User-centered system design
Human-Centered Computing: Toward a Human Revolution
[Guest Editors’ Introduction], A. Jaimes et al., May, pp. 30-34.
User interfaces
Enhancing the User Experience in Mobile Phones
[Entertainment Computing], S.R. Subramanya and B.K.
Yi, Dec., pp. 114-117.
V
Vector architecture
Embracing and Extending 20th-Century Instruction Set
Architectures, J. Gebis and D. Patterson, Apr., pp. 68-75.
Vforce framework
Vforce: An Extensible Framework for Reconfigurable
Supercomputing, N. Moore et al., Mar., pp. 39-49.
Virtual reality
Generation 3D: Living in Virtual Worlds [Entertainment
Computing], M. Macedonia, Oct., pp. 99-101.
Immersidata Analysis: Four Case Studies, C. Shahabi et al.,
July, pp. 45-52.
Powering Down the Computing Infrastructure [News], G.
Lawton, Feb., pp. 16-19.
Virtual Reality: How Much Immersion Is Enough?, D.A.
Bowman and R.P. McMahan, July, pp. 36-43.
Virtual Reality Program Eases Amputees’ Phantom Pain, L.D.
Paulson, Jan., pp. 22-24.
Volumetric displays
3D Display Using Passive Optical Scatterers, S.K. Nayar and
V.N. Anand, July, pp. 54-63.
VPNs
A New Virtual Private Network for Today’s Mobile World
[Technology News], K. Heyman, Dec., pp. 17-19.
W
WS-QoS framework
Adaptive QoS for Mobile Web Services through Cross-Layer
Communication, M. Tian et al., Feb., pp. 59-63.
Web 2.0
Building Web 2.0 [Web Technologies], K.-J. Lin, May, pp.
101-102.
Enterprise Security for Web 2.0 [IT Systems Perspectives],
M.A. Davidson and E. Yoran, Nov., pp. 117-119.
Five Enablers for Mobile 2.0 [Invisible Computing], W.G.
Griswold, Oct., pp. 96-98.
Toward a Social Semantic Web [Web Technologies], A.
Mikroyannidis, Nov., pp. 113-115.
Web 2.0 Creates Security Challenges [News], G. Lawton, Oct.,
pp. 13-16.
Web applications
How-To Web Pages [Invisible Computing], C. Torrey and
D.W. McDonald, Aug., pp. 96-97.
Replacing Proprietary Software on the Desktop [IT Systems
Perspectives], D. Hardaway, Mar., pp. 96-97.
Social Scripting for the Web [Invisible Computing], T. Lau,
June, pp. 96-98.
Web services
Adaptive QoS for Mobile Web Services through Cross-Layer
Communication, M. Tian et al., Feb., pp. 59-63.
Component Contracts in Service-Oriented Architectures, F.
Curbera, Nov., pp. 74-80.
Evolution of SOA Concepts in Telecommunications, T.
Magedanz et al., Nov., pp. 46-50.
The Fractal Nature of Web Services [Web
Technologies], C. Bussler, Mar., pp. 93-95.
Service Is in the Eyes of the Beholder, T. Margaria,
Nov., pp. 33-37.
Service-Oriented Computing: State of the Art and
Research Challenges, M.P. Papazoglou et al.,
Nov., pp. 38-45.
Toward Mobile Services: Three Approaches, J.
Bosch, Nov., pp. 51-53.
Toward the Realization of Policy-Oriented
Enterprise Management, M. Kaiser, Nov., pp.
57-63.
Web technologies
Building Web 2.0 [Web Technologies], K.-J. Lin,
May, pp. 101-102.
Click Fraud [Web Technologies], B.J. Jansen, July,
pp. 85-86.
The Fractal Nature of Web Services [Web
Technologies], C. Bussler, Mar., pp. 93-95.
An Information Avalanche [Web Technologies],
V.G. Cerf, Jan., pp. 104-105.
Online Experiments: Lessons Learned, R. Kohavi and R.
Longbotham, Sept., pp. 103-105.
Toward a Social Semantic Web [Web Technologies], A.
Mikroyannidis, Nov., pp. 113-115.
Wi-Fi
News Briefs, L.D. Paulson, July, pp. 17-19.
Wi-Fi—The Nimble Musician in Your Laptop [How Things
Work], D.G. Leeper, Apr., pp. 108-110.
Wireless technology
4G Wireless Begins to Take Shape [News], S. Ortiz Jr., Nov.,
pp. 18-21.
iMouse: An Integrated Mobile Surveillance and Wireless
Sensor System, Y.-C. Tseng et al., June, pp. 60-66.
Writing systems
The Profession as a Culture Killer [The Profession], N.
Holmes, Sept., pp. 112, 110-111.
AUTHOR INDEX
A
Agarwal, P.K., see S.R. Alam, Mar., pp. 66-73.
Aggarwal, N., P. Ranganathan, N.P. Jouppi, and J.E. Smith,
Isolation in Commodity Multicore Processors, June, pp.
49-59.
Aiken, P., M.D. Allen, B. Parker, and A. Mattia, Measuring Data Management Practice Maturity: A Community’s Self-Assessment, Apr., pp. 42-50.
Aizawa, K., see G.C. de Silva, May, pp. 52-59.
Akella, V., see J. Oliver, Dec., pp. 56-61.
Alam, S.R., P.K. Agarwal, M.C. Smith, J.S. Vetter, and D.
Caliga, Using FPGA Devices to Accelerate Biomolecular
Simulations, Mar., pp. 66-73.
Allen, M.D., see P. Aiken, Apr., pp. 42-50.
Alm, N., R. Dye, G. Gowans, J. Campbell, A. Astell, and M.
Ellis, A Communication Support System for Older People
with Dementia, May, pp. 35-41.
Aloimonos, Y., see G. Guerra-Filho, May, pp. 42-51.
Alquicira, C., see H. Oktaba, Oct., pp. 21-28.
Amirtharajah, R., see J. Oliver, Dec., pp. 56-61.
Anand, V.N., see S.K. Nayar, July, pp. 54-63.
Anderson, R., R. Anderson, P. Davis, N. Linnell, C. Prince, V.
Razmov, and F. Videon, Classroom Presenter: Enhancing
Interactive Education with Digital Ink, Sept., pp. 56-61.
Anderson, R., see R. Anderson, Sept., pp. 56-61.
Astell, A., see N. Alm, May, pp. 35-41.
B
Bahar, R.I., D. Hammerstrom, J. Harlow, W.H. Joyner Jr.,
C. Lau, D. Marculescu, A. Orailoglu, and M. Pedram,
Architectures for Silicon Nanoelectronics and Beyond, Jan.,
pp. 25-33.
Bailey, J., see J. Spohrer, Jan., pp. 71-77.
Baluja, S., see M. Kamvar, Aug., pp. 58-62.
Bannon, L.J., see L. Ciolfi, July, pp. 64-71.
Barroso, L., and U. Hölzle, The Case for Energy-Proportional
Computing, Dec., pp. 33-37.
Bay, J., see J. Sztipanovits, Mar., pp. 90-92.
Berenbach, B., see J. Cleland-Huang, June, pp. 27-35.
Berk, V. H., see G. Cybenko, Jan., pp. 62-70.
Bertino, E., see J. Yong, Oct., pp. 89-91.
Bhatia, L., see V. Vijairaghavan, Feb., pp. 54-58.
Blum, N., see T. Magedanz, Nov., pp. 46-50.
Bohner, S., An Era of Change-Tolerant Systems,
June, pp. 100-102.
Bosch, J., S. Friedrichs, S. Jung, J. Helbig, and A.
Scherdin, Service Orientation in the Enterprise,
Nov., pp. 51-56.
Bowman, D.A., and R.P. McMahan, Virtual
Reality: How Much Immersion Is Enough?, July,
pp. 36-43.
Brauner, D.F., see M.A. Casanova, Oct., pp. 102-104.
Breitman, K.K., see M.A. Casanova, Oct., pp.
102-104.
Brereton, P., see D. Budgen, Apr., pp. 34-41.
Brown, J., B. Shipman, and R. Vetter, SMS: The
Short Message Service, Dec., pp. 106-110.
Brown, T., see J.I. Woodfill, May, pp. 106-108.
Buchta, J., see M. Petrenko, Nov., pp. 25-31.
Buck, R., see J.I. Woodfill, May, pp. 106-108.
Budgen, D., M. Rigby, P. Brereton, and M. Turner,
A Data Integration Broker for Healthcare
Systems, Apr., pp. 34-41.
Buehler, M., see G. Dudek, Jan., pp. 46-53.
Buell, D., T. El-Ghazawi, K. Gaj, and V. Kindratenko, Guest
Editors’ Introduction: High-Performance Reconfigurable
Computing, Mar., pp. 23-27.
Burgstahler, S.E., and R.E. Ladner, Increasing the Participation
of People with Disabilities in Computing Fields, May, pp.
94-97.
Bussler, C., The Fractal Nature of Web Services, Mar., pp. 93-95.
C
Caliga, D., see S.R. Alam, Mar., pp. 66-73.
Cameron, K., see W. Feng, Dec., pp. 50-55.
Campbell, J., see N. Alm, May, pp. 35-41.
Cao, G., see J. Cao, Apr., pp. 60-66.
Cao, J., Y. Zhang, G. Cao, and L. Xie, Data Consistency for
Cooperative Caching in Mobile Environments, Apr., pp.
60-66.
Carlin, A., and F. Gallegos, IT Audit: A Critical Business
Process, July, pp. 87-89.
Casanova, M.A., K.K. Breitman, D.F. Brauner, and A.L.A.
Marins, Database Conceptual Schema Matching, Oct., pp.
102-104.
Cerf, V.G., An Information Avalanche, Jan., pp. 104-105.
Chakrabarti, S., and M. Singhal, Password-Based Authentication: Preventing Dictionary Attacks, June, pp. 68-74.
Chang, C.K.
Computer Recognizes Expert Reviewers, Dec., pp. 64-65.
My Vision for Computer, Jan., pp. 7-8.
Chaudhury, S. Raj., see J. Roschelle, Sept., pp. 42-48.
Cheng, K., see Y. Tseng, June, pp. 60-66.
Cheng, S.Y., see M.M. Trivedi, May, pp. 60-68.
Chong, F., see J. Oliver, Dec., pp. 56-61.
Ciolfi, L., M. Fernstrom, L.J. Bannon, P. Deshpande, P.
Gallagher, C. McGettrick, N. Quinn, and S. Shirley, The
Shannon Portal Installation: Interaction Design for Public
Places, July, pp. 64-71.
Clacy, B., and B. Jennings, Service Management: Driving the
Future of IT, May, pp. 98-100.
Clark, S., see J. Cleland-Huang, June, pp. 27-35.
Cleland-Huang, J., B. Berenbach, S. Clark, R. Settimi, and E.
Romanova, Best Practices for Automated Traceability, June,
pp. 27-35.
Coallier, F., Standards, Agility, and Engineering, Sept., pp.
100-102.
Connelly, K., see K.A. Siek, Feb., pp. 89-92.
Conti, A.,
see M.C. Herbordt, Mar., pp. 50-57.
see N. Moore, Mar., pp. 39-49.
Croson, D., see D. Schuff, Feb., pp. 31-36.
Curbera, F., Component Contracts in Service-Oriented
Architectures, Nov., pp. 74-80.
Cybenko, G., and V.H. Berk, Process Query Systems, Jan., pp.
62-70.
D
D’Arcy, J., see D. Schuff, Feb., pp. 31-36.
Davidson, M.A., and E. Yoran, Enterprise Security for Web
2.0, Nov., pp. 117-119.
Davis, P., see R. Anderson, Sept., pp. 56-61.
Davis, R., Magic Paper: Sketch-Understanding Research,
Sept., pp. 34-41.
de Silva, G.C., T. Yamasaki, and K. Aizawa, An Interactive
Multimedia Diary for the Home, May, pp. 52-59.
Deelman, E., see Y. Gil, Dec., pp. 24-32.
Deshpande, P., see L. Ciolfi, July, pp. 64-71.
Di Natale, M., see A. Sangiovanni-Vincentelli, Oct., pp. 42-51.
DiGiano, C., see J. Roschelle, Sept., pp. 42-48.
Dimitriadis, Y., see J. Roschelle, Sept., pp. 42-48.
DiSabello, D., see M.C. Herbordt, Mar., pp. 50-57.
Dudek, G., P. Giguere, C. Prahacs, S. Saunderson, J. Sattar, L.
Torres-Mendez, M. Jenkin, A. German, A. Hogue, A.
Ripsman, J. Zacher, E. Milios, H. Liu, P. Zhang, M. Buehler,
and C. Georgiades, AQUA: An Amphibious Autonomous
Robot, Jan., pp. 46-53.
Dustdar, S., see M.P. Papazoglou, Nov., pp. 38-45.
Dutkowski, S., see T. Magedanz, Nov., pp. 46-50.
Dye, R., see N. Alm, May, pp. 35-41.
E
El-Ghazawi, T., see D. Buell, Mar., pp. 23-27.
Ellis, M., see N. Alm, May, pp. 35-41.
Ellisman, M., see Y. Gil, Dec., pp. 24-32.
Epstein, J., Electronic Voting, Aug., pp. 92-95.
Erb, D., see M. Gschwind, June, pp. 37-47.
F
Fahringer, T., see Y. Gil, Dec., pp. 24-32.
Fang, F., see M. Parameswaran, Jan., pp. 40-44.
Feng, W., and K. Cameron, The Green500 List: Encouraging
Sustainable Supercomputing, Dec., pp. 50-55.
Fernstrom, M., see L. Ciolfi, July, pp. 64-71.
Fiadeiro, J.L., Designing for Software’s Social Complexity,
Jan., pp. 34-39.
Flinn, J., see D. Peek, Feb., pp. 93-95.
Fox, G., see Y. Gil, Dec., pp. 24-32.
Friedrichs, S., see J. Bosch, Nov., pp. 51-56.
G
Gaj, K., see D. Buell, Mar., pp. 23-27.
Galgali, P., see V. Vijairaghavan, Feb., pp. 54-58.
Gallagher, P., see L. Ciolfi, July, pp. 64-71.
Gallegos, F., see A. Carlin, July, pp. 87-89.
Gannon, D., see Y. Gil, Dec., pp. 24-32.
Garcia, F., see H. Oktaba, Oct., pp. 21-28.
Gatica-Perez, D., see A. Jaimes, May, pp. 30-34.
Gaubatz, G., see J. Kaps, Feb., pp. 38-44.
Gebis, J., and D. Patterson, Embracing and Extending 20th-Century Instruction Set Architectures, Apr., pp. 68-75.
Geer, D. [News]
Improving Data Accessibility with File Area Networks,
Nov., pp. 14-17.
For Programmers, Multicore Chips Mean Multiple
Challenges, Sept., pp. 17-19.
Georgiades, C., see G. Dudek, Jan., pp. 46-53.
German, A., see G. Dudek, Jan., pp. 46-53.
Geyer, R., see J. Oliver, Dec., pp. 56-61.
Giguere, P., see G. Dudek, Jan., pp. 46-53.
Gil, Y., E. Deelman, M. Ellisman, T. Fahringer, G. Fox, D.
Gannon, C. Goble, M. Livny, L. Moreau, and J. Myers,
Examining the Challenges of Scientific Workflows, Dec.,
pp. 24-32.
Goble, C., see Y. Gil, Dec., pp. 24-32.
Gokhale, M.B., see J.L. Tripp, Mar., pp. 28-37.
Golston, J., see D. Talla, Oct., pp. 53-61.
Gordon, G., see J.I. Woodfill, May, pp. 106-108.
Gowans, G., see N. Alm, May, pp. 35-41.
Gramm, A., see M. Tian, Feb., pp. 59-63.
Grier, D.A., [Column]
Annie and the Boys, Aug., pp. 6-9.
The Best Deal in Town, Apr., pp. 8-11.
The Boundaries of Time, July, pp. 5-7.
The Camino Real, June, pp. 6-8.
Controlling the Conversation, Sept., pp. 7-9.
Counting Beans, Nov., pp. 8-10.
Dirty Electricity, Feb., pp. 6-8.
E-Mailing from Armenia, Oct., pp. 8-10.
A Force of Nature, Dec., pp. 8-10.
Outposts, Mar., pp. 8-10.
The Wave of the Future, Jan., pp. 12-14.
Working Class Hero, May, pp. 8-10.
Griswold, W. G., Five Enablers for Mobile 2.0, Oct., pp. 96-98.
Grottke, M., and K. S. Trivedi, Fighting Bugs: Remove, Retry,
Replicate, and Rejuvenate, Feb., pp. 107-109.
Gruhl, D., see J. Spohrer, Jan., pp. 71-77.
Gschwind, M., D. Erb, S. Manning, and M. Nutter, An Open
Source Environment for Cell Broadband Engine System
Software, June, pp. 37-47.
Gu, Y., see M.C. Herbordt, Mar., pp. 50-57.
Guerra-Filho, G., and Y. Aloimonos, A Language for Human
Action, May, pp. 42-51.
H
Hammerstrom, D., see R.I. Bahar, Jan., pp. 25-33.
Hardaway, D., Replacing Proprietary Software on the
Desktop, Mar., pp. 96-97.
Harkiolakis, N., Incorporating a Variable-Expertise-Level
System in IT Course Modules, Apr., pp. 116, 114-115.
Harlow, J., see R.I. Bahar, Jan., pp. 25-33.
Hars, L., Discryption: Internal Hard-Disk Encryption for
Secure Storage, June, pp. 103-105.
Hatton, L.
The Chimera of Software Quality, Aug., pp. 104, 102-103.
Empirical Test Observations in Client-Server Systems, May,
pp. 24-29.
How Accurately Do Engineers Predict Software
Maintenance Tasks?, Feb., pp. 64-69.
Helbig, J., see J. Bosch, Nov., pp. 51-56.
Henzinger, T.A., and J. Sifakis, The Discipline of Embedded
Systems Design, Oct., pp. 32-40.
Herbordt, M.C., T. VanCourt, Y. Gu, B. Sukhwani, A. Conti,
J. Model, and D. DiSabello, Achieving High Performance
with FPGA-Based Computing, Mar., pp. 50-57.
Heyman, K. [News]
New Attack Tricks Antivirus Software, May, pp. 18-20.
A New Virtual Private Network for Today’s Mobile World,
Dec., pp. 17-19.
Hinchey, M.G., R. Sterritt, and C. Rouff, Swarms and Swarm
Intelligence, Apr., pp. 111-113.
Hogue, A., see G. Dudek, Jan., pp. 46-53.
Hole, K.J., see A.N. Klingsheim, Feb., pp. 24-30.
Holmes, N. [Column]
Binary Arithmetic, June, pp. 90-93.
The Computing Profession and Higher Education, Jan., pp.
116, 114-115.
Consciousness and Computers, July, pp. 100, 98-99.
Digital Technology and the Skills Shortage, Mar., pp. 100,
98-99.
The Profession as a Culture Killer, Sept., pp. 112, 110-111.
Holzmann, G., Conquering Complexity, Dec., pp. 111-113.
Hopkins, L., see K.A. Siek, Feb., pp. 89-92.
Horvitz, E., see J. Krumm, Apr., pp. 105-107.
Hsieh, Y., see Y. Tseng, June, pp. 60-66.
Huang, T.S., see A. Jaimes, May, pp. 30-34.
Humphreys, G., see D. Luebke, Feb., pp. 96-100.
Hölzle, U., see L. Barroso, Dec., pp. 33-37.
J
Jaimes, A., D. Gatica-Perez, N. Sebe, and T.S. Huang, Guest
Editors’ Introduction: Human-Centered Computing—
Toward a Human Revolution, May, pp. 30-34.
Jansen, B.J., Click Fraud, July, pp. 85-86.
Jansen, B.J., and A. Spink, Sponsored Search: Is Money a
Motivator for Providing Relevant Results?, Aug., pp. 52-57.
Jenkin, M., see G. Dudek, Jan., pp. 46-53.
Jennings, B., see B. Clacy, May, pp. 98-100.
Joachims, T., and F. Radlinski, Search Engines that Learn from
Implicit Feedback, Aug., pp. 34-40.
Joannou, P., Enterprise, Systems, and Software Engineering—
The Need for Integration, May, pp. 103-105.
Jouppi, N.P., see N. Aggarwal, June, pp. 49-59.
Joyner Jr., W.H., see R.I. Bahar, Jan., pp. 25-33.
Jung, S., see J. Bosch, Nov., pp. 51-56.
Jurasek, D., see J.I. Woodfill, May, pp. 106-108.
K
Kaiser, M., Toward the Realization of Policy-Oriented
Enterprise Management, Nov., pp. 57-63.
Kamvar, M., and S. Baluja, Deciphering Trends in Mobile
Search, Aug., pp. 58-62.
Kaps, J., G. Gaubatz, and B. Sunar, Cryptography on a Speck
of Dust, Feb., pp. 38-44.
Keblawi, F., and D. Sullivan, The Case for Flexible NIST
Security Standards, June, pp. 19-26.
Kindratenko, V., see D. Buell, Mar., pp. 23-27.
King, B.A., and L.D. Paulson, Motion Capture Moves into
New Realms, Sept., pp. 13-16.
King, L.S., see N. Moore, Mar., pp. 39-49.
Klingsheim, A.N., V. Moen, and K.J. Hole,
Challenges in Securing Networked J2ME
Applications, Feb., pp. 24-30.
Knight, J., The Glass Cockpit, Oct., pp. 92-95.
Kohavi, R., and R. Longbotham, Online Experiments: Lessons Learned, Sept., pp. 103-105.
Kozyrakis, C., see S. Rivoire, Dec., pp. 39-48.
Krumm, J., and E. Horvitz, Predestination: Where
Do You Want to Go Today?, Apr., pp. 105-107.
L
Ladner, R.E., see S.E. Burgstahler, May, pp. 94-97.
Laird, C., Taking a Hard-Line Approach to
Encryption, Mar., pp. 13-15.
Laplante, P.A., see J.M. Voas, July, pp. 94-96.
Lau, C., see R.I. Bahar, Jan., pp. 25-33.
Lau, T., Social Scripting for the Web, June, pp. 96-98.
Lawton, G. [News]
The Next Big Thing in Chipmaking, Apr., pp. 18-20.
Powering Down the Computing Infrastructure, Feb., pp.
16-19.
Stronger Domain Name System Thwarts Root-Server
Attacks, May, pp. 14-17.
These Are Not Your Father’s Widgets, July, pp. 10-13.
Web 2.0 Creates Security Challenges, Oct., pp. 13-16.
Leavitt, N. [News]
The Changing World of Outsourcing, Dec., pp. 13-16.
For Wireless USB, the Future Starts Now, July, pp. 14-16.
Vendors Fight Spam’s Sudden Rise, Mar., pp. 16-19.
Leeper, D. G., Wi-Fi—The Nimble Musician in Your Laptop,
Apr., pp. 108-110.
Leeser, M., see N. Moore, Mar., pp. 39-49.
Leymann, F., see M. P. Papazoglou, Nov., pp. 38-45.
Li, H., and M. Singhal, Trust Management in Distributed
Systems, Feb., pp. 45-53.
Lin, K., Building Web 2.0, May, pp. 101-102.
Linnell, N., see R. Anderson, Sept., pp. 56-61.
Liu, H., see G. Dudek, Jan., pp. 46-53.
Liu, J., see S. Nath, July, pp. 90-93.
Liu, J., Computing as an Evolving Discipline: 10
Observations, May, pp. 112, 110-111.
Livny, M., see Y. Gil, Dec., pp. 24-32.
Loka, R.R., Software Development: What Is the Problem?,
Feb., pp. 112, 110-111.
Longbotham, R., see R. Kohavi, Sept., pp. 103-105.
Luebke, D., and G. Humphreys, How GPUs Work, Feb., pp.
96-100.
M
Macedonia, M. [Column]
The Future Arrives ... Finally, Feb., pp. 101-103.
Generation 3D: Living in Virtual Worlds, Oct., pp. 99-101.
iPhones Target the Tech Elite, June, pp. 94-95.
Magedanz, T., N. Blum, and S. Dutkowski, Evolution of SOA
Concepts in Telecommunications, Nov., pp. 46-50.
Maglio, P.P., see J. Spohrer, Jan., pp. 71-77.
Manning, S., see M. Gschwind, June, pp. 37-47.
Marculescu, D., see R.I. Bahar, Jan., pp. 25-33.
Margaria, T., Service Is in the Eyes of the Beholder, Nov., pp.
33-37.
Marins, A.L.A., see M.A. Casanova, Oct., pp. 102-104.
Marsh, T., see C. Shahabi, July, pp. 45-52.
Mattia, A., see P. Aiken, Apr., pp. 42-50.
Mayer, A., H. Siebert, and K.D. McDonald-Maier, Boosting
Debugging Support for Complex Systems on Chip, Apr.,
pp. 76-81.
McDonald, D.W., see C. Torrey, Aug., pp. 96-97.
McDonald-Maier, K.D., see A. Mayer, Apr., pp. 76-81.
McGettrick, C., see L. Ciolfi, July, pp. 64-71.
McLaughlin, M., see C. Shahabi, July, pp. 45-52.
McMahan, R.P., see D. A. Bowman, July, pp. 36-43.
Menzel, S., see K.A. Siek, Feb., pp. 89-92.
Menzies, T., D. Owen and J. Richardson, The Strangest Thing
About Software, Jan., pp. 54-60.
Messerges, T., see M. Tripunitara, Feb., pp. 104-106.
Meza, J., see S. Rivoire, Dec., pp. 39-48.
Mikroyannidis, A., Toward a Social Semantic Web, Nov., pp.
113-115.
Milios, E., see G. Dudek, Jan., pp. 46-53.
Model, J., see M.C. Herbordt, Mar., pp. 50-57.
Moen, V., see A.N. Klingsheim, Feb., pp. 24-30.
Moore, N., A. Conti, M. Leeser, and L.S. King,
Vforce: An Extensible Framework for Reconfigurable Supercomputing, Mar., pp. 39-49.
Moreau, L., see Y. Gil, Dec., pp. 24-32.
Morris, G.R., and V.K. Prasanna, Sparse Matrix
Computations on Reconfigurable Hardware,
Mar., pp. 58-64.
Mun, M., see C. Shahabi, July, pp. 45-52.
Myers, J., see Y. Gil, Dec., pp. 24-32.
N
Narasimhan, P., Fault-Tolerant CORBA: From
Specification to Reality, Jan., pp. 110-112.
Narayan, P., see B. Steffen, Nov., pp. 64-73.
Nath, S., J. Liu, and F. Zhao, SensorMap for Wide-Area Sensor Webs, July, pp. 90-93.
Nayar, S.K., and V.N. Anand, 3D Display Using
Passive Optical Scatterers, July, pp. 54-63.
Neill, C.J., see R.S. Sangwan, Aug., pp. 85-87.
Nutter, M., see M. Gschwind, June, pp. 37-47.
O
Oktaba, H., F. Garcia, M. Piattini, F. Ruiz, F.J. Pino, and C.
Alquicira, Software Process Improvement: The Competisoft
Project, Oct., pp. 21-28.
Oliver, J., R. Amirtharajah, V. Akella, R. Geyer and F. Chong,
Life Cycle Aware Computing: Reusing Silicon Technology,
Dec., pp. 56-61.
Orailoglu, A., see R.I. Bahar, Jan., pp. 25-33.
Ortiz Jr., S. [News]
4G Wireless Begins to Take Shape, Nov., pp. 18-21.
Brain-Computer Interfaces: Where Human and Machine
Meet, Jan., pp. 17-21.
Getting on Board the Enterprise Service Bus, Apr., pp. 15-17.
Protecting Networks by Controlling Access, Aug., pp. 16-19.
Searching the Visual Web, June, pp. 12-14.
Owen, D., see T. Menzies, Jan., pp. 54-60.
P
Papazoglou, M.P., P. Traverso, S. Dustdar, and F. Leymann,
Service-Oriented Computing: State of the Art and Research
Challenges, Nov., pp. 38-45.
Parameswaran, M., X. Zhao, A.B. Whinston, and F. Fang, Reengineering the Internet for Better Security, Jan., pp. 40-44.
Parker, B., see P. Aiken, Apr., pp. 42-50.
Patterson, D., see J. Gebis, Apr., pp. 68-75.
Patton, C., see J. Roschelle, Sept., pp. 42-48.
Paul, J. M., see S. M. Pieper, Sept., pp. 23-30.
Paulson, L.D. [News Briefs]
Company Develops Handheld with Flexible Screen, Apr.,
p. 22.
Company Says Diagonal Wiring Makes Chips Faster, Sept.,
p. 20.
Developing Tomorrow’s Beach Today, Where High Tide
Meets High Tech, Oct., p. 18.
Femtocells Promise Faster Mobile Networks, Oct., p. 18.
Google Surveys Web for Malware, July, p. 18.
High-Tech Mirror Helps Shoppers Reflect on Their Purchases, Aug., p. 21.
Hitachi Researchers Develop Powder-Sized RFID Chips,
May, p. 23.
HP Releases Computer on a Sticker, Feb., p. 21.
IBM Adds “Nothing” to Chips, Improves Performance,
July, p. 17.
IBM Demonstrates Fast Memory-Chip Technology, Apr.,
p. 22.
IEEE Works on Energy-Efficient Ethernet, May, p. 21.
Important Wireless-Spectrum Auction Generates Controversy, Nov., p. 22.
Intel Adds Distance to Wi-Fi, July, p. 19.
Interface Makes a Computer Desktop Look like a Real
Desktop, Dec., p. 21.
Light Pulses Could Improve Optical Communications,
Oct., p. 17.
Making Web Science an Academic Field, Feb., p. 22.
Nanodevice Increases Optical Networks’ Bandwidth, Apr.,
p. 21.
New Approach Will Speed Up USB, Dec., p. 21.
New Attack Works Hard to Avoid Defenses, Aug., p. 20.
New Contender in Networked Storage, Feb., p. 20.
New Eye-Tracking Technology Could Make Billboards
More Effective, July, p. 18.
New Software Visually Displays Musical Structure, May,
p. 22.
New Technique Yields Faster Multicore Chips, Oct., p. 19.
New Technology Creates Lively E-Wallpaper, Nov., p. 23.
New Technology Prevents Click Fraud, Mar., p. 21.
New Technology Transmits Data via Visible Light, Sept.,
p. 21.
Professor Builds Desktop High-Performance Computer,
Nov., p. 22.
Project Improves Mesh Networks, Jan., p. 23.
Researchers Develop “Ballistic Computing” Transistor,
Jan., p. 22.
Researchers Develop Efficient Digital-Camera Design, Mar.,
p. 20.
Researchers Herd Computers to Fight Spyware, Sept.,
p. 22.
Robot Adapts to Changing Conditions, Dec., p. 20.
Scientists Study Ancient Computing Device, Feb., p. 21.
Scrabble Program Wins by Inference, Mar., p. 21.
Silicon Clock Promises Improved Computer Technology,
Mar., p. 22.
Standardization Comes to Virtualization, Dec., p. 22.
Startup Uses Sensors to Save Houseplants, Apr., p. 23.
Spherical System Captures Images from All Directions,
Aug., p. 20.
Sun Makes Java Open Source, Jan., p. 24.
Technique Creates High-Performance Storage Technology,
Aug., p. 21.
Tracking Troubled Turtles with Wireless Technology, Sept.,
p. 21.
Virtual Reality Program Eases Amputees’ Phantom Pain,
Jan., p. 23.
Will Thin Finally Be In?, Nov., p. 24.
Pedram, M., see R.I. Bahar, Jan., pp. 25-33.
Peek, D., and J. Flinn, Consumer Electronics Meets
Distributed Storage, Feb., pp. 93-95.
Peterson, K.D., see J.L. Tripp, Mar., pp. 28-37.
Petrenko, M., D. Poshyvanyk, V. Rajlich, and J. Buchta,
Teaching Software Evolution in Open Source, Nov., pp.
25-31.
Piattini, M., see H. Oktaba, Oct., pp. 21-28.
Pieper, S.M., J.M. Paul, and M.J. Schulte, A New Era of
Performance Evaluation, Sept., pp. 23-30.
Pino, F.J., see H. Oktaba, Oct., pp. 21-28.
Pittman, J.A., Handwriting Recognition: Tablet PC Text
Input, Sept., pp. 49-54.
Poshyvanyk, D., see M. Petrenko, Nov., pp. 25-31.
Prahacs, C., see G. Dudek, Jan., pp. 46-53.
Prasanna, V.K., see G.R. Morris, Mar., pp. 58-64.
Prey, J., and A. Weaver, Guest Editors’ Introduction: Tablet
PC Technology—The Next Generation, Sept., pp. 32-33.
Price, S.M., Supporting Resource-Constrained Collaboration
Environments, June, pp. 108, 106-107.
Prince, C., see R. Anderson, Sept., pp. 56-61.
Q
Quinn, N., see L. Ciolfi, July, pp. 64-71.
R
Radlinski, F., see T. Joachims, Aug., pp. 34-40.
Rajlich, V., see M. Petrenko, Nov., pp. 25-31.
Ramacher, U., Software-Defined Radio Prospects for
Multistandard Mobile Phones, Oct., pp. 62-69.
Ramakrishnan, N., From the Area Editor: Search—The New
Incarnations, Aug., pp. 31-32.
Ramakrishnan, R., and A. Tomkins, Toward a PeopleWeb,
Aug., pp. 63-72.
Ranganathan, P.,
see S. Rivoire, Dec., pp. 39-48.
see N. Aggarwal, June, pp. 49-59.
Razmov, V., see R. Anderson, Sept., pp. 56-61.
Rich, D., Authentication in Transient Storage Device
Attachments, Apr., pp. 102-104.
Richardson, J., see T. Menzies, Jan., pp. 54-60.
Riehle, D., The Economic Motivation of Open Source
Software: Stakeholder Perspectives, Apr., pp. 25-32.
Rigby, M., see D. Budgen, Apr., pp. 34-41.
Ripsman, A., see G. Dudek, Jan., pp. 46-53.
Ritter, H., see M. Tian, Feb., pp. 59-63.
Rivoire, S., M. Shah, P. Ranganathan, C. Kozyrakis, and J.
Meza , Models and Metrics to Enable Energy-Efficiency
Optimizations, Dec., pp. 39-48.
Rizzo, A.A., see C. Shahabi, July, pp. 45-52.
Rohrbough, L., see J. Sztipanovits, Mar., pp. 90-92.
Romanova, E., see J. Cleland-Huang, June, pp. 27-35.
Roschelle, J., D. Tatar, S. Raj. Chaudhury, Y. Dimitriadis, C.
Patton, and C. DiGiano, Ink, Improvisation, and Interactive
Engagement: Learning with Tablets, Sept., pp. 42-48.
Ross, R., Managing Enterprise Security Risk with NIST
Standards, Aug., pp. 88-91.
Rouff, C., see M.G. Hinchey, Apr., pp. 111-113.
Ruiz, F., see H. Oktaba, Oct., pp. 21-28.
S
Sangiovanni-Vincentelli, A., and M. Di Natale, Embedded
System Design for Automotive Applications, Oct., pp. 42-51.
Sangwan, R.S., and C.J. Neill, How Business Goals Drive
Architectural Design, Aug., pp. 85-87.
Santini, S., Making Computers Do More with Less, Dec., pp.
124, 122-123.
Sastry, S., see J. Sztipanovits, Mar., pp. 90-92.
Sattar, J., see G. Dudek, Jan., pp. 46-53.
Saunderson, S., see G. Dudek, Jan., pp. 46-53.
Scherdin, A., see J. Bosch, Nov., pp. 51-56.
Schiller, J.H., see M. Tian, Feb., pp. 59-63.
Schmidt, D.C., see J. Sztipanovits, Mar., pp. 90-92.
Schuff, D., O. Turetken, J. D’Arcy, and D. Croson, Managing
E-Mail Overload: Solutions and Future Challenges, Feb.,
pp. 31-36.
Schulte, M.J., see S.M. Pieper, Sept., pp. 23-30.
Sebe, N., see A. Jaimes, May, pp. 30-34.
Settimi, R., see J. Cleland-Huang, June, pp. 27-35.
Shah, A., see V. Vijairaghavan, Feb., pp. 54-58.
Shah, D., see V. Vijairaghavan, Feb., pp. 54-58.
Shah, M., see S. Rivoire, Dec., pp. 39-48.
Shah, N., see V. Vijairaghavan, Feb., pp. 54-58.
Shahabi, C., T. Marsh, K. Yang, H. Yoon, A.A. Rizzo, M.
McLaughlin, and M. Mun, Immersidata Analysis: Four
Case Studies, July, pp. 45-52.
Sharkey, N., Automated Killers and the Computing
Profession, Nov., pp. 124, 122-123.
Shipman, B., see J. Brown, Dec., pp. 106-110.
Shirley, S., see L. Ciolfi, July, pp. 64-71.
Siebert, H., see A. Mayer, Apr., pp. 76-81.
Siek, K. A., K. Connelly, S. Menzel, and L. Hopkins,
Propagating Diversity through Active Dissemination, Feb.,
pp. 89-92.
Sifakis, J., see T.A. Henzinger, Oct., pp. 32-40.
Singhal, M., see H. Li, Feb., pp. 45-53.
Singhal, M., see S. Chakrabarti, June, pp. 68-74.
Smith, J.E., see N. Aggarwal, June, pp. 49-59.
Smith, M.C., see S.R. Alam, Mar., pp. 66-73.
Smyth, B., A Community-Based Approach to Personalizing
Web Search, Aug., pp. 42-50.
Soukup, J., and M. Soukup, The Inevitable Cycle: Graphical
Tools and Programming Paradigms, Aug., pp. 24-30.
Soukup, M., see J. Soukup, Aug., pp. 24-30.
Sousa, L., see S. Yamagiwa, May, pp. 70-77.
Spink, A., see B.J. Jansen, Aug., pp. 52-57.
Spohrer, J., P.P. Maglio, J. Bailey, and D. Gruhl, Steps Toward
a Science of Service Systems, Jan., pp. 71-77.
Srinivasan, V., see V. Vijairaghavan, Feb., pp. 54-58.
Steffen, B., and P. Narayan, Full Life-Cycle Support for End-to-End Processes, Nov., pp. 64-73.
Sterritt, R., see M.G. Hinchey, Apr., pp. 111-113.
Stone, A., Natural-Language Processing for Intrusion
Detection, Dec., pp. 103-105.
Subramanya, S., and B. Yi, Enhancing the User Experience
in Mobile Phones, Dec., pp. 114-117.
Sukhwani, B., see M.C. Herbordt, Mar., pp. 50-57.
Sullivan, D., see F. Keblawi, June, pp. 19-26.
Sunar, B., see J. Kaps, Feb., pp. 38-44.
Swartout, W., see M. van Lent, Aug., pp. 98-100.
Sztipanovits, J., J. Bay, L. Rohrbough, S. Sastry, D.C. Schmidt,
N. Whitaker, D. Wilson, and D. Winter, Escher: A New
Technology Transitioning Model, Mar., pp. 90-92.
T
Talla, D., and J. Golston, Using DaVinci Technology for
Digital Video Devices, Oct., pp. 53-61.
Tatar, D., see J. Roschelle, Sept., pp. 42-48.
Thomas, M., Unsafe Standardization, Nov., pp. 109-111.
Tian, M., A. Gramm, H. Ritter, J.H. Schiller, and T. Voigt,
Adaptive QoS for Mobile Web Services through
Cross-Layer Communication, Feb., pp. 59-63.
Tomkins, A., see R. Ramakrishnan, Aug., pp. 63-72.
Torres-Mendez, L., see G. Dudek, Jan., pp. 46-53.
Torrey, C., and D.W. McDonald, How-To Web
Pages, Aug., pp. 96-97.
Traverso, P., see M.P. Papazoglou, Nov., pp. 38-45.
Treleaven, P., and J. Wells, 3D Body Scanning and
Healthcare Applications, July, pp. 28-34.
Tripp, J.L., M.B. Gokhale, and K.D. Peterson,
Trident: From High-Level Language to
Hardware Circuitry, Mar., pp. 28-37.
Tripunitara, M., and T. Messerges, Resolving the
Micropayment Problem, Feb., pp. 104-106.
Trivedi, K.S., see M. Grottke, Feb., pp. 107-109.
Trivedi, M.M., and S.Y. Cheng, Holistic Sensing
and Active Displays for Intelligent Driver
Support Systems, May, pp. 60-68.
Tront, J.G., Facilitating Pedagogical Practices
through a Large-Scale Tablet PC Deployment, Sept., pp.
62-68.
Tseng, Y., Y. Wang, K. Cheng, and Y. Hsieh, iMouse: An
Integrated Mobile Surveillance and Wireless Sensor System,
June, pp. 60-66.
Turetken, O., see D. Schuff, Feb., pp. 31-36.
Turner, M., see D. Budgen, Apr., pp. 34-41.
V
Vahid, F., It’s Time to Stop Calling Circuits Hardware, Sept.,
pp. 106-108.
VanCourt, T., see M.C. Herbordt, Mar., pp. 50-57.
van Genuchten, M., The Impact of Software Growth on the
Electronics Industry, Jan., pp. 106-108.
van Genuchten, M., and D. Vogel, Getting Real in the
Classroom, Oct., pp. 108, 106-107.
Van Lent, M., Game Smarts, Apr., pp. 99-101.
Van Lent, M., and W. Swartout, Games: Once More, with
Feeling, Aug., pp. 98-100.
Vasconcelos, N., From Pixels to Semantic Spaces: Advances in
Content-Based Image Retrieval, July, pp. 20-26.
Vaughan-Nichols, S.J., New Interfaces at the Touch of a
Fingertip, Aug., pp. 12-15.
Vetter, J.S., see S. R. Alam, Mar., pp. 66-73.
Vetter, R., see J. Brown, Dec., pp. 106-110.
Videon, F., see R. Anderson, Sept., pp. 56-61.
Vijairaghavan, V., D. Shah, P. Galgali, A. Shah, N. Shah, V.
Srinivasan, and L. Bhatia, Marking Technique to Isolate
Boundary Router and Attacker, Feb., pp. 54-58.
Vincent, L., Taking Online Maps Down to Street Level, Dec.,
pp. 118-120.
Voas, J.M., and P.A. Laplante, Standards Confusion and
Harmonization, July, pp. 94-96.
Vogel, D., see M. van Genuchten, Oct., pp. 108, 106-107.
Voigt, T., see M. Tian, Feb., pp. 59-63.
W
Wang, Y., see Y. Tseng, June, pp. 60-66.
Ward, B. [CS Connection]
Bylaws Changes, Jan., pp. 89-93.
CHC61 Sites Highlight ‘Unsung Heroes’ in 2007, Mar., pp.
77-79.
Computer Science Enrollments Drop in 2006, June, p. 85.
Computer Society Announces 2007 Programs, Jan., pp. 84-87.
Computer Society Announces Larson, UPE, and OCA
Student Winners, Apr., p. 91.
Computer Society Magazines: 2008 Preview, Nov., pp. 81-84.
Computer Society Recognizes Outstanding Professionals,
May, p. 88.
Computer Society Summer and Fall Conferences, May, pp.
85-87.
Edward Seidel Honored with Sidney Fernbach Award, Apr.,
p. 89.
Hosaka and Spielberg Named Winners of 2006 Computer
Pioneer Award, Feb., p. 73.
IEEE Computer Society Appoints Editors in Chief, Aug.,
pp. 77-78.
IEEE Computer Society e-Learning Campus Adds Value to
Membership, Oct., pp. 73-74.
IEEE Computer Society Launches New Certification
Program, Dec., pp. 64-65.
IEEE Computer Society Offers College Scholarships, Aug.,
p. 79.
IEEE Names 2007 Fellows, Feb., pp. 74-75.
IEEE President-Elect Candidates Address Computer Society
Concerns, Sept., pp. 77-83.
James Pomerene Garners Joint IEEE/ACM Award, Apr., p.
90.
Mateo Valero Receives Joint IEEE/ACM Award, Aug., p.
77.
Oregon Teen Wins Computer Society Prize at Intel Science
Fair, July, pp. 77-78.
Society Board Amends Bylaws, July, pp. 79-80.
Society Honors Wilkes and Parnas with 60th Anniversary
Award, Nov., p. 85.
Society Introduces New Technical Task Forces, June, p. 84.
Society Publications Seek Editors in Chief for 2008-2009
Terms, Jan., p. 88.
Software Engineering Volume Translated into Russian,
Oct., p. 75.
Tadashi Watanabe Receives 2006 Seymour Cray Award,
Apr., p. 89.
UCSD’s Smarr Receives Kanai Award, Oct., p. 73.
Watson, H.J., and B.H. Wixom, The Current State of Business
Intelligence, Sept., pp. 96-99.
Weaver, A., see J. Prey, Sept., pp. 32-33.
Wells, J., see P. Treleaven, July, pp. 28-34.
Whinston, A B., see M. Parameswaran, Jan., pp. 40-44.
Whitaker, N., see J. Sztipanovits, Mar., pp. 90-92.
Williams, M.R.
An Interesting Year, Dec., pp. 6-7.
A Year of Decision, Jan., pp. 9-11.
Wilson, D., see J. Sztipanovits, Mar., pp. 90-92.
Winter, D., see J. Sztipanovits, Mar., pp. 90-92.
Wixom, B. H., see H. J. Watson, Sept., pp. 96-99.
Wolf, W.
The Good News and the Bad News, Nov., pp. 104-105.
Guest Editor’s Introduction: The Embedded Systems
Landscape, Oct., pp. 29-31.
Woodfill, J., R. Buck, D. Jurasek, G. Gordon, and T. Brown,
3D Vision: Developing an Embedded Stereo-Vision System,
May, pp. 106-108.
X
Xie, L., see J. Cao, Apr., pp. 60-66.
Y
Yamagiwa, S., and L. Sousa, Caravela: A Novel Stream-Based
Distributed Computing Environment, May, pp. 70-77.
Yamasaki, T., see G. C. de Silva, May, pp. 52-59.
Yang, K., see C. Shahabi, July, pp. 45-52.
Yi, B., see S. Subramanya, Dec., pp. 114-117.
Yong, J., and E. Bertino, Replacing Lost or Stolen E-Passports,
Oct., pp. 89-91.
Yoon, H., see C. Shahabi, July, pp. 45-52.
Yoran, E., see M.A. Davidson, Nov., pp. 117-119.
Z
Zacher, J., see G. Dudek, Jan., pp. 46-53.
Zhang, N., and W. Zhao, Privacy-Preserving Data Mining
Systems, Apr., pp. 52-58.
Zhang, P., see G. Dudek, Jan., pp. 46-53.
Zhang, Y., see J. Cao, Apr., pp. 60-66.
Zhao, F., see S. Nath, July, pp. 90-93.
Zhao, W., see N. Zhang, Apr., pp. 52-58.
Zhao, X., see M. Parameswaran, Jan., pp. 40-44.
Join the IEEE Computer Society online at
www.computer.org/join/
Complete the online application and get
• immediate online access to Computer
• a free e-mail alias — you@computer.org
• free access to 100 online books on technology topics
• free access to more than 100 distance learning course titles
• access to the IEEE Computer Society Digital Library for only $118
Read about all the benefits of joining the Society at
www.computer.org/join/benefits.htm
ADVERTISER / PRODUCT INDEX
DECEMBER 2007
Advertisers
Page Number
Alfaisal University
AVSS 2008
87
117
Chinese University of Hong Kong
CTS 2008
88
120
IEEE Computer Society Digital Library
IEEE Computer Society Membership
The George Washington University
ieee.tv
5
68-70
Lincoln Laboratory
97
Michigan State University
87
Microsoft
16
National University of Singapore
94
Philips
86
Seapine Software, Inc.
Cover 4
SCC 2008
Cover 3
Temple University
87, 97
University of Bridgeport
92
University of California, Riverside
94
University of Massachusetts Boston
91
University of Michigan, Ann Arbor
90
University of Southern California
95
University of Toronto
89
University of Washington Tacoma
93
West Virginia University
91
Classified Advertising
Advertising Sales Representatives
Mid Atlantic (product/recruitment)
Dawn Becker
Phone:
+1 732 772 0160
Fax:
+1 732 772 0164
Email: db.ieeemedia@ieee.org
Midwest/Southwest (recruitment)
Darcy Giovingo
Phone:
+1 847 498 4520
Fax:
+1 847 498 5911
Email: dg.ieeemedia@ieee.org
New England (product)
Jody Estabrook
Phone:
+1 978 244 0192
Fax:
+1 978 244 0103
Email: je.ieeemedia@ieee.org
Southwest (product)
Steve Loerch
Phone:
+1 847 498 4520
Fax:
+1 847 498 5911
Email: steve@didierandbroderick.com
92
Cover 2
86-101
New England (recruitment)
John Restchack
Phone: +1 212 419 7578
Fax: +1 212 419 7589
Email: j.restchack@ieee.org
Northwest (product)
Lori Kehoe
Phone: +1 650-458-3051
Fax:
+1 650 458 3052
Email: l.kehoe@ieee.org
Southeast (recruitment)
Thomas M. Flynn
Phone:
+1 770 645 2944
Fax:
+1 770 993 4423
Email: flynntom@mindspring.com
Midwest (product)
Dave Jones
Phone:
+1 708 442 5633
Fax:
+1 708 442 7620
Email: dj.ieeemedia@ieee.org
Will Hamilton
Phone:
+1 269 381 2156
Fax:
+1 269 381 2556
Email: wh.ieeemedia@ieee.org
Joe DiNardo
Phone:
+1 440 248 2456
Fax:
+1 440 248 2594
Email: jd.ieeemedia@ieee.org
Connecticut (product)
Stan Greenfield
Phone:
+1 203 938 2418
Fax:
+1 203 938 3211
Email: greenco@optonline.net
Southern CA (product)
Marshall Rubin
Phone:
+1 818 888 2407
Fax:
+1 818 888 4907
Email: mr.ieeemedia@ieee.org
Northwest/Southern CA
(recruitment)
Tim Matteson
Phone:
+1 310 836 4064
Fax:
+1 310 836 4067
Email: tm.ieeemedia@ieee.org
Southeast (product)
Bill Holland
Phone:
+1 770 435 6549
Fax:
+1 770 435 0243
Email: hollandwfh@yahoo.com
Japan
Tim Matteson
Phone:
+1 310 836 4064
Fax:
+1 310 836 4067
Email: tm.ieeemedia@ieee.org
Europe (product/recruitment)
Hillary Turnbull
Phone:
+44 (0) 1875 825700
Fax:
+44 (0) 1875 825701
Email: impress@impressmedia.com
Boldface denotes advertisements in this issue.
Advertising Personnel
Computer
IEEE Computer Society
10662 Los Vaqueros Circle
Los Alamitos, California 90720-1314
USA
Phone: +1 714 821 8380
Fax: +1 714 821 4010
http://www.computer.org
advertising@computer.org
Marion Delaney
IEEE Media, Advertising Director
Phone:
+1 415 863 4717
Email: md.ieeemedia@ieee.org
Marian Anderson
Advertising Coordinator
Phone:
+1 714 821 8380
Fax:
+1 714 821 4010
Email: manderson@computer.org
Sandy Brown
IEEE Computer Society,
Business Development Manager
Phone:
+1 714 821 8380
Fax:
+1 714 821 4010
Email: sb.ieeemedia@ieee.org
CAREER OPPORTUNITIES
MICHIGAN TECH, Computer Engineering – Senior Faculty Position.
The Department of Electrical and Computer Engineering at Michigan Technological University invites applications for a
tenured faculty position in computer
engineering. The department is seeking
an established researcher in real-time
computing. Areas of particular interest
include real-time hardware, RTOS, and
the design and implementation of
embedded and/or distributed real-time
systems. We are looking for a person
whose central focus in real-time systems
can provide technical leadership and help
integrate several existing research projects involving peripheral aspects of real-time computing. A successful senior candidate will have a demonstrated track
record of establishing and conducting a
high quality research program, sufficient
to qualify for the position of Associate or
Full Professor with tenure. A candidate
with a demonstrated potential for establishing a high quality research program
will be considered for a position of Assistant Professor. The Department of Electrical and Computer Engineering has one
of the leading undergraduate programs
in the US and is aggressively growing its
graduate and research programs. Michigan Tech is located in the beautiful Upper
Peninsula of Michigan, offering extensive
outdoor recreation. Michigan Tech is an equal opportunity employer. Send resume, statements of teaching and research interests, and contact data for three references to cpesearch@mtu.edu.
Philips has the following job opportunities available
(various levels/types):
PHILIPS ELECTRONICS NORTH AMERICA
ANDOVER, MA
• Software Engineer (SWE-PENAC-MA)
• Project Manager (PM-PENAC-MA) – Manage and
coordinate design, development and implementation
activities for the various technology projects
FRAMINGHAM, MA
• Software Engineer (SWE-PENAC-F-MA)
FOSTER CITY, CA
• Software Engineer (SWE-PENAC-CA)
• Test Engineer (TE-PENAC-CA)
EL PASO, TX
• Engineering Manager (EM-PENAC-TX)
PHILIPS MEDICAL SYSTEMS MR
LATHAM, NY
• Mechanical Engineer (MEN-PMR-NY)
• Manufacturing Engineer (ME-PMR-NY)
PHILIPS NUCLEAR MEDICINE
FITCHBURG, WI
• Software Engineer (SWE-PNM-WI)
MILPITAS, CA
• Software Engineer (SWE-PNM-CA)
PHILIPS ULTRASOUND
BOTHELL, WA
• Software Engineer (SWE-PU-WA)
Some positions may require travel. Submit resume
by mail to PO Box 4104, Santa Clara, CA 95056-4104, ATTN: HR Coordinator. Must reference
job title and job code (i.e. SWE-PENAC-CA)
in order to be considered. EOE.
IOWA STATE UNIVERSITY, Electrical
and Computer Engineering Department. The Electrical and Computer Engineering Department at Iowa State University has immediate openings for
faculty positions at all levels. Applications
will be accepted from highly qualified
individuals for regular faculty positions in
the department in all core areas of expertise in Electrical or Computer Engineering, especially in •Computer engineering
with emphasis on embedded systems;
•VLSI with emphasis on analog/mixed-signal/RF IC design and bio applications;
•Software engineering; •Information
assurance and security; and •Distributed
decision sciences, controls, and applications. Faculty positions are also available in interdisciplinary research areas as part of the Iowa State University College of Engineering’s aggressive mission to fill 50 new
college-wide positions with faculty who
possess the talent to address the challenges that define worldwide quality of
life and have global impact. The positions
are targeted in the following interdisciplinary research and education cluster
areas: •Biosciences and Engineering,
•Energy Sciences and Technology, •Engineering for Extreme Events, •Information
and Decision Sciences, •Engineering for
Sustainability. Duties for all positions will
include undergraduate and graduate
education, developing and sustaining
externally-funded research, graduate student supervision and mentoring, and professional/institutional service. All candidates must have an earned Ph.D. degree
in Electrical Engineering, Computer Engineering, Computer Science or related
field; and the potential to excel in the
classroom and to establish and maintain
a productive externally funded research
program. Associate and Full Professor
candidates must, in addition, have an
excellent record of externally funded
research and internationally recognized
scholarship. Exceptional senior candidates may be considered for endowed
research chair/professorship positions.
Rank and salary are commensurate with
qualifications. Screening will begin on
November 1, 2007, and will continue
until positions are filled. To guarantee
consideration, complete applications
must be received by 1/18/2008. For regular faculty positions, apply online at http://iastate.jobs.com. Vacancy #070478. For information on positions in the cluster areas and application process, visit http://www.engineering.iastate.edu/clusters. ISU is an EO/AA employer.
UNIVERSITY OF CALIFORNIA, SANTA
CRUZ, Computer Engineering, Associate/Full Professor, Autonomous
Systems. The Computer Engineering
Department invites applications for a
tenured (Associate or Full Professor) position in Autonomous Systems. Potential
areas of specialization include robotics,
control, mechatronics, and assistive technology. The department is launching an
initiative in autonomous systems and
mechatronic engineering, and seeks an
individual to join our core faculty in this
area and lead the development of new
research and degree programs. Apply:
http://www.soe.ucsc.edu/jobs, Position
#808. To ensure full consideration, applications must arrive by Jan. 1, 2008. EEO/
AA/IRCA Employer.
UNIVERSITY OF NORTH CAROLINA
WILMINGTON, Computer Science
(Assistant Professor, Tenure-track).
August 2008. Ph.D. (or ABD) in Computer Science or closely related area
required. Emphasis in computer and network security or closely related area. Details at http://www.uncw.edu/csc/position.htm. Screening begins January 9, 2008. EEO/AA Employer. Women and minorities encouraged to apply.
SUBMISSION DETAILS: Rates are $299.00 per column inch ($320 minimum). Eight lines per column inch and average five typeset words per line.
Send copy at least one month prior to publication date to: Marian Anderson,
Classified Advertising, Computer Magazine, 10662 Los Vaqueros Circle, PO
Box 3014, Los Alamitos, CA 90720-1314; (714) 821-8380; fax (714) 821-4010.
Email: manderson@computer.org.
In order to conform to the Age Discrimination in Employment Act and to discourage age discrimination, Computer may reject any advertisement containing any of these phrases or similar ones: “…recent college grads…,” “…14 years maximum experience…,” “…up to 5 years experience,” or “…10 years
maximum experience.” Computer reserves the right to append to any advertisement without specific notice to the advertiser. Experience ranges are suggested minimum requirements, not maximums. Computer assumes that since
advertisers have been notified of this policy in advance, they agree that any
experience requirements, whether stated as ranges or otherwise, will be construed by the reader as minimum requirements only. Computer encourages
employers to offer salaries that are competitive, but occasionally a salary may
be offered that is significantly below currently acceptable levels. In such cases
the reader may wish to inquire of the employer whether extenuating circumstances apply.
COLLEGE OF ENGINEERING
ALFAISAL UNIVERSITY
Riyadh, Saudi Arabia
Alfaisal University is a private, not-for-profit, research university, comprising the Colleges of Engineering, Science and General Studies, Medicine, and
Business, and will commence its programs in Fall
2008. The language of instruction is English and
modern learning outcomes, paradigms and technologies are used. The university was founded by King
Faisal Foundation along with organizations such as
Boeing, British Aerospace, Thales, and King Faisal
Specialist Hospital & Research Center, who serve on
its Board of Trustees.
The College of Engineering will offer undergraduate and graduate programs in the following disciplines and subdisciplines: ELECTRICAL (power,
communications, signal processing, electronics,
photonics), COMPUTER (intelligent systems, language and speech, computer systems, computation), MECHANICAL (applied mechanics, product
creation), AEROSPACE (thermo/fluid systems, aerospace systems, transportation, system dynamics
and control), MATERIALS (materials processing,
materials properties and performance, polymers,
nanoscience and technology), CHEMICAL (catalysis, reactor design, design-systems, polymers). All
programs have been developed by renowned scholars from leading universities in the US and the UK,
and are designed to be qualified for accreditation according to US and UK standards and requirements.
Alfaisal Engineering seeks candidates for the following positions, commencing in August 2008:
FOUNDING SENIOR FACULTY (with instructional,
research, and administrative responsibilities),
RESEARCH SCIENTISTS (academics with research
focus), LECTURERS (academics with instructional focus), POST-DOCS (Doctorate degree holders with research focus), INSTRUCTORS (Masters degree holders
with instructional focus), and ENGINEERS (Bachelors
degree holders). Attractive salary and start-up support is provided. Queries and Applications should be
sent to engnr_recruiting@alfaisal.edu. The subject
line should specify the discipline, subdiscipline, position, and the advertisement reference. The deadline
for applications is 31 December 2007. Interviews for
leading positions will be conducted in January and
February 2008 in Cambridge, MA, USA, and Cambridge, England, UK.
ARIZONA STATE UNIVERSITY
School of Computing and Informatics
Department of Computer Science and Engineering
The Department of Computer Science and Engineering has a Lecturer position open.
Applicants are required to have their MS in computer science, computer engineering, or
a closely related field. Applicants who have completed two years post-masters or are currently PhD candidates in their programs will also be considered. In addition, they must show evidence of effective written communication and management skills and a willingness to learn and manage course servers.
A Ph.D. in computer science or a related field and teaching experience at a university,
college, or junior college level is desired.
The successful candidates will be expected to teach and co-ordinate several on-line sections of a computer literacy course (CSE 180) and teach basic programming courses in
C# (CSE 182) and other languages. It is expected that the teaching will be through various delivery methods including on-line and other distance learning formats. Depending
on the course load, the candidate is expected to provide service on department committees and develop new curriculum for both existing degree programs as well as for
new programs.
Applications will be reviewed as received until the search is closed. Early applications are strongly encouraged. Application packages must include a cover letter, detailed
curriculum vitae, teaching statement, and the names, addresses, and phone numbers of
three professional references. Application packages must be uploaded via
http://sci.asu.edu/hiring. Please direct any questions to the Chairperson of the recruiting committee at cse.recruiting@asu.edu.
The closing date for receipt of applications is January 15, 2008. If not filled, applications will be reviewed on the 15th and 30th of the month thereafter until the search is
closed. Anticipated start date is August 16, 2008.
Arizona State University is an Equal Opportunity/Affirmative Action Employer. A
background check is required for employment.
UNIVERSITY OF CALIFORNIA, LOS
ANGELES, Department of Computer Science. The Department of
Computer Science in the Henry Samueli
School of Engineering and Applied Science at the University of California, Los
Angeles, invites applications for tenure-track positions in all areas of Computer
Science and Computer Engineering.
Applications are also encouraged from
distinguished candidates at senior levels.
Quality is our key criterion for applicant
selection. Applicants should have a strong
commitment to both research and teaching and an outstanding record of research
for their level. We seek applicants in any
mainstream area of Computer Science
and Computer Engineering such as software systems, embedded systems and
machine learning as well as those with a
strength in emerging technologies
related to computer science such as biocomputing, nano architectures, and
nanosystems. To apply, please visit
http://www.cs.ucla.edu/recruit. Faculty
applications received by January 31 will
be given full consideration. The University of California is an Equal Opportunity/Affirmative Action Employer.
WASHINGTON UNIVERSITY IN SAINT
LOUIS, Department of Computer
Science and Engineering, Faculty
Positions. The School of Engineering
and Applied Science at Washington University has embarked on a major initiative
to expand its programs and facilities. As
part of this initiative, the Department of
Computer Science and Engineering is
seeking outstanding faculty in the broad
area of digital systems and architecture,
including embedded computing, advanced multi-core architectures, and hybrid computing systems. We have a
special interest in candidates seeking to
develop multi-disciplinary collaborations
with colleagues in related disciplines. On
the applications side, this may include
collaborations in systems biology, neural
engineering and genetics. On the technology and basic science side, it may
include collaborations in electrical engineering, materials and physics. Successful candidates must show exceptional
promise for research leadership and have
a strong commitment to high quality
teaching at all levels. Our faculty is
engaged in a broad range of research
activities including hybrid computing
architectures, networking, computational
biology, robotics, graphics, computer
vision, and advanced combinatorial optimization. The department provides a supportive environment for research and the
preparation of doctoral students for
careers in research. Our doctoral graduates go on to positions of leadership in
both academia and industry. The department values both fundamental research
and systems research with the potential
for high impact, and has a strong tradition of successful technology transfer.
Limits on undergraduate enrollments and
the university’s growing popularity allow
us to offer small classes and close personal
attention to a diverse student body of
exceptional quality. A faculty known for
its collegiality provides a supportive environment for new arrivals. A progressive
administration reaches out to academic
couples seeking to co-locate, and promotes policies that reward research,
teaching, and innovative new initiatives.
Washington University is one of the
nation’s leading research universities and
attracts top-ranked students from across
the country and around the world. It is a
medium-sized institution, with roughly
6,000 full-time undergraduates and
6,000 graduate students, allowing it to
provide both a strong sense of community and a broad range of academic
opportunities. Its six professional schools
provide advanced education in engineering, medicine, social work, business, law,
architecture and art. It has exceptional
research strengths in the life sciences and
medicine, creating unmatched opportunities for interdisciplinary collaborations
for faculty in computer science and engineering. It has one of the most attractive
university campuses anywhere, and is
located in a lovely residential neighborhood, adjacent to one of the nation’s
largest urban parks, in the heart of a
vibrant metropolitan area. St. Louis is a
wonderful place to live, providing access
to a wealth of cultural and entertainment
opportunities, while being relatively free
of the everyday hassles and pressures of
larger cities. Applicants should hold a
doctorate in Computer Engineering,
Computer Science or Electrical Engineering. Qualified applicants should submit a
complete application (cover letter, curriculum vita, research statement, teaching statement, and names of at least three
references) electronically to recruiting@cse.wustl.edu. Other communications may be directed to Dr. Jonathan Turner, jon.turner@wustl.edu. Applications will be considered as they are
received. Applications received after January 15, 2008 will receive limited consideration. Washington University is an equal
opportunity/affirmative action employer.
UNIVERSITY OF PORTLAND, Faculty
Position in Computer Science. The
Department of Electrical Engineering and
Computer Science at the University of
A
BEMaGS
F
Portland seeks a computer science faculty
member for a tenure-track position at the
assistant or associate professor level, to
begin August 2008. Ph.D. is required;
dedication to excellence in teaching is
essential. Must be a U.S. citizen or permanent resident. Areas of interest include
computer security, computer graphics,
computer gaming, robotics, and computer systems. Other areas of expertise
will be considered. Duties include undergraduate teaching and advising, laboratory development, engagement in scholarly activity, and service to the University.
A typical teaching responsibility is three
courses per semester. The University of
Portland is an independently governed
Catholic university that welcomes people
of diverse backgrounds. The University
serves approximately 3500 students. The
CS and EE programs are both ABET-accredited. Send hard copy of application
materials (letter of interest, vita, teaching
statement, research statement, references) to: CS Search Committee, School
of Engineering, University of Portland,
5000 N. Willamette Blvd., Portland, OR
97203 or apply via email at cssearch@up.edu. Applications will be processed as
they arrive. For full consideration, please
apply by December 1. The University of
Portland is an affirmative action/equal
opportunity employer. For information
UNIVERSITY OF TORONTO
The Edward S. Rogers Sr. Department of Electrical &
Computer Engineering
10 King’s College Road
Toronto, Ontario, Canada M5S 3G4
The Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto invites applications for
tenure-stream Assistant or Associate Professor positions, beginning July 1, 2008, in two areas:
1. Computer or FPGA Architecture. Research areas of interest include, but are not limited to: multi- or single-core processor
architecture, FPGA architecture and CAD, embedded processor design, programming and compiler support for multi-core and novel
processors, memory systems, programmable architectures for integrated digital circuits, and power-aware and power efficient architectures. Applications and references for this position should be addressed to Professor Tarek Abdelrahman, Chair of Search Committee
and sent to: CompFPGASearch@ece.utoronto.ca.
2. Information Security. Research areas of interest include identity, privacy and security information technologies for: computer
networks, distributed systems, sensor and networked systems, embedded systems, computer architecture and system survivability. Applications and references for this position should be addressed to Professor Dimitrios Hatzinakos, Chair of Search Committee and sent
to: InfoSecSearch@ece.utoronto.ca.
Candidates must have (or be about to receive) a Ph.D. in the relevant area.
The department ranks among the top 10 ECE departments in North America. It attracts outstanding students, has excellent facilities, and is ideally located in the middle of a vibrant, artistic, and diverse cosmopolitan city. The department offers highly competitive salaries and start-up funding, and faculty have access to significant Canadian research operational and infrastructure grants.
Additional information can be found at: www.ece.utoronto.ca.
The successful candidates are expected to pursue excellence in research and teaching at both the graduate and undergraduate
levels.
Applicants must submit their application by email to one of the two email addresses given above. Please submit only
Adobe Acrobat PDF documents. Applicants will receive an email acknowledgement.
All applications should include: a curriculum vitae, a summary of previous research and proposed new directions, and a statement of teaching philosophy and interests. In addition, applicants must arrange to have three confidential letters of recommendation sent directly (by the referee) by email to the correct address given above.
Applications and referee-sent references should be received by January 15, 2008.
The University of Toronto is strongly committed to diversity within its community and especially welcomes applications from visible minority group members, women, Aboriginal persons, persons with disabilities, members of sexual minority groups, and others
who may contribute to the further diversification of ideas.
All qualified candidates are encouraged to apply; however, priority will be given to Canadian Citizens and Permanent Residents.
about the University and program refer
to http://orgs.up.edu/cssearch. Direct
additional inquiries about the position to
cssearch@up.edu or 503-943-7314.
FLORIDA INTERNATIONAL UNIVERSITY, School of Computing and
Information Sciences. Applications
are invited for multiple tenure-track or
tenured faculty positions at the levels of
Assistant, Associate, or Full Professor. A
Ph.D. in Computer Science or related
areas is required. Outstanding candi-
dates are sought in areas of (1) Software
and Computer Security; (2) Software and
Computer Systems; (3) Bio/Medical/
Health Informatics; (4) Data Mining; and
(5) Human-Computer Interface (HCI).
Exceptional candidates in other areas will
be considered as well. Candidates with
the ability to forge interdisciplinary collaborations will be favored. Candidates
for senior positions must have a proven
record of excellence in research funding,
publications, teaching, and professional
service, as well as demonstrated ability
for developing and leading collaborative
CHAIR
Computer Science and Engineering
University of Michigan, Ann Arbor
The newly restructured Division of Computer Science and Engineering (CSE) in the
College of Engineering at the University of Michigan is seeking an inaugural Chair. In
recognition of its increasing prominence in the College’s broader education and
research missions, the Division’s administrative autonomy and profile have recently
been raised to match other College departments, providing the new Chair with an
unprecedented opportunity to lead a strong CSE program into a new era.
Computer Science at Michigan will soon celebrate its 50th anniversary, and the
Computer Engineering program is among the oldest in the nation. CSE is currently
the third largest academic unit in the College, with over 40 faculty and about 200
graduate and 325 undergraduate students. The state-of-the-art 104,000 square foot
CSE Building opened in 2006, housing under one roof the CSE faculty and projects that had previously been distributed across several buildings. For more CSE details, see
www.cse.umich.edu.
CSE has a strong commitment to excellence in computing education, including
interdisciplinary curricula with other units in the University. With external funding of
approximately $12 million annually, CSE is a world-class leader in core research
areas such as computer architecture, software systems and artificial intelligence. CSE
has close and longstanding research and educational collaborations with the Division
of Electrical and Computer Engineering (ECE), and these are preserved in the
restructured organization by having the Chairs of CSE and ECE serve as co-Chairs of
the Electrical Engineering and Computer Science (EECS) Department. More broadly,
CSE teams with colleagues from across the University to spearhead interdisciplinary
investigations in areas like bioinformatics, computational economics, robotics and
educational software. CSE is poised to use its new autonomy to shape research and
educational innovations and collaborations that respond to and anticipate the
emerging roles of computation in our society.
In this time of transition and opportunity for CSE, a successful Chair candidate must
have outstanding leadership, collaborative and administrative abilities, and be an
outstanding scholar. The candidate should possess a compelling vision for the future
of CSE research and education, should be able to work with a diverse group of
faculty, staff, students and administrators to articulate and achieve common goals in
pursuit of this vision and should be able to marshal support from alumni and industry
in these endeavors. He or she should have an exemplary record of achievement in
research, teaching and service commensurate with appointment as a tenured full
professor.
Inquiries, nominations and applications (with CV) should be sent to CSEChair@umich.edu, or to Prof. Edmund Durfee, CSE Chair Search
Advisory Committee, 3633 CSE Building, University of Michigan, Ann Arbor, MI,
48109-2121.
The University of Michigan is an Equal Opportunity Affirmative Action Employer. Individuals from
under-represented groups are encouraged to apply.
research projects. Outstanding candidates for the senior positions will be considered for the endowed Ryder Professorship. Successful candidates
are expected to develop a high-quality
funded research program and must be
committed to excellence in teaching at
both graduate and undergraduate levels. Florida International University (FIU),
the state university of Florida in Miami,
is ranked by the Carnegie Foundation as
a comprehensive doctoral research university with high research activity. FIU
offers over 200 baccalaureate, masters
and doctoral degree programs in 21 colleges and schools. With over 38,000 students, it is one of the 25 largest universities in the United States, and boasts a
new and accredited Law School and the
newly created College of Medicine. US
News & World Report has ranked FIU
among the top 100 public universities,
and Kiplinger’s Personal Finance magazine ranked FIU among the best values
in public higher education in the country
in their 2006 survey. The School of Computing and Information Sciences (SCIS) is
a rapidly growing program of excellence
at the University. The School has 31 faculty members (including seven new faculty members hired in the last three
years), 1,200 students, and offers B.S.,
M.S., and Ph.D. degrees in Computer
Science and B.S. and B.A. degrees in
Information Technology. Its undergraduate program is the largest among the
ten state universities in Florida and SCIS
is the largest producer of Hispanic CS
and IT graduates in the US. The Ph.D.
enrollment in the School has doubled in
the last four years with around 80
enrolled Ph.D. students. In 2006-07, the
School received $2.7M in competitive
research grants and leads similar programs in the State of Florida in terms of
per faculty annual research funding. In
addition, the school receives an annual
average of $2.2M of in-kind grants and
donations from industry. Its research has
been sponsored by NSF, NASA, NIH,
ARO, ONR, NOAA, and other federal
agencies. Several new faculty members
have received the prestigious NSF
CAREER AWARD, DoE CAREER AWARD,
and IBM Faculty Research Awards. SCIS
has broad and dynamic partnerships
with industry. Its research groups include
the NSF CREST Center for Emerging
Technologies for Advanced Information
Processing and High-Confidence Systems, the High Performance Database
Research Center, the Center for
Advanced Distributed Systems Engineering, the IBM Center for Autonomic and
Grid Computing, and other research laboratories. The SCIS has excellent computing infrastructure and technology
support. In addition, the SCIS faculty and
students have access to the grid computing infrastructure with 1000 nodes
under the Latin American Grid (LA Grid)
Consortium (http://lagrid.fiu.edu), a first-ever comprehensive international partnership, co-founded by IBM and FIU,
linking institutions in the US, Latin America, and Spain for collaborative research,
innovation and workforce development.
Applications, including a letter of interest, contact information, curriculum
vitae, and the names of at least three references, should be sent to Chair of
Recruitment Committee, School of Computing and Information Sciences, Florida
International University, University Park,
Miami, FL 33199. E-mail submission to recruit@cis.fiu.edu is preferred. The
application review process will begin on
January 15, 2008, and will continue until
the positions are filled. Further information can be obtained from the School
website http://www.cis.fiu.edu, or by email to recruit@cis.fiu.edu. Women and
members of underrepresented groups
are strongly encouraged to apply. Florida
International University is a member of
the State University System of Florida and
is an equal opportunity/affirmative
action/equal access employer.
PURDUE UNIVERSITY. The Department
of Computer Science at Purdue University invites applications for tenure-track
positions beginning August 2008. While
outstanding candidates in all areas of
Computer Science will be considered,
preference will be given to applicants
with a demonstrable research record in
operating systems, software engineering,
and theory. Candidates with a multi-disciplinary focus are also encouraged to
apply. Of special interest are applicants
with research focus in computational science and engineering, bioinformatics,
and health-care engineering. The level of
the positions will depend on the candidate’s experience. The Department of
Computer Science offers a stimulating
and nurturing academic environment.
Forty-four faculty members direct
research programs in analysis of algorithms, bioinformatics, databases, distributed and parallel computing, graphics
and visualization, information security,
machine learning, networking, programming languages and compilers, scientific
computing, and software engineering.
The department has implemented a
strategic plan for future growth supported by the higher administration and
recently moved into a new building. Further information about the department
and more detailed descriptions of the
open positions are available at http://www.cs.purdue.edu. Information about the multi-disciplinary hiring effort can be found at http://www.science.purdue.edu/COALESCE/. All applicants should
hold a PhD in Computer Science, or a
closely related discipline, be committed
to excellence in teaching, and have
demonstrated potential for excellence in
research. Salary and benefits are highly
competitive. Applicants are strongly
encouraged to apply electronically by
sending their curriculum vitae, research
College of Engineering and Mineral Resources
LANE DEPARTMENT OF COMPUTER SCIENCE AND ELECTRICAL ENGINEERING
Tenure Track Faculty Position in Computer Networks/Security
The Lane Department of Computer Science and Electrical Engineering at West Virginia University
(WVU) anticipates filling one or more tenure-track faculty positions at the Assistant Professor level, subject to availability of funds. The successful applicant will be expected to teach graduate and undergraduate
coursework and develop an externally sponsored research program in the areas of computer systems, networks and computer security. Applicants must have an earned Ph.D. in Computer Science, Computer Engineering, or a closely related discipline at the time of appointment, and must have demonstrated ability or
potential to contribute to the research and teaching missions of the department, including the ability to teach
a broad range of undergraduate topics in Computer Science.
WVU is a comprehensive land grant institution of over 27,000 students with medical, law, and business
schools. The department has 31 tenure-track faculty members, 400 undergraduates, and 280 graduate students. It offers BS degrees in Computer Science, Computer Engineering, Electrical Engineering and Biometric Systems; MS degrees in Computer Science, Software Engineering, and Electrical Engineering; and
Ph.D. degrees in Computer Science, Computer Engineering and Electrical Engineering. The department
conducts approximately $5 million annually in externally sponsored research, with major research activities in the areas of biometric systems, bioengineering, information assurance, nanotechnology, power systems, software engineering, virtual environments, and wireless networks. Strong opportunities exist for
building collaborative partnerships with nearby federal research facilities, including the Department of
Defense, Department of Energy, NASA, and the FBI.
Review of applications will begin January 1, 2008 and continue until positions are filled. Women and
minorities are strongly encouraged to apply. Applicants should send a letter describing their qualifications,
a curriculum vita, statements of teaching philosophy and research objectives, and names and e-mail addresses
of at least 3 references to:
Dr. Bojan Cukic, Faculty Search Committee Chair
Lane Department of Computer Science and Electrical Engineering
P.O. Box 6109
West Virginia University
Morgantown, WV 26506-6109
Telephone: 304-293-LANE (x 2526)
Email: cs-search@mail.wvu.edu
web: http://www.csee.wvu.edu
West Virginia University is an equal opportunity/affirmative action employer.
Careers with Mass Appeal
Assistant Professor
Department of Computer Science
www.cs.umb.edu
The Computer Science Department at the University of Massachusetts Boston invites
applications for Fall 2008 for one faculty position at the Assistant Professor level.
We offer a BS, an MS with an emphasis on software engineering, and a Ph.D. in
computer science. We seek to strengthen our research program significantly. Current
faculty interests include biodiversity informatics, computer and human vision, data
mining, databases, networks, software engineering, system modeling, and theoretical
computer science.
Strong candidates will be considered from any area of Computer Science, but
preference will be given to a candidate who does research in artificial intelligence,
particularly evolutionary computing, knowledge representation, machine learning or
neural networks. Evidence of significant research potential and a Ph.D. in computer
science or a related area are required. We offer a competitive salary and a generous
start-up package. Send cover letter, curriculum vitae, statements about research and
teaching, and the names and email addresses of three references to Search 680E at
search@cs.umb.edu.
Our campus overlooks Boston harbor; our faculty and students enjoy professional
life in a center of academia and the software industry. For more information,
visit us at http://www.cs.umb.edu.
Review of applications has begun and will continue until the position is filled.
UMass Boston is an affirmative action, equal opportunity Title IX employer.
and teaching statements, and names and
contact information of at least three references in PDF to fac-search@cs.purdue.edu. Hard copy applications can be
sent to: Faculty Search Chair, Department
of Computer Science, 305 N. University
Street, Purdue University, West Lafayette,
IN 47907. Applicants matching one
search may be considered in other relevant searches when appropriate. Review
of applications will begin on October 1,
2007, and will continue until the positions are filled. Purdue University is an
Equal Opportunity/Equal Access/Affirmative Action employer fully committed to
achieving a diverse workforce.
WAYNE STATE UNIVERSITY, Department of Computer Science, Tenure-Track Faculty Position. The Department of Computer Science of Wayne
State University invites applications for a
tenure-track faculty position at the Assistant/Associate Professor level. Continuing our recent growth, we are seeking
applicants in the areas of Software Engineering and Bioinformatics. Outstanding
applicants in other areas will also be considered. Candidates should have a Ph.D.
in computer science or a related area. The
successful candidate will have a strong
commitment to both research and teaching, a strong publication record and
potential for obtaining external research
funding. Senior applicants should have
strong publication and funding records.
Currently, the department has 19 faculty,
78 Ph.D. and 120 M.S. students. The
Department’s total annual R&D expenditures average between $2 million and $3 million in
research areas, including bioinformatics,
software engineering, systems, databases
and image processing. Our junior faculty
benefit from an extraordinarily supportive environment as demonstrated by their
success in securing two recent NSF
CAREER awards, as well as other very
competitive research funding. Faculty
actively collaborate with many other centers and departments, including the
School of Medicine, which is the largest
single-campus medical school in the
country, and the Karmanos Cancer Institute, a nationally recognized comprehensive cancer center. More information
about the department can be found at:
http://www.cs.wayne.edu. Wayne State
University is a premier institution of
higher education offering more than 350
undergraduate and graduate academic
programs to more than 33,000 students
in 11 schools and colleges. Wayne State
ranks in the top 50 nationally among
public research universities. As Michigan’s
only urban university, Wayne State fulfills
a unique niche in providing access to a
world-class education. The University
A
BEMaGS
F
offers excellent benefits and a competitive compensation package. Submit
applications online at http://jobs.wayne.edu, refer to Posting # 034690 and
include a letter of intent, a statement of
research and teaching interests, a CV, and
contact information for at least three references. All applications received prior to
January 11, 2008 will receive full consideration. However, applications will be
accepted until the position is filled. Wayne
State University is an equal opportunity/affirmative action employer.
THE HONG KONG POLYTECHNIC UNIVERSITY, Department of Computing. The Department invites applications
for Assistant Professors in most areas of
Computing, including but not limited to
Software Engineering / Biometrics / Digital Entertainment / MIS and Pervasive
Computing. Applicants should have a
PhD degree in Computing or closely
related fields, a strong commitment to
excellence in teaching and research as
well as a good research publication
record. Initial appointment will be made
on a fixed-term gratuity-bearing contract.
Re-engagement thereafter is subject to
mutual agreement. Remuneration package will be highly competitive. Applicants
should state their current and expected
salary in the application. Please submit
UNIVERSITY OF BRIDGEPORT
Engineering Faculty Positions Available
Computer Science and Engineering
DEPARTMENT OF ELECTRICAL & COMPUTER
ENGINEERING
Computer Engineering Faculty Position
in High-Performance and
Reconfigurable Computing
The Department of Electrical and Computer Engineering at
The George Washington University invites applications for tenure-track, tenured and contractual non-tenure-accruing faculty positions at all ranks, in the area of Computer Engineering. Two positions will be for tenure-track/tenured faculty, and the third
position will be a one-year renewable non-tenure-accruing contractual position at the Assistant/Associate Professor rank, and
successful candidates may start as early as Spring 2008. Faculty
with research in High-Performance Computing and Reconfigurable Computing are particularly encouraged to apply; however,
all areas of Computer Engineering will be considered. Additional
information and details on position qualifications and the application procedure are available on http://www.ece.gwu.edu.
Review of applications will continue until the positions are filled.
The George Washington University is an Equal
Opportunity/Affirmative Action Employer
The fast-growing department of Computer Science and Engineering at the
University of Bridgeport invites applications for full-time tenure-track positions at the Assistant/Associate Professor levels. Candidates for tenure-track positions must have a Ph.D. in computer science, computer engineering,
or a related field. A strong interest in teaching undergraduate and graduate
courses and an excellent research record are required. The ability to teach
lab-based courses is also required. Applicants are sought in the areas of
Medical Electronics, Biomedical Engineering, Biometrics, Bio-Informatics,
Wireless Design, Distributed Computing, Computer Architecture, Data Base
Design, Algorithms, e-commerce, Data Mining and Artificial Intelligence.
There are opportunities to participate in the external engineering programs,
which include weekend and evening graduate and continuing education
classes, on-site instruction in local industry and distance learning initiatives.
Electrical Engineering
The Department of Electrical Engineering at the University of Bridgeport is
searching for two tenure-track Assistant Professors to start January 7, 2008.
Qualifications include an earned doctorate in Electrical Engineering or
equivalent. All disciplines are encouraged to apply, but preference will be
given (i) to those in the area of sensors: biological and chemical, including
design and theoretical modeling of semiconductor sensors and thin films and
(ii) to those involved with SOC (systems on a chip) design, digital/analog
VLSI design and testing, and VLSI design automation. Other areas of
engineering will be considered, including RFIC design, A/D wireless
communications, Biomedical Engineering, and Medical Electronics. The
successful candidate is expected to excel in teaching undergraduate and
graduate students in the classroom and in the lab.
A strong publication/research record is required as well.
Applicants should send a cover letter, resume, and the address and e-mail address of four references via e-mail to:
University of Bridgeport, School of Engineering
Faculty Search Committee: enfacrec@bridgeport.edu
The University of Bridgeport is an Affirmative Action/Equal Opportunity Employer; we encourage women and minorities to apply.
your application via email to hrstaff@polyu.edu.hk. Application forms can be downloaded from http://www.polyu.edu.hk/hro/job.htm. Recruitment will continue until the positions are filled. Details of the University’s Personal Information Collection Statement for recruitment can be found at http://www.polyu.edu.hk/hro/jobpics.htm.
McMURRY UNIVERSITY. Assistant Professor of Computer Science to begin
August 2008. Candidates must have a
Computer Science PhD. Position requires
teaching 12 hours per semester, advising
CS majors, and system admin of CS
server. McMurry University is a private liberal arts institution affiliated with the
United Methodist Church. Candidates
must be willing to support the university
mission and values. Review of applications will begin immediately and continue
until position is filled. Complete details
available at www.mcm.edu/employment.
UNIVERSITY OF LOUISIANA AT
LAFAYETTE, The Center for Advanced Computer Studies, Faculty
Positions, Graduate Fellowships.
Candidates with a strong research record
and an earned doctorate in computer science or computer engineering are invited
to apply for multiple tenure-track assistant/associate professor faculty positions
starting fall of 2008. Target areas include
Grid Computing, Large Scale Data &
Knowledge Engineering, Distributed Software Systems, and Entertainment Computing. Consideration will also be given
to outstanding candidates in other areas.
Candidates must have demonstrated
potential to achieve national visibility
through accomplishments in research, contracts, and graduate students. Faculty
teach mostly at the graduate and senior
undergraduate levels and offer a continuing research seminar. State and university funds are available to support
research initiation efforts. Salaries are
competitive along with excellent support
directed towards the attainment of our
faculty's professional goals. The Center's
colloquium series brings many world-renowned professionals to our campus each
year. The Center is primarily a graduate
research unit of 17 tenure-track and 7
research faculty, with programs leading
to MS/PhD degrees in computer science
and computer engineering. More than
200 graduate students are enrolled in
these programs. The Center has been
ranked 57th in a recent NSF survey based
on research and development expenditures. The Center has state-of-the-art
research and instructional computing
facilities, consisting of several networks of
SUN workstations and other high performance computing platforms. In addition,
the Center has dedicated research laboratories in Intelligent Systems, Computer
Architecture and Networking, Cryptog-
raphy, FPGA and Reconfigurable Computing, Internet Computing, Virtual Reality, Entertainment Computing, Software
Research, VLSI and SoC, Wireless Technologies, and Distributed Embedded
Computing Systems. Related university
programs include the CSAB (ABET)
accredited undergraduate program in
Computer Science, and the ABET accredited undergraduate program in Electrical
and Computer Engineering. Additional
information about the Center may be
obtained at http://www.cacs.louisiana.edu. A number of PhD fellowships, valued at up to $24,000 per year including
tuition and most fees, are available. They
provide support for up to four years of
study towards the PhD in computer science or computer engineering. Eligible
candidates must be U.S. citizens or must
have earned an MS degree from a U.S.
university. Recipients also receive preference for low-cost campus housing. Applications may be obtained and submitted
at http://gradschool.louisiana.edu. The
University of Louisiana at Lafayette is a
Carnegie Research University with high
research activity, with an enrollment of
THE INSTITUTE OF TECHNOLOGY
AT THE
UNIVERSITY OF WASHINGTON TACOMA
Tenure-Track Faculty Positions
The Institute of Technology at the University of Washington Tacoma is accepting applications
for full-time, tenure-track faculty positions in the areas of Computing and Software Systems
(CSS), Computer Engineering and Systems (CES), and Information Technology and Systems
(ITS) beginning September 16, 2008. Commitment to high-quality teaching and to an externally funded research program is essential, and excellent communication skills are required.
Please visit our website for full descriptions of each position: http://www.tacoma.washington.edu/hr/jobs/
COMPUTING AND SOFTWARE SYSTEMS
CSS candidates with strengths in hardware-oriented CS (architecture and OS) or theoretical
foundations will be given priority consideration, but we will consider all areas of expertise,
with appropriate evidence of ability to teach core CS topics. The search is focused at the Assistant level but advanced candidates may be considered at the Associate and Full Professor levels. A Ph.D. in Computer Science or a closely related discipline, and an appropriately strong
record of published research in computing is required. For additional information, please contact Dr. George Mobus at gmobus@u.washington.edu.
COMPUTER ENGINEERING AND SYSTEMS
CES candidates should have a strong background in a focused area such as: embedded and
real-time systems, digital system design, or network design and security. The search is focused
at the Assistant or Associate Professor levels but advanced candidates may be considered at the
Professor level. A Ph.D. in Computer Engineering and Systems or a closely related field is required. For additional information, contact Dr. Larry Wear at lwear@u.washington.edu or by phone at (253) 692-4538.
INFORMATION TECHNOLOGY AND SYSTEMS
ITS candidates with applied research interests in asynchronous architectures/grid computing,
graphics and game development, Web services and distributed computing, social networks and
computing, collaborative learning technologies, databases, interaction and design, or information assurance & computer security are particularly encouraged to apply. Candidates will
be considered at the ranks of Assistant or Associate Professor. A Ph.D. in Computer/Information Science, Information Technology, Information Systems or a closely related field is required
by the time of appointment. Priority will be given to candidates with a background in industry or experience conducting/managing research programs outside academia. For additional
information, please contact Dr. Orlando Baiocchi at baiocchi@u.washington.edu or by phone at (253) 692-4727.
The University of Washington Tacoma, one of three UW campuses, is an urban undergraduate and master’s level campus that is changing the face of its region economically, culturally,
and architecturally. The Institute of Technology was created in 2001 by a public/private partnership to address the critical industry demand for baccalaureate and masters level computing and engineering professionals. Please see http://www.insttech.washington.edu/ for additional information.
Applications should include (1) a letter describing academic qualifications, (2) a statement of
research interests, (3) a description of teaching philosophy, and (4) curriculum vitae. Applications must include contact information for at least three (3) references. Applications should
be submitted electronically to: Ifaculty@u.washington.edu.
Screening of credentials will begin November 19th. All positions will remain open until filled.
Salary is competitive and will be commensurate with experience and qualifications.
The University of Washington Tacoma is an affirmative action, equal opportunity employer.
The University is committed to building a culturally diverse faculty and staff, and strongly
encourages applications from women, minorities, individuals with disabilities and covered veterans. University of Washington Tacoma faculty engage in teaching, research and service and
are expected to participate in lower division teaching.
over 16,000 students. Additional information may be obtained at http://www.louisiana.edu/. The University is located
in Lafayette, the hub of Acadiana, which
is characterized by its Cajun music and
food and joie de vivre atmosphere. The
city, with its population of over 120,000,
provides many recreational and cultural
opportunities. Lafayette is located
approximately 120 miles west of New
Orleans. The search committee will
review applications on a continuing basis until the
positions are filled. Candidates should
send a letter of intent, curriculum vitae,
statement of research and teaching interests, and names, addresses and telephone
numbers of at least four references. Additional materials, of the candidate's choice,
may also be sent to: Dr. Magdy A. Bayoumi, Director, The Center for Advanced
Computer Studies, University of Louisiana
at Lafayette, Lafayette, LA 70504-4330.
Tel: 337.482.6147. Fax: 337.482.5791.
The University is an Affirmative Action/
Equal Opportunity Employer.
SR. ORACLE DEVELOPER - EGB Systems, Stamford, CT - Develops DB applications & DB warehousing, DB modeling using TCA, OLAP, ORACLE & Oracle RDBMS. Bachelor of Science with 9 yrs exp in Oracle, SQL, and PL/SQL. Apply to careersusa@egbsystems.com.
UNIVERSITY OF WASHINGTON,
Computer Science & Engineering
and Electrical Engineering, Tenure-Track and Research Faculty. The University of Washington’s Department of
Computer Science & Engineering and
Department of Electrical Engineering
have jointly formed a new UW Experimental Computer Engineering Lab
(ExCEL). In support of this effort, the College of Engineering has committed to hiring several new faculty over the forth-
The National University
of Singapore (NUS) invites
nominations and applications
for the position of Head,
Computer Science Department.
NUS has about 23,500 undergraduate and 9,000 graduate students from 88 countries. In 2006, Newsweek's ranking of universities
listed NUS as 31st globally and 3rd in Asia/Australasia. The CS Department has some 80 faculty
members, many of whom regularly publish in
prestigious conferences and journals, and serve
on their program committees and editorial boards.
We seek a new Head who can further raise the
Department's achievements. The candidate should
be an internationally-recognized researcher, with
experience in technical leadership and team management. The salary and benefits are internationally competitive.
For details, see:
http://www.comp.nus.edu.sg/cs/cshodrec.html
coming years. All positions will be dual
appointments in both departments (with
precise percentages as appropriate for the
candidate). This year, we have two open
positions, and encourage exceptional candidates in computer engineering to apply at the ranks of tenure-track Assistant Professor, Associate Professor, or Professor, or Research Assistant Professor, Research Associate Professor, or Research Professor. A
moderate teaching and service load
allows time for quality research and close
involvement with students. The CSE and
EE departments are co-located on campus, enabling cross-department collaborations and initiatives. The Seattle area is
particularly attractive given the presence
of significant industrial research laboratories, a vibrant technology-driven entrepreneurial community, and spectacular
natural beauty. Information about ExCEL
can be found at <http://www.excel.washington.edu>. We welcome applications in
all computer engineering areas including
but not exclusively: atomic-scale devices
& nanotechnology, implantable and biologically-interfaced devices, synthetic
molecular engineering, VLSI, embedded
systems, sensor systems, parallel computing, network systems, and technology
for the developing world. We expect candidates to have a strong commitment
both to research and teaching. ExCEL is
seeking individuals at all career levels,
with appointments commensurate with
the candidate’s qualifications and experience. Applicants for both tenure-track
and research positions must have earned
a PhD by the date of appointment. Please
apply online at <http://www.excel.washington.edu/jobs.html> with a letter of
application, a complete curriculum vitae,
statement of research and teaching interests, and the names of at least four references. Applications received by January
31st, 2008 will be given priority consideration. The University of Washington
was awarded an Alfred P. Sloan Award for
UNIVERSITY OF CALIFORNIA,
RIVERSIDE
The Department of Computer Science and
Engineering invites applications for faculty
positions at all levels and in all areas with special interest in interdisciplinary research
involving Computer Graphics, Software Systems, High-Performance Computing and
Computational Science and Engineering.
Additional positions are available for candidates whose research interests intersect with
the newly created Materials Science and Engineering program (http://www.engr.ucr.edu/mse/). Applicants must have a PhD in Computer Science or in a closely related field. Visit
http://www.cs.ucr.edu for information and
http://www.engr.ucr.edu/facultysearch/ to
apply. Review of applications will begin on
January 1, 2008, and will continue until the
positions are filled. For inquiries contact search@cs.ucr.edu. UC Riverside is an Equal-Opportunity/Affirmative-Action Employer.
Faculty Career Flexibility in 2006. In addition, the University of Washington is a
recipient of a National Science Foundation ADVANCE Institutional Transformation Award to increase the participation
of women in academic science and engineering careers. We are building a culturally diverse faculty and encourage
applications from women and minority
candidates.
STATE UNIVERSITY OF NEW YORK AT
BINGHAMTON, Department of
Computer Science, The Thomas J.
Watson School of Engineering and
Applied Science, http://www.cs.binghamton.edu. Applications are
invited for a tenure-track position at the
Assistant/Associate Professor level beginning in Fall 2008. Salary and startup packages are competitive. We are especially
interested in candidates with specialization in (a) Embedded Systems and Compilers or (b) Ubiquitous Computing/Information Access or (c) Information Security
or (d) Areas related to systems development. Applicants must have a Ph.D. in
Computer Science or a closely related discipline by the time of appointment.
Strong evidence of research capabilities
and commitment to teaching are essential. We offer a significantly reduced
teaching load for junior tenure track faculty for at least the first three years. Binghamton is one of the four Ph.D. granting
University Centers within the SUNY system and is nationally recognized for its
academic excellence. The Department
has well-established Ph.D. and M.S. programs, an accredited B.S. program and is
on a successful and aggressive recruitment plan. Local high-tech companies
such as IBM, Lockheed-Martin, BAE and
Universal Instruments provide opportunities for collaboration. Binghamton borders the scenic Finger Lakes region of
New York. Submit a resume and the
names of three references to the URL address: http://binghamton.interviewexchange.com. First consideration will be
given to applications that are received by
March 1, 2008. Applications will be considered until the positions are filled. Binghamton University is an equal opportunity/affirmative action employer.
UNIVERSITY OF CHICAGO. The
Department of Computer Science at the
University of Chicago invites applications
from exceptionally qualified candidates
in all areas of Computer Science for faculty positions at the ranks of Professor,
Associate Professor, Assistant Professor,
and Instructor. The University of Chicago
has the highest standards for scholarship
and faculty quality, and encourages collaboration across disciplines. The Chicago
metropolitan area provides a diverse and
exciting environment. The local economy
is vigorous, with international stature in
banking, trade, commerce, manufactur-
Computer
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
A
BEMaGS
F
Computer
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
ing, and transportation, while the cultural
scene includes diverse cultures, vibrant
theater, world-renowned symphony,
opera, jazz, and blues. The University is
located in Hyde Park, a pleasant Chicago
neighborhood on the Lake Michigan
shore. Please send applications or nominations to: Professor Stuart A. Kurtz,
Chairman, Department of Computer Science, The University of Chicago, 1100 E.
58th Street, Ryerson Hall, Chicago, IL 60637-1581 or to: apply-077714@mailman.cs.uchicago.edu (attachments can
be in pdf, postscript, or Word). Complete
applications consist of (a) a curriculum
vitae, including a list of publications, (b)
forward-looking research and teaching
statements. Complete applications for
Assistant Professor and Instructor positions also require (c) three letters of recommendation, sent to ________
recommend077714@mailman.cs.uchicago.edu
____________________ or to
the above postal address, including one
that addresses teaching ability. Applicants
must have completed, or will soon complete, a doctorate degree. We will begin
screening applications on December 15,
2007. Screening will continue until all
available positions are filled. The University of Chicago is an equal opportunity/affirmative action employer.
DREXEL UNIVERSITY, Department
of Computer Science, Faculty Positions. Drexel University's Department of
Computer Science (www.cs.drexel.edu)
invites applications for tenure-track faculty positions at all levels. The preferred
interest is ARTIFICIAL INTELLIGENCE and
MULTI-AGENT SYSTEMS, although
exceptional applicants in other areas will
be considered. The department has
expanding graduate research and education programs in software engineering,
graphics and vision, information assurance and security, human-computer
interaction, high-performance computing and symbolic computation. We specialize in interdisciplinary and applied
research and are supported by several
major federal research grants from NSF,
DARPA, ONR, DoD, DoE and NIST, as well
as by private sources such as Intel, Nissan,
NTT, and Lockheed Martin. The department offers BS, BA, MS, and Ph.D.
degrees in computer science as well as BS
and MS degrees in software engineering.
Drexel is a designated National Security
Agency (NSA) Center of Academic Excellence in Information Assurance Education.
The department has over 425 undergraduate and over 100 graduate students, with annual research expenditures
in excess of $4M. Several of the Computer Science faculty are recipients of NSF
CAREER or Young Investigator Awards.
Review of applications begins immediately. To assure consideration, materials
from applicants should be received by
February 1, 2008. Successful applicants
must demonstrate potential for research
and teaching excellence in the environ-
ment of a major research university. To be
considered, please send an email to: cssearch-08@cs.drexel.edu. Please include
a cover letter, CV, brief statements
describing your research program and
teaching philosophy, and contact information for at least four references. Electronic submissions in PDF format are
strongly preferred.
THE UNIVERSITY OF TENNESSEE,
KNOXVILLE. The Min Kao Department of Electrical Engineering and
Computer Science (EECS) at The
University of Tennessee, Knoxville
is searching for candidates for tenure-track faculty positions in all areas, but preference will be given to applicants in computer engineering, including dependable
and secure systems, wireless and sensor
networks, embedded systems, and VLSI.
Successful candidates must be committed to teaching at both undergraduate
and graduate levels and have a strong
commitment to research and a willingness to collaborate with other faculty in
research. The Department currently
enrolls approximately 400 undergraduate and about 250 graduate students.
Faculty research expenditures currently
average about $10 M per year. The
department is starting a new growth
phase thanks to gifts from alumnus Dr.
Min Kao and other donors plus additional
A
BEMaGS
F
state funding totaling over $47.5 M for a
new building and endowments for the
department. The University of Tennessee
and Battelle manage the nearby Oak
Ridge National Laboratory, which provides further opportunities for research.
Information about the EECS Department
can be found at http://www.eecs.utk.edu/. Candidates should have an
earned Ph.D. in Electrical Engineering,
Computer Engineering, Computer Science, or equivalent. Previous industrial
and/or academic experience is desirable.
The University welcomes and honors people of all races, genders, creeds, cultures,
and sexual orientations, and values intellectual curiosity, pursuit of knowledge,
and academic freedom and integrity.
Interested candidates should apply
through the departmental web site at
http://www.eecs.utk.edu and submit a
letter of application, a curriculum vitae, a
statement of research and teaching interests, and provide contact information for
three references. Consideration of applications will begin on January 1, 2008, and
the position will remain open until filled.
The University of Tennessee is an
EEO/AA/Title VI/Title IX/Section 504/
ADA/ADEA institution in the provision of
its education and employment programs
and services. All qualified applicants will
receive equal consideration for employment without regard to race, color,
national origin, religion, sex, pregnancy,
UNIVERSITY OF SOUTHERN CALIFORNIA
Faculty Positions
Ming Hsieh Department of Electrical Engineering
The Ming Hsieh Department of Electrical Engineering (http://ee.usc.edu) of the USC Viterbi School
of Engineering (http://viterbi.usc.edu) seeks outstanding faculty candidates for tenure track assistant professor, and tenured associate professor or professor positions. Areas of interest include but
are not limited to: VLSI design; networking for ubiquitous computing and communication; wireless communications; information theory and coding; quantum information processing; control
theory; bio and nano-signal processing; and cognitive signals and systems. Additional areas
include: circuit design for analog/mixed signal processing; wireless data and power transmission
for human-implantable systems; physical sensing and interfaces to biomolecular functions;
nanoscale electronics; nanophotonics; applied nanoscience; and quantum engineering.
Faculty members are expected to teach undergraduate and graduate courses, supervise undergraduate, graduate, and post-doctoral researchers, advise students, and develop a funded research
program. Applicants must have a Ph.D. or the equivalent in electrical engineering or a related field
and a strong research and publication record. Applications must include a letter clearly indicating area(s) of specialization, a detailed curriculum vitae, a one-page statement of current and
future research directions and funding, and contact information for at least four professional references.
The graduate program of the USC Viterbi School of Engineering is listed as seventh in the most
recent US News and World Report rankings. The School’s 170 tenured and tenure track faculty
includes 26 members of the National Academy of Engineering, seven winners of Presidential Early
Career Awards, 45 Early Career Awards, four winners of the Shannon Award, and a co-winner of
the 2002 Turing Award. USC faculty conduct research in leading-edge technologies with about
$160 million in research expenditures annually. The Viterbi School is home to the Information Sciences Institute (ISI), two National Science Foundation-funded Engineering Research Centers (the
Integrated Media Systems Center and the Biomimetic MicroElectronic Systems Center), the USC
Stevens Institute for Technology Commercialization, and the Department of Homeland Security’s
first center of excellence (CREATE).
Applications and the names and email addresses of at least four professional references should be
submitted electronically at http://ee.usc.edu/applications/faculty_position/faculty.php.
USC is an Affirmative Action/Equal Opportunity Employer and encourages applications from
women and members of underrepresented groups.
marital status, sexual orientation, age,
physical or mental disability, or covered
veteran status.
HEWLETT-PACKARD COMPANY is
accepting resumes for the following positions in San Diego, CA: Mechanical
Design Engineer (Reference # SDRUL),
Hardware Reliability Engineer (Reference
# SDRRA), Electrical/Hardware Engineer
(Reference # SDLPA). Please send resumes
with reference number to Hewlett-Packard Company, 19483 Pruneridge
Avenue, Mail Stop 4206, Cupertino, California 95014. No phone calls please.
Must be legally authorized to work in the
U.S. without sponsorship. EOE.
LEAD C++/SYBASE DEVELOPER - EGB Systems, Stamford, CT - Develops complex derivative fixed income, workflow, and pricing systems. Master in CS/Mathematics w/8 yrs exp in C++, Sybase, SQL, multithreading. Apply to careersusa@egbsystems.com.
HEWLETT – PACKARD COMPANY has
an opportunity for the following position
in Cupertino, California. Customer Oriented R&D Engineer. Resp. for providing adv. tech. support to the Support,
Sales, Prof. Svc. organizations & customers. Reqs. education or knowledge of
applications supported by system/network/web/database
administrators;
admin. & troubleshooting Windows desktop & server operating systems; Client/
Server systems & familiarity w/ HTTP
tech.; UNIX admin: Sun Solaris, HP/UX,
Red Hat Linux; Java, C, C++ programming; Network troubleshooting; & high-availability environments. Reqs. incl. Master’s degree or foreign equiv. in CS,
Electrical & CE or related field of study.
Send resume & refer to job #CUPLGI.
Please send resumes with job number to
Hewlett-Packard Company, 19483
Pruneridge Ave., MS 4206, Cupertino, CA
95014. No phone calls please. Must be
legally authorized to work in the U.S.
without sponsorship. EOE.
HEWLETT – PACKARD COMPANY has
an opportunity for the following position
in Cupertino, California. Systems/Software Engineer. Reqs. driver dev. skills,
debugging, problem solving & troubleshooting skills in a Unix kernel environment; understanding internals of HP-UX crash dump subsystem; implementation exp. w/ Storage Networking Industry Assoc. & Applic. Programming Interface. Reqs. incl. Master’s
degree or foreign equiv. in CS, CE, Tech.,
Electrical/Electronic Eng. or related & 5
years of related exp. Send resume & refer
to job #CUPKKA. Please send resumes
with job number to Hewlett-Packard
Company, 19483 Pruneridge Ave., MS
4206, Cupertino, CA 95014. No phone
calls please. Must be legally authorized to
work in the U.S. without sponsorship.
EOE.
UNIVERSITY OF NORTH TEXAS, Computer Science and Computer Engineering Faculty Positions. The
Department of Computer Science and
Engineering at the University of North
Texas invites applications and nominations for one faculty position at the level
of Associate Professor to start in Fall 2008.
Applicants are required to have expertise in Security and Information Assurance, Cyber Trust, or a closely related area. The position requires an earned
doctoral degree in Computer Science,
Computer Engineering or a related field.
Applicants must demonstrate an established record of quality teaching, funding
history, graduate student supervision and
national recognition. The CSE department offers a full complement of degrees
in Computer Science and Computer Engineering. More information about the
department can be found at http://www.cse.unt.edu/. Interested persons
should send applications and nominations including a detailed curriculum vitae
and have at least three letters of reference
sent to: Faculty Search Committee,
Department of Computer Science and
Engineering, P.O. Box 311366, Denton,
Texas, 76203, or electronically to: faculty_search@cse.unt.edu. The committee
will begin its review of applications on
December 1, 2007, and will continue to
review applications once every month.
The committee will accept applications
until the position is filled, or the search is
closed. The University of North Texas is
an Equal Opportunity/Affirmative Action/
ADA employer, committed to diversity in
its faculty and educational programs.
HEWLETT – PACKARD COMPANY is
accepting resumes for the following positions in Palo Alto, CA: Information Technology Developer/Engineer (Reference #
PALBRA and #PALRAP), Pricing IT Functional Analyst/Designer (Reference #
PALMMA), Electrical Hardware Engineer
(Reference # PALDFA), Technical Analyst
(Specialist) (Reference # PALDJA),
Research Specialist (Reference # PALJJA).
Please send resumes with reference number to Hewlett-Packard Company, 19483
Pruneridge Avenue, Mail Stop 4206,
Cupertino, California 95014. No phone
calls please. Must be legally authorized to
work in the U.S. without sponsorship.
EOE.
RESEARCH SCIENTIST IN DOCUMENT
IMAGE ANALYSIS. FX Palo Alto Laboratory, Inc. (FXPAL) provides multimedia
and collaboration technology research for
Fuji Xerox Co., Ltd., a joint venture
between Xerox Corporation of America
and FujiFilm of Japan. We currently have
an immediate opening for a Research Scientist with expertise in analysis of document images. Experience in document
layout analysis, text analysis, graphics
analysis, or in developing applications
integrated with these types of analysis is
desired. We are developing methods for
extracting content from document
images in English and Japanese and for
using the extracted content in applications such as viewing and retrieval. The
candidate should be interested in working on practical applications in a collaborative setting. Requires a Ph.D. in Computer Science or related field, strong
development skills and excellent publication record. For more information about
FXPAL, please see our site at www.fxpal.com. To apply send resumes to: fxpalresumes@fxpal.com and reference job code CM/2.
AUBURN UNIVERSITY, Department
of Computer Science and Software
Engineering, Assistant/Associate
Professor. The Department of Computer Science and Software Engineering
(CSSE) invites applications for one tenuretrack faculty position at the Assistant or
Associate Professor level to begin Fall
2008. We encourage candidates from all
areas of computer science and software
engineering to apply. The following are
preferred research areas: artificial intelligence, simulation, information assurance
and security, database systems, theory,
programming languages, and software
engineering. The candidate selected for
this position must be able to meet eligibility requirements to work in the United
States at the time appointment is scheduled to begin and continue working
legally for the proposed term of employment and be able to communicate effectively in English. Applicants should submit a current curriculum vita, research
vision, teaching philosophy, and the
names and addresses of three references
to: Dr. Kai H. Chang, Chair, Computer
Science and Software Engineering,
Auburn University, AL 36849-5347. kchang@eng.auburn.edu (with copy to mccorba@auburn.edu), 334-844-6300
(Voice). The applicant review process will
begin January 15, 2008. Detailed
announcement of this position can be
found at: http://www.eng.auburn.edu/csse/. Auburn University is an Affirmative
Action/Equal Opportunity Employer.
Women and minorities are encouraged to
apply.
UNIVERSITY AT BUFFALO, THE STATE
UNIVERSITY OF NEW YORK, Faculty
Positions in Computer Science and
Engineering. Celebrating its 40th
anniversary this year, the CSE Department
solicits applications from excellent candidates in pervasive computing and high
performance computing for openings at
the assistant professor level. The CSE
department has outstanding faculty and
is affiliated with successful centers
devoted to biometrics, bioinformatics,
biomedical computing, cognitive science,
document analysis and recognition, and
computer security. Candidates are
expected to have a Ph.D. in Computer
Science/Engineering or related field by
August 2008, with an excellent publication record and potential for developing
a strong funded research program. All
applications should be submitted by January 15, 2008 electronically via
recruit.cse.buffalo.edu. A cover letter, curriculum vitae, and names and email
addresses of at least three references are
required. The University at Buffalo is an
Equal Opportunity Employer/Recruiter.
NOKIA SIEMENS NETWORKS US LLC
has the following exp/degree positions in
the following locations. Travel to unanticipated U.S. worksites may be required.
EOE. Irving, Texas. •Solutions Manager: Provide sales and marketing support and solutions to customers; identify
and qualify business opportunities and
strategies; and define and document
technical solutions. ID# NSN-TX-SM.
•Systems Engineer: Test and troubleshoot network elements with new SW
rollouts and customer trials, or design test
cases and create plan for Network Interoperability Testing. ID# NSN-TX-SE.
Chicago, Illinois. •Services Lead:
Perform GSM engineering operations
including network build-out, deployment; operations and management, services, scoping, procurement, and customer telecom management. ID#
NSN-IL-SL. Herndon, Virginia. •Service Solution Manager: Manage service products portfolios; provide solution
consulting; anticipate customer needs;
identify solutions; and anticipate internal/external business issues and developments. ID# NSN-VA-SSM. Mail resume to:
NSN Recruiter, Nokia Siemens Networks,
6000 Connection Dr., 4E-338, Irving, TX
75039. Must reference ID #.
BOSTON UNIVERSITY. The Department
of Electrical and Computer Engineering
(ECE) at Boston University anticipates
openings for faculty positions at all ranks
in all areas of electrical and computer
engineering. Areas of particular interest
in computer engineering are computer
systems, embedded and reconfigurable
systems, distributed systems, trusted
computing, design automation, VLSI,
computer networks, software engineering, and related areas. The ECE Department is part of a rapidly developing and
innovative College of Engineering. Excel-
lent opportunities exist for collaboration
with colleagues in outstanding research
centers at Boston University, at other universities/colleges, and with industry
throughout the Boston area. The Department has 44 faculty members, 200 graduate students and 250 BS majors. For
additional information, please visit
http://www.bu.edu/ece/. In addition to
a relevant, earned PhD, qualified candidates will have a demonstrable ability to
teach effectively, develop funded research
programs in their area of expertise, and
contribute to the tradition of excellence in
research that is characteristic of the ECE
department. Applicants should send their
curriculum vita with a statement of teaching and research plans to: Professor David
Castañón, Chair ad interim, Department
of Electrical and Computer Engineering,
Boston University, 8 Saint Mary’s Street,
Boston, MA 02215. Boston University is
an Equal Opportunity/Affirmative Action
Employer.
HEWLETT – PACKARD COMPANY has
an opportunity for the following position
in Richardson, Texas. Technical Consultant. Resp. for providing expert consulting to external company customers in
the SAP Business Intelligence (BI) area.
Reqs. in-depth business knowledge,
expert SAP BI knowledge & travel to
TECHNOLOGY IN SUPPORT OF NATIONAL SECURITY
MIT Lincoln Laboratory is a premier employer
applying science and advanced technology
to critical, real-world problems of national
interest.
Computer Scientists
Computer Scientists are needed to
conduct applied research on
infrastructures for large distributed
sensing, communications, and decision
making systems. Candidates should have
experience and interest in several of the
following areas: distributed systems and
applications, autonomous intelligent
agents, knowledge representation and
ontologies, knowledge-based systems,
databases, semantic web and semantic
web services, software and systems
architecture, service oriented architecture
(SOA), and user interfaces.
A recent PhD or an MS with at least 2 years experience in Computer Science or related field is desired.
Please apply online at: http://www.ll.mit.edu/careers/careers.html.
Lincoln Laboratory is an Equal
Opportunity Employer.
M/F/D/V - U.S. Citizenship required.
worksites throughout the U.S. Reqs. incl.
Bachelor’s degree or foreign equiv. in CS,
Eng., Electrical/Electronic Eng. or related
& 5 years of related exp. Send resume &
refer to job #RICJVA. Please send resumes
with job number to Hewlett-Packard
Company, 19483 Pruneridge Ave., MS
4206, Cupertino, CA 95014. No phone
calls please. Must be legally authorized to
work in the U.S. without sponsorship.
EOE.
UNIVERSITY OF WASHINGTON,
Computer Science & Engineering,
Tenure-Track, Research, and Teaching Faculty. The University of Washington's Department of Computer Science
& Engineering has one or more open
positions in a wide variety of technical
areas in both Computer Science and
Computer Engineering, and at all professional levels. A moderate teaching and
service load allows time for quality
research and close involvement with students. Our recent move into the Paul G.
Allen Center for Computer Science &
Engineering expands opportunities for
new projects and initiatives. The Seattle
area is particularly attractive given the
presence of significant industrial research
laboratories as well as a vibrant technology-driven entrepreneurial community
that further enhance the intellectual
atmosphere. Information about the
department can be found on the web at
http://www.cs.washington.edu. We welcome applicants in all research areas in
Computer Science and Computer Engineering including both core and inter-disciplinary areas. We expect candidates to
have a strong commitment both to
research and to teaching. The department is primarily seeking individuals at
the tenure-track Assistant Professor rank;
however, under unusual circumstances
and commensurate with the qualifications of the individual, appointments may
be made at the rank of Associate Professor or Professor. We also seek non-tenured
research faculty at Assistant, Associate and
Professor levels, and full-time annual Lecturers and Sr. Lecturers. Applicants for
both tenure-track and research positions
must have earned a PhD. by the date of
appointment; those applying for lecturer
positions must have earned at least a Master’s degree. Please apply online at
<http://www.cs.washington.edu/news/jobs.html> with a letter of application, a
complete curriculum vitae, statement of
research and teaching interests, and the
names of four references. Applications
received by February 29, 2008 will be
given priority consideration. The University of Washington was awarded an Alfred
P. Sloan Award for Faculty Career Flexibility in 2006. In addition, the University
of Washington is a recipient of a National
Science Foundation ADVANCE Institutional Transformation Award to increase
the participation of women in academic
science and engineering careers. The Uni-
versity of Washington is an affirmative
action, equal opportunity employer. We
are building a culturally diverse faculty
and encourage applications from
women, minorities, individuals with disabilities and covered veterans.
RUTGERS UNIVERSITY, Tenure Track
Faculty Position in Computational
Biology or Biomedical Informatics.
The Department of Computer Science
and the BioMaPS Institute for Quantitative Biology at Rutgers University invite
applications for a tenure track faculty
position at the junior or senior level in the
Department of Computer Science. Candidates should have a strong background
and experience in computational biology
or biomedical informatics, including but
not limited to: structural and functional
genomics and proteomics, biological networks, evolutionary and systems biology,
computational modeling, machine learning and applications, large scale systems
data analysis, and informatics. They
should be prepared to work on interdisciplinary projects making substantive
Computer Science contributions. Applicants should submit a cover letter, curriculum vitae, research summary and
statement of future research goals,
together with a statement of teaching
experience and interests, and arrange for
four letters of recommendation to be sent
on their behalf. Materials should be sent
as PDF files to: Chair, Hiring Committee DCS-BioMaPS, Rutgers University, Department of Computer Science, Hill Center, Busch Campus, Piscataway, NJ 08855 (email: hiringbio@cs.rutgers.edu). For more information on the Department of Computer Science, see http://www.cs.rutgers.edu, and for the BioMaPS Institute, see http://www.biomaps.rutgers.edu. To ensure proper con-
sideration, applications should arrive by
February 1, 2008. Rutgers University is an
Affirmative Action/Equal Opportunity
Employer. Women and minority candidates are especially encouraged to apply.
.Net DEVELOPER – EGB Systems, Stamford, CT - Designs, develops & tests .Net
applications in C#, C++, VB, asp, XML,
COM, DCOM, DTD, XSL, XSLT, CSS, Oracle, MS/SQL server & .Net. Bachelor in
Science/ Eng w/ 5+ yrs exp in .Net tech
apply to careersusa@egbsystems.com.
INFORMATION SYSTEMS ANALYST.
Analyze & design info. mgmt. sys. using
LAN. Dvlp. utilizing MS SQL. Troubleshoot, recover & maintain OS on
2000NT Server & networks. Analyze auto
accessories data including monthly sales,
financial data and performance reports to
President. Req: Master's in Info. Sys.
Mgmt., CIS., or Bus. Admin w/courses in
Computers. 40 hr/wk. Job/Interview Site:
City of Industry, CA. Send Resume to: TC
Spoilers, Inc. @ 228 Sunset Ave, City of
Industry, CA 91744.
COMPUTER WEB DESIGN & DEVELOPMENT COMPANY (San Francisco,
CA) seeking Web Designer/ Developer.
Reqs MS in Web D&D +6 mos exp in PHP,
Perl, AJAX, JavaScript, ActionScript,
XHTML, XML, Shell. S/ware: SSH, Flash,
Dreamweaver, Photoshop, CorelDraw,
WebPosition, Visio, MS Office. Apply to
Roman Bogomazov, President, Papillon
Lab Corp, 1324 Clement St, San Francisco, CA 94118.
HEWLETT – PACKARD COMPANY has
an opportunity for the following position
in Cupertino, CA. Software Design
Engineer. Reqs. exp with SW Dvlpmt in
a Unix envir, C prgrm lang. Script lang
e.g. PERL, Unix Shell prgrm. Unix kernel
internals. Device driver dvlpmt. Must
have completed at least 2 full life cycle
projects. Mass storage e.g. SCSI & Fiber
Channel. Reqs. incl. Bachelor’s degree or
foreign equiv. in CS, CE, EE or related & 5
yrs of related exp. Employer will accept a
combination of foreign degrees and/or
diplomas and/or Certifs determined to be
equiv to a U.S. bachelor’s degree. Send
resume & refer to job #CUPTMA. Please
send resumes with job number to
Hewlett-Packard Company, 19483
Pruneridge Ave., MS 4206, Cupertino, CA
95014. No phone calls please. Must be
legally authorized to work in the U.S.
without sponsorship. EOE.
SR. TECH. CONSULTANT, Ascendant
Technology, Austin, TX. Req.: Master's (or
equiv.) in Comp. Sci., IT, Engineering,
Math., rel. or foreign equiv. Resume only
attn.: C. Jones (#06-1036) 10215 161st
Pl. NE, Redmond, WA 98052.
THE UNIVERSITY OF KANSAS, Faculty Position in Bioinformatics/
Computational Biology. The Center
for Bioinformatics (www.bioinformatics.ku.edu) and the Department of Electrical
Engineering and Computer Science
(http://www.eecs.ku.edu) at The University of Kansas invite applications for a
tenure-track assistant professor position
expected to begin August 18, 2008. The
Bioinformatics initiative is part of a major
expansion in Life Sciences and complements existing strengths in information
technology, structural biology, computational chemistry, biophysics, proteomics,
developmental and molecular genetics,
and drug design. Duties: to establish and
maintain an externally-funded research
program, to participate in teaching, and
to provide service. Required Qualifications: Ph.D. in a discipline related to
Bioinformatics or Computer Science
expected by start date of appointment;
potential for excellence in research in
Bioinformatics; and commitment to
teaching bioinformatics and computer
science courses; strong record of research
accomplishments in at least one of the
following areas: biomolecular networks
modeling, bioinformatics databases, and
computational modeling of biomolecular systems. For the full position
announcement, refer to: http://www2.ku.edu/~clas/employment/FY09_Index.htm and click the Bioinformatics/EECS download button. E-mail application as a single file, including CV, letter of application, statement of past and future research and teaching interests and philosophy to: asawyer@ku.edu. Have at
least three letters of reference sent separately to: Dr. Robert F. Weaver, Professor
and Associate Dean, College of Liberal
Arts and Sciences, c/o Anne Sawyer, The
University of Kansas, 1450 Jayhawk Blvd,
200 Strong Hall, Lawrence, KS 66045-7535. Initial review of applications
begins January 15, 2008 and will continue until the position is filled. EO/AA
Employer.
HEWLETT – PACKARD COMPANY has
an opportunity for the following position
in Palo Alto, California, and various unanticipated sites throughout the United
States. Software System Engineer.
Reqs. Master’s in CS, CE, Engineering or
related and 3 yrs related exp. Reqs. Web
Application program/design, J2EE, XML,
SOAP, C++. Send resume referencing
#PALPBR. Please send resumes with reference number to Hewlett-Packard Company, 19483 Pruneridge Ave., MS 4206,
Cupertino, CA 95014. No phone calls
please. Must be legally authorized to
work in the U.S. without sponsorship.
EOE.
GEORGETOWN UNIVERSITY, Senior
Faculty Position and Chair of
Department, Department of Computer Science. The Department of
Computer Science seeks a dynamic
scholar/teacher for a senior faculty position within the department. It is expected
that within a short time of coming to
Georgetown, this new faculty member
will assume the duties and responsibilities
of department chair. For more information, please visit our website at ____
http://
www.cs.georgetown.edu/.
DATABASE ADMINISTRATOR. Analyze, dsgn, dvlp, test & implmt data warehouses, web based applics & applic systems & s/w using 9i, 10g w/RAC, Data Guard, Physical standby DB, Toad, SQL Loader, exp/imp, Veritas NetBackup, SQL Dvlpr, OEM, PL/SQL, RMAN & UNIX-Solaris/AIX/Linux. Ability to perform &
troubleshoot advance Oracle DBA functions such as backups, recovery, tuning,
monitoring, scripting etc. Provide tech
assistance in connection w/system main-
tenance; prep technical documentation
for user ref; conduct training sessions for
end users. Reqs MS in Comp Sci or
related. Mail resumes to MCS Technology
Solutions LLC, 200 Centennial Ave, Ste
200, Piscataway, NJ 08854.
DATABASE ADMINISTRATOR sought
by Importers/Wholesalers. Respond by
resume only to: J. Wong, ITC Intertrade,
Inc., 1505 Sawyers St., Suite B. Houston,
TX 77007.
HEWLETT – PACKARD COMPANY is
accepting resumes for the following positions in Houston, TX: Business Systems
Analyst (Reference # HOURPU), Team
Leader/Architect (Reference # HOUEAL),
eCommerce IT Business Analyst (Reference # HOUBJO). Please send resumes
with reference number to Hewlett-Packard Company, 19483 Pruneridge
Avenue, Mail Stop 4206, Cupertino, California 95014. No phone calls please.
Must be legally authorized to work in the
U.S. without sponsorship. EOE.
SQL DEVELOPER - EGB Systems, Stamford, CT - Develops SQL database applications & stored procedures. Knowledge of Java, J2EE, JSP, JSF Portal, JavaScript, HTML, XML, TOAD, WIN2000/XP, Crystal Report 9.0 required. Master in CS w/5+ yrs exp in SQL Server. Apply to careersusa@egbsystems.com.
RUTGERS UNIVERSITY, Tenure Track
Faculty Position in Computational
Biomedicine, Imaging and Modeling. The Rutgers University Department
of Computer Science and the Center for
Computational Biomedicine, Imaging
and Modeling (CBIM) seeks applicants in
computer graphics and related areas, for
a tenure-track faculty position starting
September 2008. We're particularly interested in synergy with CBIM and thus
we’re excited about receiving applications primarily in all areas of computer
graphics, as well as related areas such as
visualization, computer vision, machine
learning, and human-computer interaction. Rutgers University offers an exciting
and multidisciplinary research environment and encourages collaborations
between Computer Science and other
disciplines. Applicants should have
earned or anticipate a Ph.D. in Computer
Science or a closely related field, should
show evidence of exceptional research
promise, potential for developing an
externally funded research program, and
commitment to quality advising and
teaching at the graduate and undergraduate levels. Applicants should send
their curriculum vitae, a research statement addressing both past work and
future plans, a teaching statement and
arrange for four letters of recommenda-
tion to be sent on their behalf to hiring@cs.rutgers.edu. If electronic submis-
sion is not possible, hard copies of the
application materials may be sent to: Professor Dimitris Metaxas, Hiring Chair,
Computer Science Department, Rutgers
University, 110 Frelinghuysen Road, Piscataway, NJ 08854. Applications should
be received by February 15, 2008, for full
consideration. Rutgers University is an
Affirmative Action/Equal Opportunity
Employer. Women and minority candidates are especially encouraged to apply.
TENNESSEE TECHNOLOGICAL UNIVERSITY. The Department of Electrical
and Computer Engineering (ECE) at Tennessee Technological University invites
applications for a tenure-track position at
the Associate or Assistant Professor level
in Computer Engineering beginning in
August 2008. This position includes an
initial appointment as a Stonecipher Faculty Fellow in Computer Engineering.
Candidates for this position must have an
earned Ph.D. degree in computer engineering or closely related areas with
expertise in computer engineering areas
such as computer networks, parallel and
distributed systems, sensor networks
and/or computer security. Screening of
applications will begin February 1, 2008
and continue until the position is filled.
See http://www.tntech.edu/ece/jobs.html for details. AA/EEO.
CDI CORP seeks IT Architect/Specialist
for Memphis, TN to provide varied set of
networking systems services in support of
complex networks/systems environments. Req BS, 1 yr technical work exp
and Cisco CCNA or CCNP cert. Mail
resume ATTN: Generalbox, 1960 Research Drive, Suite 200, Troy, MI 48083. EOE.
HEWLETT - PACKARD COMPANY is
accepting resumes for the following position in Houston, TX: IT Analyst (Business Systems Analyst) (Reference #
HOUVKU). Please send resumes with reference number to Hewlett-Packard Company, 19483 Pruneridge Avenue, Mail
Stop 4206, Cupertino, California 95014.
No phone calls please. Must be legally
authorized to work in the U.S. without
sponsorship. EOE.
PROGRAMMER ANALYST: Jobsite:
Sacramento CA. MS Electrical Engineering or Computer Engineering required.
Send resume: Logic House Ltd. 78-670
Highway 111 #218, La Quinta, CA
92253.
COMPUTER & INFO SYSTEMS MANAGER sought by Chicago-based co. Create in house prgm for employee use: for
referral, rehab, payroll & frequency, mktg,
mailing, billing, supervisor nurse &
administrator depts, to benefit co. Min
reqmt 1 yr related work exp & BS in
Comp Info System. Send resumes to
Angela, Healing Hands Home Care, Inc,
6730 W. Higgins Ave, Chicago, IL 60656.
SR. LEAD CONSULTANT: Lead implemntation for supply chain, dist., create
financial modules of Oracle apps.
Implemnt; plan project, estimate, track,
reprt; resolve custmer issues; develop fnctional specs; maintain opratnal documntatn. Train users, post-prod supprt, design
custom flows/specs for extensns. BS/BA
Comp Sci, Commerce, Bus. + 2 yrs exp as
System Anal, Prog. Anal, or any related in
IT field, exp must incl. requirements gathering, analysis, process design, app config, systm test, industry exp and consult
exp w/ Oracle dist. apps., use of Oracle
distrib. suite. Will accept any suitable
comb of edu, training, exp. Arabella Lam,
RCM Technologies, 1055 W. 7th Street,
Suite 1820, Los Angeles, CA 90017. Ref.
Code RCMPL1. Include ref. code.
GEORGE MASON UNIVERSITY,
Department of Computer Science,
Volgenau School of Information
Technology and Engineering, Faculty Positions in Bioengineering.
The Volgenau School of Information
Technology and Engineering at George
Mason University is building a program
in bioengineering, including computational approaches to biology. As part of
this multidisciplinary initiative, tenure
track openings are available at the Assistant, Associate and Full Professor levels in
the School. Background, experience, and
interest will determine the departmental
affiliation of successful applicants. Within
this initiative, the Department of Computer Science is seeking faculty members
who can establish strong research and
teaching programs in the area of computational biology, bioinformatics, and
biometrics. Minimum qualifications
include a Ph.D. in Computer Science,
Bioinformatics, or a closely related field,
demonstrated potential for excellence
and productivity in research applying
computational approaches to address
fundamental questions in biology or
medicine, and a commitment to high
quality teaching. Candidates for a senior
position must have a strong record of
external research support. The School has
more than 100 full-time faculty members,
with over 40 in the CS Department. The
research interests of the CS department
include artificial intelligence, algorithms,
computational biology and bioinformatics, computer graphics, computer vision,
databases, data mining, image processing, security, knowledge engineering,
parallel and distributed systems, performance evaluation, real-time systems,
robotics, software engineering, visualization, and wireless and mobile computing.
The department has several collaborative
research and teaching activities in computational biology and bioinformatics
with other Mason units. For more information, visit our Web site: http://cs.gmu.edu/. For full consideration please
submit application and application materials on-line at http://jobs.gmu.edu (position number F9086z). To apply online
you will need a statement of professional
goals including your perspective on
teaching and research, a complete CV
with publications, and the names of four
references. The review of applications will
begin immediately and will continue until
the positions are filled. George Mason
University is a growing, innovative, entrepreneurial institution with national distinction in several academic fields. Enrollment is 30,000, with students studying
in over 100 degree programs on four
campuses in the greater Washington, DC
area. Potential interactions with government agencies, industry, medical institutions, and other universities abound.
GMU is an equal opportunity/affirmative
action employer that encourages diversity.
IOWA STATE UNIVERSITY OF SCIENCE AND TECHNOLOGY, College of
Liberal Arts and Sciences,
Announcement of Opening for
Tenure-track Position. The Department of Computer Science at Iowa State
University is seeking outstanding candidates to fill a tenure-track position, to
commence in August, 2008. We are especially interested in applicants at the assistant professor level in Programming Languages and/or Software Engineering.
Successful candidates will have demonstrated potential for outstanding research
and instruction in computer science. A
Ph.D. or equivalent in Computer Science
or a closely related field is required. Our
department currently consists of 27 fulltime tenure-track faculty members. We
offer B.S., M.S., and Ph.D. degrees in
Computer Science and participate in new
B.S. degrees in Software Engineering and
in Bioinformatics and Computational Biology. We also participate in interdepartmental graduate programs in Bioinformatics and Computational Biology,
Human-Computer Interactions, and
Information Assurance. We have about
330 B.S. students, 60 M.S. students, and
110 Ph.D. students. Almost all graduate
students are supported by research or
teaching assistantships. We have strong
research and educational programs in
Algorithms and Complexity, Artificial
Intelligence, Bioinformatics and Computational Biology, Databases, Data Mining,
Information Assurance, Programming
Languages, Multimedia Systems, Operating Systems and Networks, Robotics,
and Software Engineering. Our department has over $6.5 million in active
research grants. With the above interdisciplinary activities included, we con-
tribute to active research and training
grants totaling approximately $20 million. A dynamic faculty, a moderate
teaching load (typically 3 courses per year
with one course reduction for active
researchers and possible further reductions for junior faculty), a strong graduate program, and a well-funded research
program provide an excellent academic
environment. In addition, cutting-edge
research and education are nurtured
through interdisciplinary interactions
facilitated by the Laurence H. Baker Center for Bioinformatics and Biological Statistics, the Center for Computational
Intelligence, Learning and Discovery, the
Center for Integrative Animal Genomics,
the Cyber Innovation Institute, the Information Assurance Center, the Department of Energy's Ames Laboratory, and
the Virtual Reality Application Center.
Iowa State University is a major land-grant
university located in Ames, Iowa. It is a
pleasant, small, cosmopolitan city with a
population of over 50,000 (including
about 27,000 students), a vibrant cultural
scene, an excellent medical clinic, and a
secondary school system that ranks
among the best in the United States.
Ames is frequently ranked among the
best communities to live in North America: 20th nationally among best places to
live (2002), 3rd nationally in terms of
highly educated workforce for knowledge-based industry (2005), 12th nationally for its public schools (2006). Applicants should send a curriculum vita,
including teaching and research statements and the names and addresses of at
least three references, to: Chair of Search
Committee, Department of Computer
Science, Iowa State University, Ames,
Iowa 50011-1041; Fax: 515-294-0258;
Tel: 515-294-4377; E-mail: facultysearch@cs.iastate.edu; Web: www.cs.iastate.edu. Review of applications will begin
on December 1, 2007 and will continue
until the position is filled. Iowa State University is an equal opportunity employer.
Women and members of underrepresented minorities are strongly encouraged to apply. For more information,
please visit us at http://www.cs.iastate.edu.
FUTURE TECHNOLOGY ASSOCIATES
in NYC seeks Consultant to design,
develop, deploy & maintain BizTalk, SQL
& EDI solutions for in- & out-bound documents; Req BS/BA and 1 yr BizTalk Architecture & development, 1 yr w/ .NET,
SQL, XML/XSLT & MQ Series BizTalk
client configuration & 6 mos. w/ EDI &
Microsoft MOM. Email resume to
tsevint@schools.nyc.gov.
SYSTEM SOFTWARE ENGINEER: MS
in Electrical or Computer Engineering req.
Send resume: Sundance Digital Signal
Processing Inc., 4790 Caughlin Pkwy
#233, Reno, NV 89509.
TEXAS TECH UNIVERSITY: The Department of Computer Science invites applications for one or more Abilene faculty
positions at any rank starting Fall 2008.
Preferences are given to candidates with
background in software architecture, formal methods, verification, validation, and
security; web-based, trusted and intelligent software development. Other areas
will be considered for excellent candidates. Applicants must have a Ph.D.
degree in computer science or a closely
related field. Successful candidates must
have demonstrated achievements or
potential for excellence in research and
teaching. The Abilene facility offers only
graduate programs at the masters and
the doctoral levels to students in Abilene
and other cities via ITV and distance learning. The department has received funding support from the Abilene foundation
to provide research assistantships in Abilene. We offer competitive salaries, a
friendly and cooperative environment,
and excellent research facilities. Review
will begin in January 2008 and continue
until the positions are filled. A letter of
application, curriculum vitae, brief
research and teaching goals, and three
letters of reference should be submitted
electronically at http://jobs.texastech.edu. Please use Requisition number
75218. Additional information is available
at http://www.cs.ttu.edu. Texas Tech
University is an equal opportunity/affirmative action employer and actively seeks
applications from minority and female
applicants.
GEORGE MASON UNIVERSITY,
Department of Computer Science.
The Department of Computer Science at
George Mason University invites applications for a tenure-track faculty position at
the rank of Assistant Professor beginning
Fall 2008. We are seeking a faculty member who can establish strong research and
teaching programs in the area of computer game development. Applicants
must have a research focus in an area in
computer games technology — for
example, in artificial intelligence, computer graphics, real-time animation, simulation and modeling, distributed and
multi-agent systems, or software engineering, as applied to computer games.
Minimum qualifications include a Ph.D.
in Computer Science or a related field,
demonstrated potential for excellence
and productivity in research, and a commitment to high quality teaching. The
department currently offers a graduate
certificate in Computer Games Technology, and is adding a concentration in
Computer Game Design to its undergraduate program. The Computer Game
Design concentration is being developed
in close collaboration with faculty in the
College of Visual and Performing Arts at
Mason. For more information on these
and other programs offered by the
department, visit our Web site:
http://cs.gmu.edu/. The department has
over 40 faculty members with wide-ranging research interests including artificial
intelligence, algorithms, computer graphics, computer vision, databases, data mining, security, human computer interaction, parallel and distributed systems,
real-time systems, robotics, software engineering, and wireless and mobile computing. George Mason University is
located in Fairfax, Virginia, a suburb of
Washington, DC, and home to one of the
highest concentrations of high-tech firms
in the nation. There are excellent opportunities for interaction with government
agencies and industry, including many
game and “serious game” development
companies. In particular, the Washington
DC region is fast becoming a hub for the
serious games industry. Fairfax is consistently rated as being among the best
places to live in the country, and has an
outstanding local public school system.
For full consideration please submit application and application materials on-line
at http://jobs.gmu.edu (position number
F9084z). To apply, you will need a statement of professional goals including your
perspective on teaching and research, a
complete C.V. with publications, and the
names of four references. The review of
applications will begin immediately and
will continue until the position is filled.
GMU is an equal opportunity/affirmative
action employer. Women and minorities
are strongly encouraged to apply.
CAL POLY, SAN LUIS OBISPO, Computer Engineering. The Computer Science Department and Computer Engineering Program, at Cal Poly, San Luis
Obispo, invite applications for a full-time,
academic year, tenure track Computer
Engineering faculty position beginning
September 8, 2008. Rank and salary is
commensurate with qualifications and
experience. Duties include teaching core
undergraduate courses, and upper-division and master's level courses in a specialty area; performing research in a mainstream area of computer engineering;
and service to the department, the university, and the community. Applicants
from all mainstream areas of computer
engineering are encouraged to apply. A
doctorate in Computer Engineering,
Computer Science, Electrical Engineering,
or a closely related field is required. Candidates in the areas of architecture and
parallel computing are especially encouraged to apply. Candidates must have a
strong commitment to teaching excellence and laboratory-based instruction;
dedication to continued professional
development and scholarship; and a
broad-based knowledge of computer
engineering. Demonstrated ability in written and oral use of the English language
is required. Computer Engineering is a
joint program between the Departments
of Computer Science and Electrical Engineering. Cal Poly offers Bachelor's
Degrees in Computer Engineering, Computer Science, Software Engineering and
Electrical Engineering, and Master's
Degrees in Computer Science and Electrical Engineering. Cal Poly emphasizes
"learn by doing" which involves extensive
lab work and projects in support of theoretical knowledge. The available computing facilities for instructional and faculty support are modern and extensive.
To apply, please visit WWW.CALPOLYJOBS.ORG to complete a
required online faculty application, and
apply to Requisition #101387. For full
consideration, candidates are required to
attach to their online application: (1)
resume, (2) cover letter, (3) candidate's
statement of goals and plans for teaching
and research. Upon request, candidates
selected as finalists will be required to submit three letters of reference and official
transcripts for final consideration. Review
of applications will begin January 7, 2008;
applications received after that date may
be considered. Questions can be emailed
to: cpe-recruit@csc.calpoly.edu. Please
include requisition #101387 in all correspondence. For further information about
the department and its programs, see
www.csc.calpoly.edu and www.cpe.calpoly.edu. Cal Poly is strongly commit-
ted to achieving excellence through cultural diversity. The university actively
encourages applications and nominations
of all qualified individuals. EEO.
Online Advertising
Are you recruiting for a computer scientist or engineer?
Submission Details: Rates are $160.00 for 30 days with print ad in Computer magazine.
Send copy to:
Marian Anderson
IEEE Computer Society
10662 Los Vaqueros Circle
Los Alamitos, California 90720-1314;
phone: +1 714.821.8380;
fax: +1 714.821.4010;
email: manderson@computer.org.
http://computer.org
BOOKSHELF
Maturing Usability: Quality in
Software, Interaction and Value, Effie Law, Ebba Hvannberg,
and Gilbert Cockton, eds. This book’s
essays supply an understanding of how
current research and practice contribute
to improving quality from the perspectives of software features, interaction
experiences, and achieved value.
Divided into three parts, the book analyzes how using development tools can
enhance system usability and how
methods and models can be integrated
into the development process to produce effective user interfaces.
The essays address theoretical frameworks on the nature of interactions,
techniques, and metrics for evaluating
interaction quality, and the transfer of
concepts and methods from research
to practice. They also assess the impact
a system has in the real world, focusing
on increasing the value of usability
practice for software development and
on increasing value for users.
Springer; www.springer.com; 978-1-84628-940-8; 429 pp.
Governance and Information Technology: From Electronic Government to Information Government,
Viktor Mayer-Schönberger and David
Lazer, eds. Developments in information and communication technology
and networked computing over the
past two decades have given rise to the
notion of electronic government, which
most commonly refers to the delivery
of public services over the Internet. This
volume argues for a shift from electronic government’s narrow focus on
technology and transactions to the
broader perspective of information
government—the information flows
within the public sector, between the
public sector and citizens, and among
citizens—as a way to understand the
changing nature of governing and governance in an information society.
The essays describe the interplay
between recent technological developments and evolving information
flows and the implications of different information flows for efficiency,
political mobilization, and democratic
accountability.
MIT Press; mitpress.mit.edu; 0-262-63349-3; 352 pp.
Character Recognition Systems: A
Guide for Students and Practitioners, Mohamed Cheriet, Nawwaf
Kharma, Cheng-Lin Liu, and Ching Y.
Suen. This book provides practitioners
and students with the fundamental
principles and state-of-the-art computational methods for reading printed
texts and handwritten materials. The
authors present information analogous
to the stages of a computer recognition
system, helping readers master the theory and latest methodologies used in
character recognition.
Each chapter contains major steps
and tricks to handle the tasks
described. Researchers and graduate
students in computer science and engineering might find this book useful for
designing a concrete system in OCR
technology, while practitioners might
find it a valuable resource for the latest advances and modern technologies
not covered elsewhere in a single book.
Wiley-Interscience; www.wiley.com;
978-0-471-41570-1; 360 pp.
Digital Convergence – Libraries of
the Future, Rae Earnshaw and John
Vince, eds. The convergence of IT,
telecommunications, and media is revolutionizing how information is collected, stored, and accessed. Digital
information preserves content accuracy
in a way other systems do not. High-bandwidth transmission from one place
to another on the planet is now possible. Ubiquitous and globally accessible,
information can be held and accessed
just as easily on a global network as on
a local personal computer or in a local
library. Devices are increasingly intelligent and network-ready, while user
interfaces become more adaptable and
flexible. Digital intelligence is becoming seamless and invisible, enabling a
greater focus on content and the user’s
interaction with it.
This revolution affects the development and organization of information
and artifact repositories such as
libraries, museums, and exhibitions,
while also changing how physical and
digital aspects are mediated. Digital
convergence is bringing about changes
that are substantial and likely to be
long-lasting. This volume presents key
aspects in the areas of technology and
information sciences, from leading
international experts.
Springer; www.springer.com; 978-1-84628-902-6; 416 pp.
Ajax in Action, Dave Crane and Eric
Pascarello with Darren James. Web
users are tiring of the traditional Web
experience. They get frustrated losing
their scroll position, become annoyed
waiting for refresh, and struggle to
reorient themselves on every new page.
With asynchronous JavaScript and
XML, known as Ajax, users can enjoy
a better experience. Ajax is a new way
of thinking that can result in a flowing
and intuitive interaction with the user.
This book explains how to distribute the application between the client
and server—by using a nested MVC
design, for example—while retaining
the system’s integrity. It also shows
how to make the application flexible
and maintainable using good, structured design to help avoid problems
like browser incompatibilities. Above
all, this book showcases the many
advantages gained by placing much
of the processing in the browser.
Developers with prior Web technologies experience might find this book
especially useful.
Manning; www.manning.com/crane; 1-932394-61-3; 680 pp.
Send book announcements to newbooks@computer.org.
SECURITY
Natural-Language
Processing for
Intrusion Detection
Allen Stone, Jacob & Sundstrom
EBIDS-SENLP, a system using
natural-language processing,
shows promise in defending
against social-engineering
attacks.
Intrusion-detection systems seek to
electronically identify malicious
traffic as it enters a defended network. Traditionally, IDSs have
used electronic signatures or traffic analysis to identify attack traffic.
Spam filtering keys on specific electronic segments of an e-mail and filters out those identified as possibly
malicious.
Social engineering, a unique type of
attack traffic, attempts to compromise
a network or system’s security metrics
by exploiting the human end user
through natural language, based on
common psychological flaws and
deception. These attacks have been
difficult to defend against in the past
with IDSs because natural language is
highly variable. An attacker could easily reword the text, while preserving
the attack’s underlying conceptual meaning. Since this type of attack
exploits the end user, personnel resistance training has been the most effective means of defense, but all training
wears off over time, and even properly
trained personnel can make mistakes.
Natural-language processing teaches
computers the semantic meaning of
natural-language text. Thus, an NLP
system reads plain English (among
other languages) and categorizes what
it’s seen in terms of conceptual themes
and ontological concepts. The University of Maryland, Baltimore County,
is home to an advanced NLP system,
OntoSem.
Defining rules for EBIDS
EBIDS, which I developed as part of
my master’s thesis at UMBC, simply
parses e-mail text and strips out the
body. It sends the message to a detection mechanism, then uses a threshold
system to decide if the e-mail is likely
malicious. The real value in EBIDS, as
with most IDSs, is in the definition of
its rules.
EBIDS rules are based on the concept of deception, rather than literal
string matches, although it can incorporate literal strings where it would
be best to do so. The concepts of
deception are more or less an equivalence set of literal strings, using
OntoSem’s categorization to build
these sets.
The language for rules allows for
detailed and complex signatures. Each
rule comprises some number of signatures in the following format (a code sketch follows the list):
• a comment field;
• either “SEM” for semantic analysis, “REF” for referential analysis,
or “LIT” for literal string;
• the contextual concept of deception that the signature looks for;
and
• a weighting to the rule.
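Figure 1 gives the column’s actual rule example; purely as an illustration of the four-field signature format and the threshold decision described above, the structure might be modeled as in the following Python sketch. The class, the example concepts, the weights, and the 1.0 threshold are assumptions for illustration, not EBIDS’s real implementation.

from dataclasses import dataclass

@dataclass
class Signature:
    comment: str    # free-text comment field
    kind: str       # "SEM" (semantic), "REF" (referential), or "LIT" (literal string)
    concept: str    # contextual concept of deception (or literal string) to look for
    weight: float   # weight this signature contributes when it matches

# Hypothetical rule for the account-compromise category discussed later in the column.
account_compromise_rule = [
    Signature("account sounds hacked", "SEM", "ACCOUNT-COMPROMISE-LANGUAGE", 0.5),
    Signature("official-sounding corporation name", "SEM", "CORPORATION-NAME", 0.2),
    Signature("threat to suspend the account", "SEM", "SERVICE-DENIAL-THREAT", 0.3),
    Signature("asks user to follow a link", "LIT", "click the link below", 0.4),
]

def score_email(matched, rule):
    """Sum the weights of signatures whose concept (or literal) was found in the body.
    In EBIDS the matches would come from OntoSem's analysis of the message text;
    here 'matched' is simply a set of concept/literal strings."""
    return sum(sig.weight for sig in rule if sig.concept in matched)

THRESHOLD = 1.0  # assumed value; the column only says a threshold system is used

def is_likely_malicious(matched, rule):
    return score_email(matched, rule) >= THRESHOLD

With a representation like this, adding a new category of deception is just another list of signatures, and the weighting lets several weak cues accumulate into an alert without any single literal string being required.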
SOCIAL-ENGINEERING ATTACKS
Social-engineering defense is flawed
because it depends on humans.
Effectively training the system—rather
than the end user—would eliminate
that flaw. The E-mail-Based Intrusion
Detection System to find Social
Engineering using Natural-Language
Processing (EBIDS-SENLP) does this
by reading plain-text e-mails and
sending the e-mail’s natural-language
body to OntoSem. However, OntoSem
needs pretraining to look for certain
segments of language as concepts of
deception to meaningfully analyze
the raw text and tell EBIDS what
types of deception the e-mail could
contain.
OntoSem proved too time-inefficient for the project’s testing, so I used an equivalence set to work around this until I could modify and streamline it for this unintended purpose.
The F.F. Poirot project attempts to
find phishing e-mails sent to financial
organizations using a similar ontology, but the structure of their rules is
rigidly defined, and the narrow focus
on financial groups is difficult to
expand. EBIDS isn’t restricted to
financial fraud, has an easily updated
ontology (OntoSem), and doesn’t
require as rigid a rule definition.
Figure 1 shows an example of an
EBIDS detection rule.
Rules for testing
The specific rules for testing EBIDS
focus primarily on phishing, because
phishing e-mails entirely comprised
the “known bad” corpus. However,
my intent is to expand the system to
other types of social engineering.
Figure 1. An EBIDS detection rule. Each rule is composed of some number of signatures in
a particular format.
[Figure 2 is a bar chart: for EBIDS, hit rate 75.45 percent, miss rate 24.55 percent, and false-positive rate 1.92 percent; for SpamAssassin, hit rate 90.87 percent, miss rate 9.13 percent, and false-positive rate 3.57 percent.]
Figure 2. EBIDS and SpamAssassin performance numbers. Hits, misses, and false-positive percentages are used as metrics to evaluate an IDS’s effectiveness.
The following rule set is used for
testing:
• Account compromise. Detects
when an attacker falsely claims
that a user’s account or computer
might be hacked, compromised, or
otherwise in unauthorized users’
hands. Such attacks can include
language that intimates an account
compromise, a corporation name
to make the entire thing sound official, a threat to suspend a user’s
account, or an information request
asking the user to follow a link or
reply to the e-mail with account
information.
• Financial opportunity. Detects when
a windfall offer is being sent, clearly
spelling out a monetary value. Such
e-mails typically use congratulatory
language to convince recipients that
the offer is exciting; money language, dollar amounts, or the word
“free”; a corporation name; and an
information request.
• Account change or update. Detects
requests for account verification
due to a change in a user’s account.
Lost account information or
account updating are common justifications for requesting such
information. These requests typically include language implying
account changes, a corporation
name, an information request, and
a threat of denial of service with a
time limit.
• Opportunity. Detects all windfall
offers that don’t strictly refer to
money. This could include vacation offers, credit card offers, and
stock tips. For the purposes of this
experiment, I focused mainly on
credit cards. Such e-mails included
language that referred to credit cards, congratulatory language, a corporation name, and an information request.

Triggers
Kevin Mitnick describes the psychological triggers these attacks typically use in The Art of Deception: Controlling the Human Element of Security (Wiley, 2002). These triggers include the following:
• Authority. People are more likely
to comply when someone with perceived authority makes the request.
• Reciprocation. People tend to comply with a request when they
believe they’ve been given or
promised something of value.
• Consistency. People will sometimes
comply with a request if they’ve
made a public commitment or
endorsed a cause.
• Social validation. People will tend
to comply with a request when it
appears to be in line with what others are doing.
• Scarcity. People will comply with
requests when the sought-after
object is in short supply, valued by
others, or only available for a limited time.
TESTING
To define the project’s corpus, I
divided it into known-bad, a publicly
available phishing corpus from Jose
Nazario (http://vx.netlux.org/lib/anj01.html), and known-good, a sub-
division of the publicly available
Enron corpus that was absent of
phishing e-mails.
I then divided these two corpuses
into testing and training sets to
develop the signatures with some of
the corpuses’ literal language, since
language evolves and the data being
tested doesn’t necessarily use the same
language as when I researched EBIDS.
The testing and training sets were
mutually exclusive. The training set
wasn’t used to test the system and
vice versa.
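The column does not say how the split was made; purely as an illustration, a mutually exclusive training/testing partition of each corpus could be produced with something like the sketch below. The 50/50 split and the placeholder messages are assumptions, not details from the study.

import random

def split_corpus(emails, train_fraction=0.5, seed=0):
    """Randomly partition a corpus into mutually exclusive training and testing sets."""
    shuffled = list(emails)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]  # (training set, testing set)

# Illustrative use, with placeholder strings standing in for the real corpora:
known_bad = ["phishing message 1", "phishing message 2", "phishing message 3"]
known_good = ["enron message 1", "enron message 2", "enron message 3"]
bad_train, bad_test = split_corpus(known_bad)
good_train, good_test = split_corpus(known_good)
# Signatures are developed only from the training halves; the testing halves are
# held out for evaluation, so no message appears in both sets.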
I used other tools to compare the
evaluation of EBIDS results. I installed
SpamAssassin with an updated signa-
ture set. I also installed Snort and
updated it with the most recent rules,
but I couldn’t produce any meaningful
results. It might have been limitations
on my end, so I’m not claiming that
Snort can’t be used to detect social
engineering, but I couldn’t use it, perhaps due to the system’s nature.
Summary of results
Several metrics have been established to evaluate the effectiveness of an IDS, as Figure 2 shows. They
include the hit rate, the percentage of
malicious e-mails correctly identified;
the false-positive rate, the percentage
of innocuous e-mails incorrectly
flagged as malicious; and the adjusted
false-positive rate, the percentage of
false positives plus bad traffic that
was correctly flagged for the wrong
reasons.
As Table 1 shows, with only four
detection rules written, EBIDS
achieved a relatively high hit rate of
75 percent. That’s somewhat lower
than SpamAssassin’s 90 percent hit
rate; however, SpamAssassin's signature set is more extensive, requiring
783 Kbytes of storage and comprising
thousands of signatures.
EBIDS didn't outperform SpamAssassin at detecting malicious social engineering e-mails as they occur. However, the system outperformed SpamAssassin when it came to false positives, with both systems faring relatively well in that regard. False positives were expected to be high with EBIDS because of the nature of its algorithm as it relates to string-matching plain English.
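For readers who want to re-derive these rates, the short sketch below (Python, written for this column; the counts come from Table 1, and the denominator for the adjusted rate is inferred from the published figures rather than stated by the author) shows how the three metrics relate to the raw alert counts.

# Counts taken from Table 1 (EBIDS column).
known_bad = 224                 # phishing e-mails in the test corpus
non_se = 312                    # innocuous (non-social-engineering) e-mails
correct_alerts = 169
completely_false_alerts = 6
partially_false_alerts = 71     # bad traffic flagged for the wrong reasons

hit_rate = correct_alerts / known_bad                     # ~0.7545
false_positive_rate = completely_false_alerts / non_se    # ~0.0192
adjusted_false_alerts = completely_false_alerts + partially_false_alerts   # 77
# Assumption: the adjusted rate is normalized by the full corpus (224 + 312 = 536).
adjusted_fp_rate = adjusted_false_alerts / (known_bad + non_se)             # ~0.1437

print(hit_rate, false_positive_rate, adjusted_fp_rate)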
Problems with EBIDS
A few common foibles led to a higher-than-expected EBIDS miss rate (the inverse of the hit rate). One such problem was a subset of the Nazario
phishing corpus that included e-mails
with subject lines containing the actual
misleading natural language and bodies composed entirely of an image or
executable/zipped text.
I foresaw this problem, but saw
more of that type of e-mail than
expected. It's probably possible to correct the problem by checking the e-mail subject line and testing for blank e-mail bodies that include only executable/zipped text or images.

Table 1. Summary of EBIDS and SpamAssassin results.

Metric                          EBIDS      SpamAssassin
E-mails                         536        549
Known bad e-mails               224        241
Non-SE e-mails                  312        308
Alerts                          175        230
Correct alerts                  169        219
Completely false alerts         6          11
Partially false alerts          71         —
Adjusted false alerts           77         —
Hit rate                        0.75446    0.90871
False-positive rate             0.01923    0.03571
Adjusted false-positive rate    0.14365    —
EBIDS also partially false-fired on a specific PayPal security-audit fraud, and since that fraud was the most frequently occurring e-mail in the Nazario corpus, the adjusted false-positive rate was higher than expected. Of course, because the plain false-positive rate was lower than expected, the adjusted rate just about leveled out to expectation despite the high number of adjusted false positives.
Comparison with SpamAssassin
SpamAssassin performed much better in its hit rate. It caught 90 percent
of the phishing e-mails, looking for
technical points in each e-mail, such
as the structure of the e-mail code, the
legitimacy of the sender and date
fields, and other such criteria.
The system is finely tuned to detect
spam as it comes in, and the e-mails in
the corpus aren’t extremely recent, so it
fared as well as, if not better than,
expected. However, its false-positive
rate was higher than expected, nearly double that of EBIDS. To put that in perspective, the rate was still manageable
at approximately 3 percent, which isn’t
bad for most practical applications.
Overall, SpamAssassin would be the
better choice to run in a production
environment, even if the current iteration of EBIDS were feasible in production. However, with a more complete signature set, EBIDS might
compare more favorably in the future.
The EBIDS project has proven that
intrusion detection can effectively
detect social engineering in
e-mails. Although ripe for refinement,
with only four rules developed, the system performed nearly as well as the
industry standard. It could eventually
let operators put network defense in
the hands of a system, rather than
users. ■
Allen Stone is a software engineer at
Jacob & Sundstrom. Contact him at
astone@jsai.com.
Editor: Jack Cole, US Army Research Laboratory's Network Science Division, jack.cole@ieee.org; http://msstc.org/cole
HOW THINGS WORK
SMS: The Short
Message Service
Jeff Brown, Bill Shipman, and Ron Vetter
University of North Carolina Wilmington
Although it is a widely used
communication mechanism for cell
phone users, SMS is far more than
just a technology for teenage chat.
Cell phones have become an
integral part of the modern
world, providing human connectivity in a way never before possible. A recent United
Nations report (www.cellular-news.com/story/25833.php) estimated that the total number of mobile phone subscribers in the world now exceeds 2.68 billion. Around 80 percent of the world's population has mobile phone coverage, with 90 percent coverage forecast by 2010 (www.textually.org/textually/archives/2006/10/013841.htm).
While most cell phones are used for
their original intent—making telephone calls wirelessly—these devices
are also loaded with other features
that are often little used or even
ignored. One feature that users have
begun to fully exploit in recent years is
the short message service or text messaging. This basic service allows the
exchange of short text messages between subscribers. According to Wikipedia, the first commercial short text
message was sent in 1992 (http://en.wikipedia.org/wiki/Short_message_service).
Although the popularity of text
messaging is well established in many
countries, in others, such as the US,
interest in the thumb-driven phenomenon has only recently skyrocketed.
Consider these statistics from the
Cellular Telecommunications & Internet Association (CTIA), the international association for the wireless
telecommunications industry:
• In 1995, roughly 13 percent of the
US population had cell phones; by
June 2007, it was 80 percent.
• In 2000, 14.4 million text messages were sent per month; by June
2007, the number had increased to
28.8 billion per month, which represents a 130 percent increase over
June 2006.
• In the second quarter of 2007,
Verizon Wireless alone says it handled 28.4 billion text messages.
Based on these numbers, it is
evident that SMS messaging is becoming a widely used communication
mechanism for cell phone users. The
“Typical SMS Applications” sidebar lists the variety of uses for this
technology.
THE SMS SPECIFICATION
SMS technology evolved out of the
Global System for Mobile Communications standard, an internationally
accepted cell phone network specification that the European Telecommunications
Standards Institute created. Presently,
the 3rd Generation Partnership Project
maintains the SMS standard.
SMS messages are handled via a
short message service center (SMSC) that the
cellular provider maintains for the end
devices. The SMSC can send SMS
messages to the end device using a
maximum payload of 140 octets. This
defines the upper bound of an SMS
message to be 160 characters using
7-bit encoding. It is possible to specify
other schemes such as 8-bit or 16-bit
encoding, which decreases the maximum message length to 140 and 70
characters, respectively.
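The character limits follow directly from that 140-octet payload; a quick illustrative sketch (Python, not part of the SMS standard itself) makes the arithmetic explicit.

PAYLOAD_OCTETS = 140
PAYLOAD_BITS = PAYLOAD_OCTETS * 8          # 1,120 bits available per message

for name, bits_per_char in (("7-bit", 7), ("8-bit", 8), ("16-bit", 16)):
    print(name, PAYLOAD_BITS // bits_per_char, "characters")
# Prints 160, 140, and 70 characters, matching the limits above.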
Text messages can also be used for
sending binary data over the air.
Typically, specific applications on the
phone handle messages that contain
binary data—for example, to download ring tones, switch on and off animation, exchange picture messages,
or change the look and feel of the
handset’s graphical user interface.
The system can segment messages
that exceed the maximum length into
shorter messages, but then it must use
part of the payload for a user-defined
header that specifies the segment
sequence information.
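To make that overhead concrete, the sketch below (Python, illustrative only; it assumes the commonly used 6-octet concatenation header, a detail the article does not specify) shows how much 7-bit text fits in each segment once the user-defined header is reserved.

def max_chars_per_segment(udh_octets=0, bits_per_char=7):
    """Usable characters in one SMS part after reserving room for a user data header."""
    return ((140 - udh_octets) * 8) // bits_per_char

print(max_chars_per_segment())               # 160: a single, unsegmented 7-bit message
print(max_chars_per_segment(udh_octets=6))   # 153: per part of a concatenated message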
SMSCs operate in either a store-and-forward or a forward-and-forget
paradigm. In the store-and-forward
paradigm, the system resends the message for some period of time until it is
successfully received. In a forwardand-forget paradigm, the system sends
the message to the end device without
assurance of receipt or an attempt to
redeliver in the case of failure.
The SMS protocol stack comprises
four layers: the application layer, the
transfer layer, the relay layer, and the
link layer. Table 1 provides an example of a transfer protocol data unit.
SMS CONCEPTS AND
SYSTEM ARCHITECTURE
To fully understand how the various
components in the SMS system architecture interact, it is worthwhile to first
discuss a few SMS concepts and the role
of several independent system entities.
Short codes
A short code is an abbreviated number (four, five, or six digits) that is used
as an “address” for text messages.
Individual carriers can use short codes
that are valid only on their network or
are interoperable across network carriers (known as a common short code).
For example, a carrier might need to
send a text message to a subscriber
concerning changes in policy or phone
configuration, and it probably would
send such a message from a short code
only that carrier uses. On the other
hand, common short codes can be
used to send messages to and from
users on multiple cellular networks.
In the US, one entity, the CTIA
(www.usshortcodes.com), administers common short codes. Content
providers can lease a common short
code, thereby allowing subscribers to
text a keyword to the provider’s short
code, and the provider can respond
with information specific to the keyword used. For example, texting the
word “movie” to short code 90947
might result in a text message being
returned to the sender that contains
a list of movies showing at a local
theater.
Google offers a variety of interactive
SMS applications through its short
code 466453 (http://google.com/sms).
Typical SMS Applications

CONSUMER APPLICATIONS
• Person-to-person messaging (chat with friends)
• Interactive information services (text to get today's weather forecast)
• Entertainment services (download a ring tone)
• Location-based services (restaurant suggestions based on handset location)

CORPORATE APPLICATIONS
• Notification and alert services (emergency broadcast messages)
• Managing contacts, correspondence, and appointments (SMS integration with Microsoft Outlook)
• Vehicle location (bus tracking)

CELL OPERATOR APPLICATIONS
• Subscriber identity module updates—network operators can remotely update data stored on a mobile phone's SIM card (remotely update customer service profiles or address book entries)
• WAP push—push a URL of the content to be displayed, and a browser on the mobile phone presents the content to the subscriber (push a media-rich advertisement to a user and automatically display it within the mobile phone's browser)
Because short codes also have premium billing capabilities, wireless
subscribers can send text messages to
short codes to access and pay for a
variety of mobile content and services.
For example, if a user sends a text to
get a daily horoscope, a charge for
the content will appear on the next
phone bill.
E-mail gateways
In the US, most messages sent to
applications or services are addressed
to a short code instead of a standard
phone number. It’s possible to offer a
service through a cellular modem,
which is, in effect, a cell phone hooked
to a computer. However, cellular
modems are quite slow and not intended for production environments requiring any significant volume.
Table 1. Transfer protocol data unit: 07917283010010F5040BC87238880900F10000993092516195800AE8329BFD4697D9EC37

Octet(s): Description
07: Length of the SMSC information (in this case, 7 octets).
91: Type-of-address of the SMSC (91 means international format of the phone number).
72 83 01 00 10 F5: Service center number (in decimal semi-octets). The length of the phone number is odd (11), so a trailing F has been added to form proper octets. The phone number of this service center is "+27381000015."
04: First octet of this SMS-DELIVER message.
0B: Address length. Length of the sender number (0B hexadecimal = 11 decimal).
C8: Type-of-address of the sender number.
72 38 88 09 00 F1: Sender number (decimal semi-octets), with a trailing F ("+27838890001").
00: Protocol identifier (00 = SME-to-SME protocol—implicit).
00: Data coding field (00 = 7 bit, 01 = 8 bit, 10 = 16 bit, 11 = reserved).
99 30 92 51 61 95 80: Time stamp (semi-octets) in order (YY, MM, DD, HH, MM, SS, TIMEZONE in relation to GMT in units of 15 minutes). So, 0x99 0x30 0x92 0x51 0x61 0x95 0x80 means 29 Mar 1999 15:16:59 GMT+2.
0A: User data length: length of message. The data coding field indicated 7-bit data, so the length here is the number of septets (10). If the data coding field were set to indicate 8-bit data or Unicode, the length would be the number of octets (9).
E8329BFD4697D9EC37: Message "hellohello," 8-bit octets representing 7-bit data.

More details can be found at www.dreamfabric.com/sms.
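As an illustration of the table's last two rows, the following sketch (Python, written for this column rather than taken from the GSM specification) unpacks the 7-bit user data field back into "hellohello". It maps septets straight to ASCII, which is a simplification that happens to work for plain lowercase letters but not for the full GSM default alphabet.

def unpack_gsm_7bit(octets, num_septets):
    """Unpack GSM 7-bit packed user data (septets are stored least-significant-bit first)."""
    septets, acc, nbits = [], 0, 0
    for octet in octets:
        acc |= octet << nbits        # append 8 new bits above any leftover bits
        nbits += 8
        while nbits >= 7 and len(septets) < num_septets:
            septets.append(acc & 0x7F)
            acc >>= 7
            nbits -= 7
    return "".join(chr(s) for s in septets)   # simplification: treat each septet as ASCII

user_data = bytes.fromhex("E8329BFD4697D9EC37")
print(unpack_gsm_7bit(user_data, 0x0A))       # -> hellohello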
Figure 1. SMS system architecture. Rather than communicating directly with the various SMSCs, content providers go through a message aggregator. The message aggregator uses the SMPP to maintain connections with carrier networks.
Most carriers provide e-mail-to-SMS
gateways that receive e-mail messages
and convert them to SMS messages.
For example, an e-mail addressed to
11235551234@mycarrier.com could trigger an SMS message for 11235551234 on mycarrier's network.
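A minimal sketch of that path, using Python's standard smtplib (the sender address and SMTP relay are placeholders invented for illustration; the gateway address reuses the article's example), might look like this:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alerts@example.org"               # hypothetical sending account
msg["To"] = "11235551234@mycarrier.com"          # carrier's e-mail-to-SMS gateway address
msg["Subject"] = "Shuttle update"
msg.set_content("Route 3 is running 10 minutes late.")   # keep the body short enough for one SMS

with smtplib.SMTP("smtp.example.org") as server:  # hypothetical outgoing mail relay
    server.send_message(msg)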
Some companies offer bulk SMS
services and deliver their messages
through e-mail-to-SMS gateways.
However, not only are these e-mail
gateways one-way and unreliable, but
carriers also actively monitor them
and can block them at any time. Some
gateways automatically block messages originating from an IP address
when volume reaches a certain
threshold. This could cause serious
problems in the event of mass text
messages being sent for emergency
notification purposes.
SMS centers
When a user sends a text message to another user, the phone actually sends the message to the SMSC, which stores the message and then delivers it when the recipient is on the network. This is a store-and-forward operation. The SMSC usually has a configurable time limit for how long it will store the message, and users can usually specify a shorter time limit if they want.

Figure 1 depicts the general system architecture for sending and receiving SMS messages. Each service provider operates one or more SMSCs. Mobile users' SMS messages are sent through a wireless link via a cell tower to the SMSC. The SMSC access protocols enable interactions between two SMSCs or interactions between external short-message entities (SMEs) and an SMSC. SMEs are software applications on network elements (such as a mobile handset) or hardware devices that can send and receive short messages. The SMS Forum has adopted the Short Message Peer-to-Peer (SMPP) protocol to enable interactions between external SMEs and service centers that the different manufacturers maintain and operate.
Message aggregators
Typically, content providers do not
communicate directly with the various SMSCs, but instead go through a
broker, or message aggregator, as
Figure 1 shows. An aggregator is a business entity that negotiates agreements with network providers and acts as a middleman, giving third parties that have no direct relationship with the cellular network access to its messaging services. The message aggregator uses
the SMPP to maintain connections
with carrier networks. Aggregators
typically provide access to their
servers either through SMPP or using
custom APIs written in Java, PHP,
Perl, and so on.
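As a sketch of the second style (an HTTP-based API), the code below posts one outbound message to a purely hypothetical aggregator endpoint; the URL and parameter names are invented for illustration, since, as noted above, every aggregator defines its own interface.

import urllib.parse
import urllib.request

def send_via_aggregator(short_code, phone, text):
    """POST one mobile-terminated message to a (hypothetical) aggregator endpoint."""
    params = urllib.parse.urlencode({
        "shortcode": short_code,    # the leased common short code
        "to": phone,                # subscriber's phone number
        "message": text,
    }).encode()
    request = urllib.request.Request("https://api.example-aggregator.com/send", data=params)
    with urllib.request.urlopen(request) as response:
        return response.read()      # aggregator-specific status or delivery receipt

send_via_aggregator("90947", "11235551234", "Tonight at the campus theater: ...")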
Most aggregators will also manage
the rental of the common short code
for clients who do not want to deal
directly with the CTIA. Another
important function of the aggregator
is to assist with provisioning of the
short code on the various carriers.
Before a carrier allows the use of a
short code on its network, it requires
a complete description of how the
code will be used, including how subscribers will opt in and opt out of subscription services. The provisioning
process can take eight weeks or more.
Content providers
A mobile content provider is an
entity that provides value-added content and applications for mobile
devices. For example, when a mobile
phone user sends an interactive text
message to retrieve information, the
content provider returns the information (in this case, a text message back
to the user) through the aggregator.
The aggregator is responsible for
transmitting the message to the end
user. This second transmission actually masks two transmissions: first a
transmission from the aggregator to
the cellular provider, and then a transmission from the cellular provider to
the mobile handset.
BUILDING APPLICATIONS
We have established the hardware,
software, and network infrastructure
necessary to build and deploy advanced SMS applications. We have
registered a short code and established
a new company, Mobile Education, to
develop two-way SMS-based applications. We have formed this company
with the express intent of developing and marketing intellectual property that we are creating in partnership
with the University of North Carolina
Wilmington.
Mobile Education (www.mymobed.com) currently hosts the following
SMS-based applications for UNCW
students:
• a subscription service for daily
campus activities and events;
• an interactive system to request
and receive real-time shuttle bus
information;
• an application that integrates with
the student Banner system to allow
students to request and retrieve a
grade for a particular course;
• an interactive service to get campus
activity and event information,
specifically movie dates and times
at the campus theater; and
• a broadcast text-message-based
alert system for campus-originated
emergency information.
We have learned some valuable
lessons while establishing this new
business.
Vanity code or random code
The process of obtaining a short
code is different from country to country. In the US, obtaining a short code
requires choosing between a “vanity”
and a “random” code. Short codes that
spell a name, such as Google = 466453
or Hawks = 42957, are known as vanity short codes. Vanity codes cost
$1,000 per month. Random short
codes, which might or might not spell
a word, generally cost $500 per month.
Being only five- or six-digit numbers,
random short codes are still fairly easy
to remember, so it might or might not
be worth the extra $500 per month
cost to get a vanity code.
You can find out who has registered
many of the short codes assigned in the
US via the Common Short Code
Administration Directory (www.usshortcodes.com) or the US Common Short Code WHOIS Directory (www.usshortcodeswhois.com).
Fees and the
provisioning process
The fee to lease the common short
code simply guarantees that no one
else can use that code. A carrier must
provision a code before the code can
be used on its network. The provisioning process is complicated, expensive, and takes up to eight weeks.
During the provisioning process, the
carriers examine how their users will
interact with the applications associated with the short code. They are particularly interested in applications that
involve users subscribing to receive
messages on a regular basis. They
require double opt-in, which means
that the user requests the subscription
and then confirms. Users must be able
to send a message with the keyword
“stop” to indicate that they do not
want to receive any more messages.
The cost of provisioning a short
code will generally be between $2,000
and $4,000, including aggregator and
carrier setup fees. The monthly fee to
maintain service with an aggregator is
generally between $1,000 and $1,500.
Working with aggregators
There are significant differences
between aggregators, and you should
look closely at more than one before
choosing. Several companies that sell
aggregator services do not actually
maintain SMPP connections with carrier SMSCs, but go through another
aggregator instead.
The custom APIs that aggregators
provide differ widely—some require a
constant socket connection, while others use XML over HTTP and do not
rely on a constant connection. Some
aggregators have relatively inexpensive testing programs and allow you
to test their API on a demonstration
A
BEMaGS
F
short code. This means that you can
send messages that come from the
demo code, and if a message sent to
that code starts with a special keyword established for you, then the
message is routed to your server.
However, some aggregators do not
allow any testing with their servers
until you have signed a contract.
Message sending rates
There are also vast differences in the
rate at which you can send messages.
Some plans allow only two messages
per second, and some claim rates of
up to 100 per second. Some aggregators allow you to send a message to
users knowing only their phone number, while some require that you also
know their service provider.
Carrier rules and restrictions
Another important aspect of common short codes is that the mobile
content providers who register them
must agree to a long list of network
operator rules and restrictions. For
example, the following is a statement
for text message content from a typical short code registration agreement:
The following content is not
allowed: adult (swimsuit pictures
are ok), or any unlawful, harmful,
threatening, defamatory, obscene,
harassing, or racially, ethically or
otherwise objectionable content.
Services that facilitate illegal activity, gambling, promote violence,
promote discrimination, promote
illegal activities, or incorporate any
materials that infringe or assist others to infringe on any copyright,
trademark, or other intellectual
property rights are prohibited.
In many cases, network operators
even provide lists of inappropriate
words that cannot be used within the
content that traverses their networks.
They can decide to shut down your
short code if you breach the agreement you signed to gain access
to their network. These restrictions are considerably different from those that apply on the Internet.
Policy issues
Maintaining a common short code
requires working very closely with an
aggregator and being bound by the
limitations of its policies and software.
It’s an expensive, long-term commitment and should be thoroughly
researched.
The short message service has
emerged as one of the most popular wireless services. SMS is far
more than just a technology for teenage
chat. Mobile marketing campaigns are
already a very profitable business and
growing rapidly. Next-generation SMS
applications will incorporate location-based capabilities that are now being built into mobile handsets.
This will enable a new set of innovative services that are targeted and personalized, further refining mobile
advertising models and driving revenue
growth for carrier operators, aggregators, and mobile content providers. ■
Jeff Brown is a professor of mathematics
and statistics at the University of North
Carolina Wilmington and cofounder
of Mobile Education. Contact him at
brownj@uncw.edu.
Bill Shipman is a graduate student in
computer science and information systems at the University of North Carolina
Wilmington. Contact him at wjs6797@uncw.edu.
Ron Vetter is a professor of computer science at the University of North Carolina
Wilmington and cofounder of Mobile
Education. Contact him at vetterr@uncw.edu.
Computer welcomes your submissions to this bimonthly column. For additional information, or to suggest topics that you would like to see explained, contact column editor Alf Weaver at weaver@cs.virginia.edu.
SOFTWARE TECHNOLOGIES
Conquering
Complexity
Gerard J. Holzmann, NASA/JPL Laboratory for Reliable Software
In complex systems, combinations
of minor software defects can lead
to large system failures.
In his book Normal Accidents:
Living with High-Risk Technologies (Princeton University
Press, 1984), sociologist Charles
Perrow discussed the causes of failure in highly complex systems, concluding that they were virtually
inevitable. He argued convincingly that
when seemingly unrelated parts of a
larger system fail in some unforeseen
combination, dependencies can become
apparent that couldn’t have been
accounted for in the original design.
In safety-critical systems, the potential impact of each separate failure is
normally studied in detail and remedied by adding backups. Failure combinations, though, are rarely studied
exhaustively; there are just too many
of them, and most have a low probability of occurrence.
A compelling example in Perrow’s
book is a description of the events
leading up to the partial meltdown of
the nuclear reactor at Three Mile
Island in 1979. The reactor was carefully designed with multiple backups
that should have ruled out what happened. Yet, a small number of relatively minor failures in different parts
of the system conspired to defeat those
protections. A risk assessment of the
probability of the scenario that
unfolded would probably have concluded that it had a vanishingly small
chance of occurring.
No software was involved in the
Three Mile Island accident, but we
can draw important lessons from it,
especially in the construction of
safety-critical software systems.
SOFTWARE DEFECTS
We don’t have to look very far to
find highly complex software systems
that perform critically important functions: The phone system, fly-by-wire
airplanes, and manned spacecraft are
a few examples.
The amount of control software
needed to, say, fly a space mission is
rapidly approaching a million lines of
code. If we go by industry statistics, a
really good—albeit expensive—development process can reduce the number of flaws in such code to somewhere on the order of 0.1 residual
defects per 1,000 lines. (A residual
defect is one that shows up after the
code has been fully tested and delivered. The larger total number of
defects hiding in the code are often
referred to as the latent defects.)
Thus, a system with one million
lines of code should be expected to
experience at least 100 defects while
in operation. Not all these defects will
show up at the same time of course,
and not all of them will be equally
damaging.
BIG FAILURES OFTEN
START SMALL
Knowing that software components
can fail doesn’t tell us any more about
a software system than knowing that
a valve can get stuck in a mechanical
system. As in any other system, this is
only the starting point in the thought
process that should lead to a reliable
design.
We know that adding fault protection and redundancy can mitigate the
effects of failures. However, adding
backups and fault protection inevitably
also increases a system’s size and complexity. Designers might unwittingly
add new failure modes by introducing
unplanned couplings between otherwise independent system components.
Backups are typically designed to
handle independent component failures. In software systems, they can
help offset individual software defects
that could be mission-ending if they
strike. But what about the potential
impact of combinations of what otherwise would be minor failures?
Given the magnitude of the number
of possible failure combinations, there
simply isn’t enough time to address
them all in a systematic software testing process. For example, just 10² residual defects might occur in close to 10⁴ different combinations.
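A back-of-the-envelope check of that claim (Python; my own arithmetic, not the column's):

import math

residual_defects = 100                       # the 10^2 defects estimated earlier
print(math.comb(residual_defects, 2))        # 4,950 two-defect combinations, close to 10^4
print(sum(math.comb(residual_defects, k) for k in range(2, 5)))   # over four million once triples and quadruples are counted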
This, then, opens up the door to a
“Perrow-class” accident in increasingly complex software systems. As
before, the probability of any one specific combination of failures will be
extremely low, but as experience
shows, this is precisely what leads to
major accidents. In fact, most software failures in space missions can be
traced back to unanticipated combinations of otherwise benign events.
Minor software defects typically
have no chance of causing outright disaster when they occur in isolation, and
are therefore not always taken very
seriously. Perrow’s insight is that reducing the number of minor flaws can also
reduce the chances for catastrophic failures in unpredictable scenarios.
CODING STANDARDS
We can reduce the number of minor
defects in several ways. One way is to
adopt stricter coding rules, like the
ones I described in this column in
“The Power of Ten: Rules for
Developing Safety-Critical Code”
(Computer, June 2006, pp. 95-97).
Among these recommendations is the
required use of strong static source
code analyzers such as Grammatech’s
CodeSonar, or the analyzers from
Coverity or Klocwork, on every build
of the software from the start of the
development process. Another is to
always compile code with all warnings
enabled, at the highest level available.
Safety-critical code should pass such
checks with zero warnings, not even
invalid ones. The rationale is that such
code should be so clearly correct that
it confuses neither humans nor tools.
DECOUPLING
Another way to reduce Perrow-class
failures is to increase the amount of
decoupling between software components and thereby separate independent system functionality as much as
possible. Much of this is already standard in any good development process
and meshes well with code-structuring mechanisms based on modularity
and information hiding.
Many coding standards for safety-critical software development, such
as the automotive industry’s MISRA
C guidelines, require programmers to
limit the scope of declarations to the
minimum possible, avoiding or even
eliminating the use of global declarations in favor of static and constant
declarations—using, for example,
keywords such as const and enum
in C.
One of the strongest types of decoupling is achieved by executing independent functions on physically
distinct processors, providing only
limited interfaces between them.
High-end cars already have numerous embedded processors that each
perform carefully separated functions.
Mercedes-Benz S-Class sedans, for
example, reportedly contain about 50
embedded controllers, jointly executing more than 600,000 lines of code (K. Grimm, "Software Technology in an Automotive Company—Major Challenges," Proc. 25th Int'l Conf. Software Engineering, IEEE CS Press, 2003, pp. 498-503).

Many robotic spacecraft also use redundant computers, although these typically operate only in standby mode, executing the same software as the main computer. When the controlling computer crashes due to a hardware or software fault, the backup takes over.
This strategy, of course, offers limited protection against software
defects. In principle, decoupling could
be increased by having each computer
control a different part of the spacecraft, thereby limiting the opportunities for accidental cross-coupling in
the event of a failure. When the hardware for one of the computers fails,
the other(s) can still assume its workload, preserving protection against
hardware failures.
CONTAINMENT
The separation of functionality
across computers is also a good example of a defect containment strategy.
If one computer fails, it affects only
the functionality controlled by that
computer.
Another example of defect containment is the use of memory protection
to guarantee that multiple threads of
execution in a computer can’t corrupt
each other’s address spaces. Memory
protection is standard in most consumer systems, but it’s not always used
in embedded software applications.
For safety- and mission-critical systems, this should probably become a
requirement rather than an option.
MARGIN
Another common defense against
seemingly minor defects is to provide
more margin than is necessary for system operation, even under stress. The
extra margin can remove the opportunity for close calls, whereas temporary overload conditions can provide
one of the key stepping stones leading
to failure.
Margin doesn’t just relate to performance, but also to the use of, for
example, memory. It can include a
prohibition against placing mission-critical data in adjacent memory
locations. Because small programming defects can cause overrun
errors, it’s wise to place safety margins around and between all critical
memory areas. These “protection
zones” are normally filled with an
unlikely pattern so that the effects of
errant programs can be detected
without causing harm.
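The idea can be sketched in a few lines (Python is used here purely to illustrate the pattern; flight software would implement it in C over real memory regions):

GUARD = b"\xDE\xAD\xBE\xEF" * 4      # an "unlikely" fill pattern for the protection zones

class GuardedBuffer:
    """A critical data area surrounded by guard zones that can be audited."""
    def __init__(self, size):
        self.mem = bytearray(GUARD) + bytearray(size) + bytearray(GUARD)

    def data(self):
        return memoryview(self.mem)[len(GUARD):-len(GUARD)]

    def intact(self):
        # An overrun from a neighboring defect would disturb the fill pattern.
        return self.mem[:len(GUARD)] == GUARD and self.mem[-len(GUARD):] == GUARD

buf = GuardedBuffer(64)
buf.data()[:5] = b"hello"            # normal use stays inside the margins
assert buf.intact()                  # an audit task can check this periodically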
The recent loss of contact with the
Mars Global Surveyor spacecraft illustrates the need for these strategies. An
update in one parameter via a memory patch missed its target by one
word and ended up corrupting an
unrelated but critical earth-pointing
parameter that happened to be next
to the first parameter in memory. Two
minor problems now conspired to
produce a major problem and ultimately led to the loss of the spacecraft.
Had the key parameters been separated in memory, the spacecraft would
still be operating today.
LAYERING
The use of multiple layers of functionality—less gloriously known as
workarounds—is yet another way to
provide redundancy in safety-critical
code.
For example, most spacecraft have
a nonvolatile memory system to store
data; on newer missions, this is typically flash memory. Several independent means are used to store and
access critical data on these devices. A
spacecraft can often also function
entirely without an external file system, in so-called “crippled mode,” by
building a shadow file system in main
memory. The switch to the backup
system can be automated.
We can’t predict where software
defects might show up in flight, but we
can often build in enough slack to
maximize the chances that an immediately usable workaround is available
no matter where the bugs hide.
MODEL-BASED ENGINEERING
Perhaps the main lesson that can
be drawn from Perrow’s discussion
of complex systems is the need to
scrutinize even minor software defects to reduce the chances that combinations of minor flaws can lead to
larger failures.
The strategies mentioned so far deal
mostly with defect prevention and
defect containment. We’ve left the
most commonly used strategy for last:
defect detection.
Defect detection in software development is usually understood to be a
best effort at rigorous testing just
before deployment. But defects can be
introduced in all phases of software
design, not just in the final coding
phase. Defect detection therefore
shouldn’t be limited to the end of the
process, but practiced from the very
beginning.
Large-scale software development
typically starts with a requirements-capture phase, followed by design,
coding, and testing. In a rigorous
model-based engineering process,
each phase is based on the construction of verifiable models that capture
the main decisions.
Engineering models, then, aren’t
just stylized specifications that can be
used to produce nice-looking graphs
and charts. In a rigorous model-based
engineering process, requirements are
specified in a form that makes them
suitable to use directly in the formal
verification of design, for example
with strong model-checking tools such
as Spin (www.spinroot.com).
Properly formalized software
requirements can also be used to generate comprehensive test suites automatically, and they can even be used to
embed runtime monitors as watchdogs
in the final code, thereby significantly strengthening the final test phase (www.runtime-verification.org).
Finally, test randomization techniques can help cover the less likely execution scenarios so often missed even
in thorough software testing efforts.
Defect prevention, detection, and
containment are all important—
none of these techniques should
be skipped in favor of any of the others. Even if, for example, developers
exhaustively verify all the engineering
models used to produce a safety-critical software system, it’s still wise to test
the resulting code as thoroughly as possible. There are simply too many steps
in safety critical software development
to take anything for granted. ■
The research described in this article was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

Gerard J. Holzmann is a Fellow at NASA's Jet Propulsion Laboratory, California Institute of Technology, where he leads the Laboratory for Reliable Software. Contact him at gholzmann@acm.org.

Editor: Michael G. Hinchey, Loyola College in Maryland; mhinchey@loyola.edu
ENTERTAINMENT COMPUTING
Enhancing the
User Experience
in Mobile Phones
S.R. Subramanya and Byung K. Yi
LG Electronics North America R&D
To remain competitive, mobile-device vendors, developers,
and network operators must
provide end users with a rich
and satisfying experience.
With more than 2 billion
mobile phone subscribers already enrolled, and
some 2,000 different
mobile phone models in
service, mobile communication technology has seen unparalleled growth
and penetration. As Figure 1 shows,
this technology has swiftly outstripped
older rivals to become the most rapidly
pervasive and ubiquitous technology
to date.
However, the average revenue per
user for voice communication has
decreased steadily, and mobile operators have been increasing their investments in data and entertainment
services. In parallel with this trend, the
increasing hardware and software
capabilities of mobile phones are transforming them from simple voice communication devices to increasingly
complex carriers of computing and
data services such as messaging, e-mail,
and Web access. The entertainment
sector has seen comparable growth in
the mobile delivery of music, video,
TV, and games. Analysts expect both
trends to accelerate in the near future,
with global revenues from mobile
entertainment estimated to reach $37
billion by 2010 (Informa Telecoms and
Media; www.informatm.com).
To give some insight into the many functions mobile phones can fulfill, Figure 2 shows the major nonvoice applications used by mobile customers in the US during the first quarter of 2007 (www.mmetrics.com).
To drive and sustain this tremendous growth in mobile applications
and services, developers must understand mobile users’ needs. In addition
to improving core technologies, they
must also focus on providing end users
with a rich and satisfying mobile-communication experience. The factors
contributing to a good user experience
include interactions with mobile
devices and applications that are natural, intuitive, simple, pleasant, easy
to remember, and adaptive to individuals’ idiosyncrasies.
Our approach to enhancing the user
experience focuses on three dimensions.
Device-related issues deal with the
hardware features that facilitate ease
of use of devices and accessories.
Communications-related issues address
efforts to enrich person-to-person communications. Applications-related issues deal with enhancing the users' experience when interacting with mobile applications.

DEVICE-RELATED FACTORS
Device-related factors are concerned with features that are built into the device. These features must satisfy several constraints that stem from mobile phones' small form factor, while also accounting for the devices' cost and complexity.

Improving text input schemes
Text messaging is the most popular
mobile application, both in terms of
subscriber percentage and number of
messages sent. To provide a good
user experience, manufacturers must
develop a means to input text that is
easy, quick, error-resistant, and nonstressful.
Currently, standard mobile phone
keypads require multiple taps when
doing text input. The Fastap keypad
offers an attractive alternative. Developed by Digit Wireless (digitwireless.com), this technology neatly integrates
letters and punctuation keys around
the mobile phone’s standard numeric
keypad and is available on the LG
6190 phone. This keypad eliminates
multiple tapping and is at least twice
as fast as a standard keypad.
The Sony-Ericsson M600 uses an
innovative design that has each key
enabling and recognizing distinct
pushes—left, right, up, and down—to
provide a full QWERTY keyboard
using just the keys from a standard
mobile phone keypad. This facilitates
easy messaging and e-mail. The M600
also features touch screen and handwriting recognition.
Special hardware for gaming
The LG SV360 has a 1-million polygon-per-second graphics accelerator
chip that can process data much faster
than standard mobile phones. The
enhanced graphics let users play games
with more detailed and realistic 3D
graphics. The relatively larger 2.2-inch
LCD screen lets users play 3D games
with improved display quality. The
phone also sports acceleration sensors that let users interact more closely and provides them with better controls, a more realistic feeling, and greater game enjoyment.

Figure 1. Mobile technology's meteoric rise. Mobile phones have proliferated faster than any previous technology. (The chart plots penetration of the potential world market, in percent, against years for electricity, the telephone, radio, television, the VCR, the automobile, the PC, the Internet, and cellular.) Source: Joseph Jacobsen, Organizational and Individual Innovation Diffusion.

Figure 2. Mobile applications spectrum. This chart shows nine of the many functions US mobile phones provided during the first quarter of 2007 (millions of users): sent text message (81.18), used photo messaging (30.67), browsed news and information (20.48), purchased ringtone (20.00), used personal e-mail (17.34), used mobile instant messenger (13.80), used work e-mail (10.19), purchased wallpaper or screensaver (6.84), downloaded mobile game (6.75).
Pushing this trend still further,
Qualcomm is licensing the unified
shader architecture incorporated in
gaming-console graphics processor
units for rendering high-performance
graphics in their mobile-station modem
chipsets. This fusion will enable even
higher-end graphics for gaming applications on mobile phones.

Special hardware for music, video, and TV
As another example of mobile phones' increased versatility, some models now have music players that use simple thumb-wheel scroll buttons to control the player's volume, eliminating standard models' awkward navigation across several menus to reach the volume control. Users also can manipulate the thumb-wheel to navigate across menu items of other applications, which is easier than button presses. For example, it could also be used to flip channels in the mobile TV and control video playback.
Touch-sensitive screens
Touch-sensitive screens have been
deployed in a few recent phone models, including Apple’s iPhone (3.5”
screen with 320 × 480 pixel resolution), LG's KE850 (3" screen with 240 × 300 pixel resolution), and Samsung's F700 (2.8" screen with 240 × 440 pixel resolution). The touch
screen facilitates provision of a natural
interface for many of the phone’s applications. For example, for dialing and
receiving calls, it provides the traditional mobile phone keypad; for the
calculator application, it displays the
typical calculator keypad; and for the
music-player application, it displays
controls such as the Play/Pause, Fast-Forward, and Rewind buttons. It thus
provides the flexibility of a natural
interface suited to each application,
instead of forcing use of the phone’s
keypad to control several different
applications.
COMMUNICATIONS-RELATED
FACTORS
Enhancing the mobile-user experience for person-to-person communications broadly involves providing the
communicating parties with a feeling
of being as close to face-to-face communication as possible.
Providing a virtual vicinity
Providing video clips and background images of the conversing participants’ surroundings could provide
a feeling of sharing the same vicinity.
One application developed in this
area, SWIS (“see what I see”), lets
what one person sees be shared with
another as the two converse. This
requires voice communication with a
concurrent video session that allows
content sharing.
Techniques must be developed for
providing these capabilities cost-effectively by judiciously using combinations of available technologies
and bandwidths. One simple example
might be the appropriate use of prestored images and video clips, drawn
from earlier communications. Thus,
instead of transmitting this data again,
which would consume precious bandwidth, the system can seamlessly pull
it from the two parties’ respective
mobile phones.
Providing feelings of touch
Currently, mobile phones provide
only the auditory and visual dimensions
in communications. A natural next step
would be to provide feelings of touch.
For example, the development of technologies and techniques for providing
a virtual hug, handshake, or kiss—
depending on the relationship’s intimacy—would significantly enhance the
users’ communications experience.
Providing gestures
Gestures are unique to individuals.
Adding gestures in communications
would provide the communicating
parties with a sense of closeness. For
this to be feasible, researchers must
improve the technology for determining the gestures’ parameters, together
with schemes for gesture recognition
and classification. The gestures could
be constructed by receiving only elementary information, such as gesture
types and associated parameters, thus
using minimal bandwidth.
Differentiated communications
Communications between the calling and called parties could be differentiated based on their relationships, call times, and contexts. For example, supposing an intimate relationship between two parties, such as a husband and wife, and an appropriate time, such as outside scheduled meetings or business hours, the communications could include visual effects such as avatars, live video, or short video clips of the two parties.
APPLICATIONS-RELATED
FACTORS
In the person-(machine)-to-machine(person) part of mobile communications, the applications’ user interface
provides the most important layer
directly contributing to the user experience by hiding the complexities of the
underlying hardware and software.
The UI should adopt a user-centric
design that considers several parameters primarily from a user’s perspective.
For example, in addition to the mobile
devices’ constraints, design should take
into consideration mobile-user constraints such as limited attention span
while moving, changing locations and contexts, varying moods, limited use of hands, and expectations of quick and easy interactions.

Avoiding feature overload
Mobile applications developers should learn from the mistakes of PC applications developers and pay careful attention to mobile feature sets. In most PC applications, typically 90 percent of users never take advantage of about 90 to 95 percent of their applications' features. With mobile device resources being limited, the feature set must be minimized and should be derived through extensive use-case studies and diligent analysis. Subsequently, the distilled features should be implemented efficiently. Researchers must develop both quantitative and qualitative metrics of feature-effectiveness and incorporate them into feature design and implementation.

Hands-free usage facilities
The traditional modes of WIMP
(windows, icons, mouse, and pointing
device) interactions—developed for PC
applications—would not work well on
mobile devices. Mobile users’ limited
attention span, coupled with mobile
devices’ limited keypads, call for novel
interaction modes. For example, interactions should be carefully designed
using judicious combinations of audio
and visual cues, along with voice-activated commands.
Although several mobile phone
models provide voice input, these are
restricted to dialing and only a few
applications. The provision of voice-activated commands, which are less
error-prone and robust in noisy environments, facilitates increased use of
hands-free operations.
Personalization
Mobile devices are inherently personal. Users would prefer to have
unique mobile phone content, such as
ring tones, wallpaper, and themes.
Many times, users might expect that
their devices would provide the facilities
for creating such content themselves.
Ring-back tones have recently become popular. These are tunes and
music set by users, which callers hear
before the call is picked up. An extension of this could be that a user could
set different ring-back tones for different sets of callers. Personalization of
many interface facets would help users
perceive and react quickly to events on
the mobile phone. Personalization—
such as configurable menus based on
usage frequencies—also caters to user
idiosyncrasies.
Combining visual
and auditory stimuli
Using combinations of text, audio,
images, graphics, and video is an
effective means of conveying information. Interfaces should use judicious combinations of visual and
auditory stimuli and feedback. As a
specific example, associating distinguishing sounds for the different menu
selection classes would provide users
with auditory feedback that, together
with the visual stimulus of icons or
text, will likely speed up selections
and reduce errors. This is especially
beneficial in the mobile context when
users’ attention spans are shorter and
they cannot afford to stare at the
screen for as long as they could while
stationary. Any media imbalance or
overload should be avoided because it
would adversely affect ease of comprehension and use.
It is important to consider the user
experience while designing mobile
devices, applications, and user interfaces. The user experience should take
into consideration the constraints of
mobile devices as well as those of
mobile users. It is crucial to develop
models for the description and metrics
for the evaluation of the effectiveness
of the mobile user experience. This
opens up many exciting opportunities
for researchers and developers in these
areas. ■
S.R. Subramanya is a senior research scientist at LG Electronics North America R&D. Contact him at subra@lge.com.

Byung K. Yi is a senior executive VP at LG Electronics North America R&D. Contact him at byungkyi@lge.com.

Editor: Michael van Lent, vanlent@ict.usc.edu
INVISIBLE COMPUTING
Taking Online Maps
Down to Street Level
Luc Vincent, Google
Street View enables
simple navigation between
street-level images without
losing the map context.
Since MapQuest.com's creation
in 1996, online mapping systems have rapidly gained
worldwide popularity. With
Google Maps leading the
charge, new sophisticated AJAX
(Asynchronous JavaScript and XML)
mapping applications began appearing
in 2004. These geographic Web 2.0
applications, which continually add new
features and improvements, have made
online maps almost as essential to our
daily lives as the search engine.
A fairly recent offering of the best online mapping systems is aerial imagery. Such imagery takes the so-called GeoWeb to the next level, providing tremendous value to activities such as real estate sales, insurance, environmental management, municipal government, emergency services, and law enforcement, as well as engaging serendipitous Web users. Yet the imagery often is not detailed enough, and viewing buildings and streets from above can be disconcerting.

To address this limitation, Google launched the Street View feature of Google Maps in May 2007. The underlying idea is very simple: Provide an interface that can display street-level images in a natural way that enables convenient navigation between images without losing the map context.

Figure 1 shows a screen shot of the interface. To use Street View, you simply point your browser to http://maps.google.com, visit one of the 15 cities that currently has coverage, such as San Francisco or Portland, and move around by either clicking on the map or navigating from image to image.

Figure 1. Google Maps Street View interface.

EARLY RESEARCH EFFORTS
The idea behind Street View is not new. The earliest related project dates back to the late 1970s, when MIT's Andrew Lippman created a system
known as Movie Maps that let users
take virtual tours of the city of Aspen,
Colorado (A. Lippman, “Movie Maps:
An Application of the Optical Videodisc to Computer Graphics,” ACM
SIGGRAPH Computer Graphics, vol.
14, no. 3, 1980, pp. 32-42).
The Movie Maps data-capture system used a gyroscopic stabilizer and
four cameras mounted atop a car, with
an encoder triggering the cameras
every 10 feet. The system digitized the
captured panoramic imagery, organized it into several scenes, each covering approximately a city block, and
stored it on a laser disk. A user could
then virtually navigate Aspen streets
using the interface shown in Figure 2.
Several related systems subsequently emerged, with the pace of
development accelerating in the late
1990s and early 2000s. One noteworthy offering was BlockView in
Amazon's A9.com search engine.
Introduced in 2005, BlockView was
the first Web-based system to expose
large amounts of street imagery to
users. However, it was discontinued
less than two years later.
More recently, the Street View system itself went through several phases
before going online. Google cofounder
Larry Page bootstrapped the project by
personally collecting a sample of urban video footage from his car using a camcorder. This indirectly led to the idea of creating long, seamless images with somewhat unusual perspectives known as "pushbroom panoramas," an example of which is shown in Figure 3.
Figure 2. Movie Maps interface.
Figure 3. Example pushbroom perspective image.
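A toy version of the pushbroom idea is easy to sketch. The code below is illustrative only (the video file name and slit width are made up): it takes the same narrow vertical slit from every frame of sideways-looking video and concatenates the slits into one long strip, producing the characteristic pushbroom perspective.

# Toy pushbroom strip from sideways-looking drive-by video (hypothetical file name).
import cv2
import numpy as np

cap = cv2.VideoCapture("drive_by.avi")
slits = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mid = frame.shape[1] // 2
    slits.append(frame[:, mid:mid + 4])  # 4-pixel-wide slit from the frame center
cap.release()

if slits:
    panorama = np.hstack(slits)          # one long, seamless-looking strip
    cv2.imwrite("pushbroom.jpg", panorama)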
In 2006, then Stanford University
graduate student Augusto Román
developed techniques to create more
visually pleasing “multiperspective
panoramas” (A. Roman, “Multiperspective Imaging for Automated
Urban Imaging,” doctoral dissertation, Dept. Electrical Eng., Stanford
University, 2006, http://graphics.stanford.edu/papers/aroman_thesis).
However, such imagery was difficult
to generate and perhaps even harder
to use in an intuitive user interface, so
360-degree panoramas took center
stage and ultimately led to the current
implementation.
The Street View team also tied the
imagery to the map in a way that makes
navigation easy, whether from the map
or from the imagery itself. In particular,
the Street View avatar, often referred to
as “pegman,” rotates and moves along
with the map even when navigating
from the panoramic image itself.
However, Street View is not merely an
exercise in advanced interface design.
To collect and process the vast amounts
of imagery required for the application,
a fleet of custom vehicles such as the one
shown in Figure 4 is currently deployed
and busily capturing data that will
enable coverage well beyond the current
15 US cities. According to The World
Factbook: 2007, there are more than
19.4 million kilometers of paved roads
around the globe, so the Street View
data-collection team expects to be busy
collecting data for quite a while.
Figure 4. Street View custom data-collection vehicle.
MERGING IMAGERY WITH MAPS
Street View literally puts users in the driver's seat. One of the key technical challenges Street View addresses is merging street-level imagery with maps in a natural way. In addition, like most Web 2.0 applications, it works across numerous operating system and browser combinations.
Yet Street View is more than a pure AJAX application in that it adds Adobe Flash technology to the mix: The "picture bubble" is actually a Flash application that delivers a rich graphic experience without requiring use of a custom browser plug-in. This richness is best experienced by rotating the view right or left 360 degrees or zooming in on details such as parking signs.
OTHER CHALLENGES
Aside from interface design and data collection, Street View poses challenges related to fleet management, pose optimization, and computer vision.
Fleet management
When more than just a few cars are involved, there is a strong financial incentive to manage them as efficiently as possible. Fleet management is an active area of operations research and of great interest to the transportation industry (T.G. Crainic and G. Laporte, eds., Fleet Management and Logistics, Springer, 1998).
Managers and dispatchers can use operations research techniques to optimize the deployment of a fleet of data-acquisition vehicles. Local planning techniques can also ensure that drivers traverse each road segment as few times as possible, which is difficult given road graph constraints (L. Kazemi et al., "Optimal Traversal Planning in Road Networks with Navigational Constraints," Proc. 15th ACM Int'l Symp. Advances in Geographic Information Systems, ACM Press, 2007).
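The flavor of the underlying routing problem can be conveyed with a toy sketch. The code below is illustrative only: it assumes the open source networkx library's eulerize and eulerian_circuit helpers, uses a made-up road graph, and ignores real constraints such as one-way streets, turn restrictions, and multiple vehicles. It treats street coverage as a route-inspection ("Chinese postman") problem: duplicate as few road segments as possible so a single vehicle can drive every street and return to its start.

# Route-inspection toy example on a made-up road graph (edge weights in meters).
import networkx as nx

roads = nx.Graph()
roads.add_weighted_edges_from([
    ("A", "B", 300), ("B", "C", 200), ("C", "D", 250),
    ("D", "A", 400), ("B", "D", 150), ("A", "C", 350),
])

# Duplicate some edges so every intersection has even degree; a weight-optimal
# plan would solve the full route-inspection problem instead.
eulerized = nx.eulerize(roads)
route = list(nx.eulerian_circuit(eulerized, source="A"))
driven = sum(roads[u][v]["weight"] for u, v in route)
print(f"{len(route)} segments driven, {driven} m total "
      f"(vs. {roads.size(weight='weight')} m of unique road)")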
Pose optimization
When capturing imagery, it is essential to know as accurately as possible
each image’s location and the camera’s
orientation. However, the Global
Positioning System alone is typically
insufficient to provide this data, especially in cities with many “urban
canyons,” streets flanked with dense
tall buildings.
Our solution is to equip the vehicles
with additional sensors such as rate
gyros, accelerometers, and wheel encoders to capture vehicle speed and
acceleration at a high sampling rate.
Software such as James Diebel's Trajectory Smoother (http://ai.stanford.edu/~diebel/smoother.html) or the
open source Google Pose Optimizer
(http://code.google.com/p/gpo) can
then be used offline to compute an
improved pose estimate from these
measurements.
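The flavor of that offline computation can be conveyed with a minimal, hypothetical sketch (it is not the Trajectory Smoother or the Google Pose Optimizer): dead-reckon a 2D track from wheel-encoder speed and gyro yaw rate, then solve a linear least-squares problem that keeps consecutive increments close to the odometry while pulling the track toward sparse, noisy GPS fixes.

# Hypothetical offline pose-smoothing sketch: odometry dead reckoning + sparse GPS.
import numpy as np

def smooth_trajectory(speed, yaw_rate, dt, gps_idx, gps_xy, w_odo=10.0, w_gps=1.0):
    n = len(speed)
    heading = np.cumsum(yaw_rate * dt)                       # integrate gyro for heading
    step = np.stack([speed * dt * np.cos(heading),
                     speed * dt * np.sin(heading)], axis=1)  # odometry increments
    rows, rhs = [], []
    for i in range(n - 1):                                   # x[i+1] - x[i] ~= step[i]
        r = np.zeros(n)
        r[i], r[i + 1] = -w_odo, w_odo
        rows.append(r)
        rhs.append(w_odo * step[i])
    for k, i in enumerate(gps_idx):                          # x[i] ~= gps_xy[k]
        r = np.zeros(n)
        r[i] = w_gps
        rows.append(r)
        rhs.append(w_gps * gps_xy[k])
    xy, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return xy                                                # smoothed (n, 2) positions

# Toy run: a gentle arc at 10 m/s, with a GPS fix (3 m noise) every 50 samples.
dt, n = 0.1, 500
speed = np.full(n, 10.0)
yaw_rate = np.full(n, 0.02)
gps_idx = np.arange(0, n, 50)
heading = np.cumsum(yaw_rate * dt)
true_xy = np.cumsum(np.stack([speed * dt * np.cos(heading),
                              speed * dt * np.sin(heading)], axis=1), axis=0)
gps_xy = true_xy[gps_idx] + np.random.normal(0, 3.0, (len(gps_idx), 2))
print(smooth_trajectory(speed, yaw_rate, dt, gps_idx, gps_xy)[-1])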
Computer vision
Creating 360-degree panoramas by
stitching together a series of digital pictures is not a new problem, and many
excellent panorama-stitching packages are available, such as Autostitch (M. Brown, "Autostitch: A New Dimension in Automatic Image Stitching," www.cs.ubc.ca/~mbrown/autostitch/autostitch.html).
However, it is one thing to manually create a few panoramas and quite
another to automatically generate millions of attractive panoramas across a
wide variety of lighting conditions.
Solutions involve a pipeline of complex algorithms for camera calibration, vignetting correction, color
correction and balancing, image alignment, stitching, blending, and so on.
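For a sense of the individual-panorama step, the sketch below uses OpenCV's high-level Stitcher on a handful of overlapping frames. It is illustrative only: the image directory is made up, and this is not Street View's production pipeline, which adds the calibration, vignetting, color, and blending stages listed above, tuned to run over millions of panoramas.

# Stitch one panorama from overlapping frames with OpenCV (hypothetical input path).
import glob
import cv2

paths = sorted(glob.glob("capture_run_001/*.jpg"))
frames = [img for img in (cv2.imread(p) for p in paths) if img is not None]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    # Typical failures at scale: too little overlap or too few features (blank walls, glare).
    print(f"Stitching failed with status {status}")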
A project like Street View requires complex engineering but surprisingly little groundbreaking research. What makes it uniquely challenging is its intrinsic scale. This probably explains why few of the dozens of
similar projects have reached Street
View’s still rather modest size. However,
as more users and organizations appreciate the value of street-level imagery,
similar services are sure to emerge and
accelerate innovation in this space.
Such imagery will become increasingly integrated with the overall online
map experience and might extend
beyond streets to include interiors, real
estate, national parks, and more. In
fact, not even the sky is the limit: The
Street View panoramic viewer is featured in the recently updated Google
Moon service (http://moon.google.com) to showcase pictures taken during various Apollo missions. ■
Luc Vincent leads several geographic-related projects at Google. Contact him at luc@google.com.
Editor: Bill N. Schilit, Google; bill.schilit@computer.org.
THE PROFESSION
Making Computers Do More with Less
Simone Santini, Universidad Autónoma de Madrid
The ideal computer will keep things simple, small, and efficient.
My work these days consists mostly of doing mathematics. I don't use the computer often, except for writing articles, answering e-mail, and reading the online version of some newspapers. I still do some programming, but recently my students have been doing most of it for me.
Still, people know I work with computers and, for most of my friends and relatives, the boundary between a university professor and a system administrator blurs beyond perception. Thus, they often ask me to help solve some complicated computer issue and, being basically a good guy, I try to help them.
This is how I came to spend the best part of last weekend helping a friend untangle a rather complicated mess involving, among other villains, a file exchange program, a firewall, a printer driver, and—in the role of the master villain it plays so well—the operating system manufactured by a well-known company headquartered in the northwestern corner of the US.
The problem, which involved multiple, overlapping, and frustratingly idiosyncratic interactions among programs, drove me up the wall. Several times I proclaimed, quite loudly and in terms not suitable for American national TV, what these people should do with their operating system. I remember at least once making an oath to abstain forever from any further interaction with computers.
Alas, abstain I can't, since these days I find it impossible to be a professional in any field unless I'm willing to interact with the frustrating machines I incautiously chose to associate myself with professionally.
COMPUTING FOR WORKERS
Modern programs are disastrously unreliable artifacts, but they needn't be that way. In no other complex device would we accept a degree of unreliability comparable to that found in software. If we inaugurated a new building only to find that disconnecting the elevator causes all the fifth-floor faucets to turn on, or that changing a light bulb in the basement results in a minor collapse in the east wing, we would probably hang the architect by the toes. That we accept this and worse when it comes to software doesn't speak well of our profession's technical culture: We seem to be the first professionals for whom quality is a secondary concern. It might not, however, be too late to do something about it.
If someone out there itches to design a new operating system, computer system, or application program, I submit my idea for how I would like to see it done. The computer and programs I envision are meant to be for people who use a computer as a work instrument, not as a toy or pastime. When I have some spare time, I walk, attend the theater or cinema, go to a party, or have a drink with friends. If you see me with my hands on a computer keyboard, it means I am working, and when I work I don't want to be entertained by colorful programs: I want a fast and efficient system to do whatever I need done, in the shortest time possible, so I can get on with my life.
Alas, these days no major operating system appears designed completely around the guidelines I propose.
MAKE IT SIMPLE, MAKE IT SMALL
My ideal operating system's main design guideline mandates keeping things simple, small, and efficient. Eliminate all unnecessary features and then eliminate half of what is left. Einstein said that we should do things as simply as possible, yet no simpler. Fortunately, with computer programs, things can be kept pretty simple and still be functional.
First, design a simple, small, fast kernel to deal with secondary memory backup, persistent core memory allocation, and process management, à la Unix, and with some basic network functions. The kernel should be small and easy to install, the rest of the operating system equally so. An operating system should never need to occupy more than 50 Mbytes of secondary storage, but we can grant it twice as much: 100 Mbytes. An operating system this size, with similarly small programs, can run a whole computer on a 500-Mbyte memory stick, with abundant memory to spare, so that somebody can finally build a laptop that weighs less than a pound and has a battery life of 20 hours.
Magnetic disks and CDs should be
external devices that can easily be
attached to the computer, but that the
computer can do without.
We should make the operating system as efficient as possible so that, contrary to the rather vulgar tendency
these days, it can run on slower rather
than faster CPUs. Good programs
should run at an acceptable speed on
a 50-MHz CPU. A slower CPU means less power consumption and a longer battery life, which, if the computer is used for work, we need far more than semitransparent menus or fading shadows below and to the right of windows. Frankly, the assumption that today's computers need a 1-GHz machine to make a menu appear in a reasonable time is ridiculous.
DO WE REALLY NEED COLOR?
We are not awestruck teenagers, so
we don’t need shaded and embossed
windows. Simple, possibly black-and-white windows would be more than
adequate. I am not an expert in these
areas, but if a black-and-white screen
can save some power, add a couple of
hours to battery life, and take away a
few pounds, then I welcome it. For
people who absolutely need color, it
shouldn’t be hard to add this option
to the operating system. The designer
could also add a switch to turn off the
computer. A real one, I mean, with
two positions separated by a click. I
am tired of unplugging the computer
or the battery every time the operating system crashes disastrously.
On the other hand, we must avoid
building the computer too small: If the
intended user does a lot of writing, a
small screen becomes a pain, a small
keyboard even more so. I usually carry
many papers, so I have no incentive to
make the computer smaller than an A4
or letter-size sheet. All the simplification a simpler operating system affords
should be directed toward making the
computer very thin and light.
DIE, INSTALLATION, DIE!
Installing a program should only require copying a bunch of files into a directory; uninstalling it should only require removing those files. This is how a user installs emacs (http://directory.fsf.org/project/emacs), which
provides a good model for how things
should be done. There should be no
dependencies between programs. Few
things annoy more than installing program A only to discover program B
must be installed first, which in turn
requires program C, and so on until the
user runs out of patience, letters of the
alphabet, or both.
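A minimal sketch of that philosophy, with made-up paths and program names (this is not an existing package manager), is shown below: installation is a directory copy, uninstallation a directory removal, and nothing else is touched.

# Hypothetical sketch: install = copy a directory, uninstall = remove it.
import shutil
from pathlib import Path

APPS_DIR = Path.home() / "apps"          # made-up install location

def install(program_dir: str) -> Path:
    """Copy the program's files into place; no registry, no dependencies."""
    src = Path(program_dir)
    dest = APPS_DIR / src.name
    shutil.copytree(src, dest)
    return dest

def uninstall(program_name: str) -> None:
    """Remove the program's directory and nothing else."""
    shutil.rmtree(APPS_DIR / program_name)

# Usage, assuming ./myeditor-1.0 holds the program's files:
#   install("./myeditor-1.0")
#   uninstall("myeditor-1.0")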
The programs should follow the
operating system’s philosophy: Be
small and run fast. Using half a Gbyte
for a word processor and a few other
things is sheer madness. Overloading
the program with features few people
will ever use is even worse. Interfaces
based on menus and buttons might be
fine for the casual user, but for the
professional, they are too slow. They
should be offered as an option, but for
the serious user there is nothing faster than keyboard commands, which don't require hand movements such as grasping the mouse and performing complex motions.
I challenge anyone to use MS Word
to write a mathematical paper faster
than I can with LaTeX. Many computer programs do have shortcuts, but
they are too clumsy and limited to be
of use. MS Word, for example, makes
it impossible to typeset an equation or
place a figure using just the keyboard.
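To make the point concrete, the fragment below is a minimal LaTeX source (purely illustrative; the figure file name is made up) that typesets an equation and places a figure entirely from the keyboard:

\documentclass{article}
\usepackage{graphicx}  % for \includegraphics
\begin{document}

The well-known relation is
\begin{equation}
  E = mc^2
  \label{eq:emc2}
\end{equation}
as shown in Figure~\ref{fig:setup}.

\begin{figure}[ht]
  \centering
  \includegraphics[width=0.6\linewidth]{setup-diagram}  % hypothetical figure file
  \caption{Experimental setup.}
  \label{fig:setup}
\end{figure}

\end{document}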
Presentation software raises a delicate issue, with undertones that
border on the religious. Edward R.
Tufte, in his The Cognitive Style of
PowerPoint: Pitching Out Corrupts
Within (Graphics Press, 2006), argued
cogently that presentation software
has degraded the quality of presentations. So the simpler the presentation
software the better. Of all the presentations with animations I have seen—
including my own youthful efforts—
every single one would have been the
better for removing the animation and
all colors but two.
Compatibility should be enforced
only when it doesn’t obstruct speed
and simplicity. For a word processor,
for example, the MS Word .doc format is too cumbersome: Using it condemns the word processor to
inefficiency. Using the .rtf or TeX formats will provide all the necessary features with far fewer complications.
Unless there is some good reason for
it—such as the Internet—compatibility is just a hassle. Let us not forget
that any good innovation came about
because somebody did things differently from how they were done until
then, compatibility be damned.
IT’S THE VACCINE
THAT KILLS YOU
The operating system should prevent programs from loading anything
from the Web into the computer unless
the user explicitly tries to download
such software. Most programs I use
these days have the unfortunate mandate to keep downloading upgrades,
patches, and fixes without telling me
anything, to the point that I now fear
the programs I have been forced to
install more than I fear any virus.
Putting something in my computer
without my explicit request is tantamount to trespassing and squatting:
In the unfortunate absence of legal
measures against software manufacturers, the operating system should
block any unrequested download. If
this means living without cookies, so
be it: I am fairly sure that we will survive. If this means we must live without the razzmatazz of Web sites that
look like an LSD flashback, even better. We will definitely be better off
without them.
Having described what I would like to see in a computer, I admit that many of its characteristics,
such as the search for minimal memory occupancy and maximal speed,
might appear to make more sense for
portable than desktop computers.
Consider, then, the following two
points.
First, these days more and more
people use a portable as their main
computer. So, why not design an operating system that, instead of being
basically a desktop operating system
with which laptops must live, provides
a laptop operating system with which
desktop computers must live?
Second, by forcing programmers to design code that minimizes memory occupancy and maximizes processing speed, we will get better-quality code.
Programmers, quite unsurprisingly,
seem to produce just that when guided
by something other than an impossible delivery schedule and continuous
requests for features. This, at least, is
the lesson we might be inclined to
draw by comparing the quality of, say,
embedded programs with the things
that usually run on our PCs or, even
worse, on the Internet. A simpler
operating system architecture, proper
algorithm design, and a more careful
implementation—one that gives priority to processing speed and memory
occupancy—would likely create a better operating system. ■
Simone Santini is an associate professor at the Universidad Autónoma de Madrid, Spain. Contact him at simone.santini@uam.es.
Editor: Neville Holmes, School of Computing, University of Tasmania; neville.holmes@utas.edu.au. Links to further material are at www.comp.utas.edu.au/users/nholmes/prfsn.