
IEEE SPECTRUM | FOR THE TECHNOLOGY INSIDER | NOVEMBER 2023

A Robot for Humanity: The quest for assistive robots that truly empower their users. Henry Evans sees robots as "the best hope for significant independence."

It's Time for Wind Turbines That Float: Companies vie to harness the best wind. P. 30
Are You Prepared for Your Digital Afterlife? Data is forever. We are not. P. 38
Better Design With Generative AI: A robotics engineer gives it a try. P. 44
VOLUME 60 / ISSUE 11
FEATURES
22 A Robot for Humanity: It gives users—and their caregivers—much needed independence. By Evan Ackerman
30 Buoyant Behemoths: Here's why we need wind turbines that float. By Peter Fairley
38 The Creepy New Digital Afterlife Industry: Our data outlives us, and that's when things can get weird. By Wendy H. Wong
44 Generative AI for Better Design: AI image generators helped me imagine a better robot. By Didem Gürdür Broo

DEPARTMENTS
2 Editor's Note. Chatbot: A new podcast features the humans behind the robots.
6 News: Fusion Without Neutrons; Quantum-Safe Keys; Graphene Water Sensor
16 Hands On: Learn system-level architecture with the Cerberus 2100.
19 Careers: Kyle Clark journeys from hockey enforcer to eVTOL entrepreneur.
21 5 Questions: Luke Tan uses green hydrogen to make whisky.
52 Past Forward: Birth of the Office Cubicle

ON THE COVER: Photo by Peter Adams; illustration by Harry Campbell
EDITOR’S NOTE
BY HARRY GOLDSTEIN
Our Chatbot podcast features roboticists
in conversation with each other
Senior Editor Evan Ackerman [at left] meets
a new robot from Disney Research at the IROS
robotics conference in Detroit last month.
W
listening,” Ackerman says, “because I’ll be as excited
as you are to see how each episode unfolds.”
We think this unique format gives the listener
the inside scoop on aspects of robotics that only the
roboticists themselves could get each other to
reveal. Our first few episodes are already live. They
include Skydio CEO Adam Bry and the University
of Zurich professor Davide Scaramuzza talking
about autonomous drones, Labrador Systems CEO
Mike Dooley and iRobot chief technology officer
Chris Jones on the challenges domestic robots face
in unpredictable dwellings, and choreographer
Monica Thomas and the Robotics, Automation, and
Dance Lab’s Amy LaViers discussing how to make
Boston Dynamics’ robot dance.
We have more Chatbot episodes in the works, so
please subscribe on whatever podcast service you
like, listen and read the transcript on our website,
or watch the video versions on Spectrum’s YouTube
channel. While you’re at it, subscribe to our other
biweekly podcast, Fixing the Future, where we talk
with experts and Spectrum editors about sustainable
solutions to climate change and other topics of interest. And we’d love to hear what you think about our
podcasts: what you like, what you don’t, and especially who you’d like to hear on future episodes.
hen IEEE Spectrum editors are putting together an issue of the magazine,
a story on the website, or an episode
of a podcast, we try to facilitate dialogue about technologies, their development, and
their implications for society and the planet. We feature expert voices to articulate technical challenges
and describe the engineering solutions they’ve
devised to meet them.
So when Senior Editor Evan Ackerman cooked
up a concept for a robotics podcast, he leaned hard
into that idea. Ackerman, the world’s premier robotics journalist and author of this month’s inspiring
cover story, “A Robot for Humanity” [p. 22], talks
with roboticists every day. Recording those conversations to turn them into a podcast is usually a
straightforward process, but Ackerman wanted to
try something a little bit different: bringing two
roboticists together and just getting out of the way.
“The way the Chatbot podcast works is that we
invite a couple of robotics experts to talk with each
other about a topic they have in common,” Ackerman explains. “They come up with the questions,
not us, which results in the kinds of robotics conversations you won’t hear anywhere else—uniquely
informative but also surprising and fun.”
Each episode focuses on a general topic the
roboticists have in common, but once they get to
chatting, the guests are free to ask each other about
whatever interests them. Ackerman is there to make
sure they don’t wander too far into the weeds,
because we want everyone to be able to enjoy these
conversations. “But otherwise, I’ll mostly just be
2
SPECTRUM.IEEE.ORG
NOVEMBER 2023
“They come
up with the
questions,
not us,
which results
in the kinds
of robotics
conversations
you won’t
hear
anywhere
else—
uniquely
informative
but also
surprising
and fun.”
CORRECTIONS: The article “Iron Fuel Shows Its Mettle”
[News, October] had several inaccuracies. Iron+ should
properly be characterized as a startup and Metalot is
an innovation center. Also, Philip de Goey is not the chief
technical advisor at iron fuel technology company RIFT.
The company’s three founders are De Goey’s former
students, but he has no formal role at RIFT. IEEE Spectrum
regrets the errors.
PORTRAIT BY SERGIO ALBIAC; IEEE SPECTRUM
Robots and the
Humans Who
Make Them
CONTRIBUTORS

DIDEM GÜRDÜR BROO
Broo is an assistant professor in
the department of information
technology at Uppsala University,
in Sweden, where she leads
the Cyber-physical Systems Lab,
directing research on designing
sustainable and human-centric
intelligent systems, such as
collaborative robots, autonomous
vehicles, and smart cities. In this
issue, Broo describes her experiments
using AI image generators for
engineering design [p. 44]. She
continues to use these tools and
shares some of her designs on
Instagram under @generative.robots.

HARRY CAMPBELL
Campbell created the illustrations
for the book excerpt in this issue
on the “digital afterlife industry”
[p. 38]. He specializes in vector art,
which is based on the use of lines.
The resulting illustrations are often
intricately detailed and precise.
The notion of reconstructing the
deceased from data they’ve left
behind inspired Campbell’s ghostly
“Dad” on our Contents page. “It’s not
that farfetched to think that in the
near future we’ll have 3D holograms
of our loved ones,” he comments.
PETER FAIRLEY
Fairley, a contributing editor to IEEE
Spectrum, has been tracking energy
technologies and their environmental
implications for over two decades.
In “Buoyant Behemoths” [p. 30], he
describes the global race to develop
and deploy gigawatts’ worth of
floating wind power. He started
his reporting at the Floating Offshore
Wind Turbines conference in May.
Fairley was “blown away
to see over 1,300 people working
on what was a tiny niche area when
I first covered it 15 years ago," he says.

WENDY H. WONG
This issue’s article about the digital
afterlife industry [p. 38] was adapted
from Wong’s new book We, the
Data: Human Rights in the Digital
Age. Wong, a professor of political
science at the University of British
Columbia, is fascinated by the
implications of services that could
allow people to participate in the
online world after they’re dead. This
new reality, she says, challenges
“the ways that human communities
have developed to deal with the
fact that we’re not all here forever.”
EDITOR IN CHIEF Harry Goldstein, h.goldstein@ieee.org
EXECUTIVE EDITOR Jean Kumagai, j.kumagai@ieee.org
MANAGING EDITOR Elizabeth A. Bretz, e.bretz@ieee.org
CREATIVE DIRECTOR
Mark Montgomery, m.montgomery@ieee.org
DIRECTOR OF DIGITAL INNOVATION
Erico Guizzo, e.guizzo@ieee.org
EDITORIAL DIRECTOR, CONTENT DEVELOPMENT
Glenn Zorpette, g.zorpette@ieee.org
SENIOR EDITORS
Evan Ackerman (Digital), ackerman.e@ieee.org
Stephen Cass (Special Projects), cass.s@ieee.org
Samuel K. Moore, s.k.moore@ieee.org
Tekla S. Perry, t.perry@ieee.org
Philip E. Ross, p.ross@ieee.org
David Schneider, d.a.schneider@ieee.org
Eliza Strickland, e.strickland@ieee.org
ART & PRODUCTION
DEPUTY ART DIRECTOR Brandon Palacio, b.palacio@ieee.org
PHOTOGRAPHY DIRECTOR Randi Klett, randi.klett@ieee.org
ONLINE ART DIRECTOR Erik Vrielink, e.vrielink@ieee.org
PRINT PRODUCTION SPECIALIST
Sylvana Meneses, s.meneses@ieee.org
MULTIMEDIA PRODUCTION SPECIALIST
Michael Spector, m.spector@ieee.org
NEWS MANAGER Margo Anderson, m.k.anderson@ieee.org
ASSOCIATE EDITORS
Willie D. Jones (Digital), w.jones@ieee.org
Michael Koziol, m.koziol@ieee.org
SENIOR COPY EDITOR Joseph N. Levine, j.levine@ieee.org
COPY EDITOR Michele Kogon, m.kogon@ieee.org
EDITORIAL RESEARCHER Alan Gardner, a.gardner@ieee.org
EDITORIAL INTERN Gwendolyn Rak, g.rak@ieee.org
CONTRACT SPECIALIST Ramona L. Foster, r.foster@ieee.org
AUDIENCE DEVELOPMENT MANAGER
Laura Bridgeman, l.bridgeman@ieee.org
CONTRIBUTING EDITORS Robert N. Charette, Steven Cherry, Charles Q. Choi, Peter Fairley, Edd Gent, W. Wayt Gibbs, Mark Harris, Allison Marsh, Prachi Patel, Julianne Pepitone, Lawrence Ulrich, Emily Waltz
THE INSTITUTE
EDITOR IN CHIEF Kathy Pretz, k.pretz@ieee.org
ASSOCIATE EDITOR Joanna Goodrich, j.goodrich@ieee.org
DIRECTOR, PERIODICALS PRODUCTION SERVICES Peter Tuohy
ADVERTISING PRODUCTION MANAGER
Felicia Spagnoli, f.spagnoli@ieee.org
ADVERTISING PRODUCTION +1 732 562 6334
EDITORIAL ADVISORY BOARD, IEEE SPECTRUM
Harry Goldstein, Chair; Robert Schober, Ella M. Atkins, Sangyeun
Cho, Francis J. “Frank” Doyle III, Hugh Durrant-Whyte, Matthew
Eisler, Shahin Farshchi, Alissa Fitzgerald, Benjamin Gross, Lawrence
O. Hall, Daniel Hissel, Jason K. Hui, Michel M. Maharbiz, Somdeb
Majumdar, Lisa May, Carmen S. Menoni, Ramune Nagisetty, Paul
Nielsen, Sofia Olhede, Christopher Stiller, Mini S. Thomas, Wen Tong,
Haifeng Wang, Boon-Lock Yeo
EDITORIAL ADVISORY BOARD, THE INSTITUTE
Kathy Pretz, Chair; Qusi Alqarqaz, Stamatis Dragoumanos,
Jonathan Garibaldi, Madeleine Glick, Lawrence O. Hall,
Harry Goldstein, Francesca Iacopi, Cecilia Metra, Shashi Raj Pandey,
John Purvis, Chenyang Xu
MANAGING DIRECTOR, PUBLICATIONS Steven Heffner
DIRECTOR, BUSINESS DEVELOPMENT,
MEDIA & ADVERTISING Mark David, m.david@ieee.org
EDITORIAL CORRESPONDENCE
IEEE Spectrum, 3 Park Ave., 17th Floor,
New York, NY 10016-5997 TEL: +1 212 419 7555
BUREAU Palo Alto, Calif.; Tekla S. Perry +1 650 752 6661
ADVERTISING INQUIRIES Naylor Association Solutions,
Erik Albin +1 352 333 3371, ealbin@naylor.com
REPRINT SALES +1 212 221 9595, ext. 319
REPRINT PERMISSION / LIBRARIES Articles may be
photocopied for private use of patrons. A per-copy fee must
be paid to the Copyright Clearance Center, 29 Congress St.,
Salem, MA 01970. For other copying or republication, contact
Managing Editor, IEEE Spectrum.
COPYRIGHTS AND TRADEMARKS
IEEE Spectrum is a registered trademark owned by The Institute
of Electrical and Electronics Engineers Inc. Responsibility for the
substance of articles rests upon the authors, not IEEE, its organizational
units, or its members. Articles do not represent official positions of
IEEE. Readers may post comments online; comments may be excerpted
for publication. IEEE reserves the right to reject any advertising.
IEEE BOARD OF DIRECTORS
PRESIDENT & CEO Saifur Rahman, president@ieee.org
+1 732 562 3928 Fax: +1 732 981 9515
PRESIDENT-ELECT Thomas M. Coughlin
TREASURER Mary Ellen Randall
SECRETARY Forrest “Don” Wright
PAST PRESIDENT K.J. Ray Liu
VICE PRESIDENTS
Rabab Ward, Educational Activities; Sergio Benedetto,
Publication Services & Products; Jill I. Gostin, Member &
Geographic Activities; John P. Verboncoeur, Technical Activities;
Yu Yuan, President, Standards Association;
Eduardo F. Palacio, President, IEEE-USA
DIVISION DIRECTORS
Franco Maloberti (I); Kevin L. Peterson (II); Khaled Ben Letaief (III);
Alistair P. Duffy (IV); Cecilia Metra (V); Kamal Al-Haddad (VI);
Claudio Cañizares (VII); Leila De Floriani (VIII); Ali H. Sayed (IX);
Stephanie M. White (X)
REGION DIRECTORS
Greg T. Gdowski (1); Andrew D. Lowery (2); Theresa A. Brunasso (3);
Vickie A. Ozburn (4); Bob G. Becnel (5); Kathy Hayashi (6); Robert
L. Anderson (7); Vincenzo Piuri (8); Enrique A. Tejera (9); ChunChe
“Lance” Fung (10)
DIRECTOR EMERITUS Theodore W. Hissey
IEEE STAFF
EXECUTIVE DIRECTOR & COO Sophia A. Muirhead
+1 732 562 5400, s.muirhead@ieee.org
ACTING CHIEF INFORMATION OFFICER Priscilla Amalraj
+1 732 562 6017, j.prescila@ieee.org
CHIEF HUMAN RESOURCES OFFICER Liesel Bell
+1 732 562 6347, l.bell@ieee.org
GENERAL COUNSEL & CHIEF COMPLIANCE OFFICER
Anta Cissé-Green +1 212 705 8927, a.cisse-green@ieee.org
CHIEF MARKETING OFFICER Karen L. Hawkins
+1 732 562 3964, k.hawkins@ieee.org
PUBLICATIONS Steven Heffner
+1 212 705 8958, s.heffner@ieee.org
CORPORATE ACTIVITIES Donna Hourican
+1 732 562 6330, d.hourican@ieee.org
MEMBER & GEOGRAPHIC ACTIVITIES Cecelia Jankowski
+1 732 562 5504, c.jankowski@ieee.org
STANDARDS ACTIVITIES Konstantinos Karachalios
+1 732 562 3820, constantin@ieee.org
EDUCATIONAL ACTIVITIES Jamie Moesch
+1 732 562 5514, j.moesch@ieee.org
CHIEF FINANCIAL OFFICER Thomas R. Siegert
+1 732 562 6843, t.siegert@ieee.org
TECHNICAL ACTIVITIES Mary Ward-Callan
+1 732 562 3850, m.ward-callan@ieee.org
MANAGING DIRECTOR, IEEE-USA Russell T. Harrison
+1 202 530 8326, r.t.harrison@ieee.org
IEEE PUBLICATION SERVICES & PRODUCTS BOARD
Sergio Benedetto, Chair; Stefano Galli, Maria Sabrina Greco,
Lawrence O. Hall, James Irvine, Charles M. Jackson, Clem Karl,
Yong Lian, Fabrizio Lombardi, Aleksandar Mastilovic, Anna
Scaglione, Gaurav Sharma, Isabel Trancoso, Peter Winzer,
Weihua Zhuang
IEEE OPERATIONS CENTER
445 Hoes Lane, Box 1331, Piscataway, NJ 08854-1331 U.S.A.
Tel: +1 732 981 0060 Fax: +1 732 981 1721
IEEE SPECTRUM (ISSN 0018-9235) is published monthly by
The Institute of Electrical and Electronics Engineers, Inc. All rights
reserved. © 2023 by The Institute of Electrical and Electronics
Engineers, Inc., 3 Park Avenue, New York, NY 10016-5997, U.S.A.
Volume No. 60, Issue No. 11. The editorial content of IEEE Spectrum
magazine does not represent official positions of the IEEE or its
organizational units. Canadian Post International Publications Mail
(Canadian Distribution) Sales Agreement No. 40013087. Return
undeliverable Canadian addresses to: Circulation Department,
IEEE Spectrum, Box 1051, Fort Erie, ON L2A 6C7. Cable address:
ITRIPLEE. Fax: +1 212 419 7570. INTERNET: spectrum@ieee.org.
ANNUAL SUBSCRIPTIONS: IEEE Members: $21.40 included in dues.
Libraries/institutions: $399. POSTMASTER: Please send address
changes to IEEE Spectrum, c/o Coding Department, IEEE Service
Center, 445 Hoes Lane, Box 1331, Piscataway, NJ 08855. Periodicals
postage paid at New York, NY, and additional mailing offices.
Canadian GST #125634188. Printed at 3401 N. Heartland Dr, Liberty,
MO 64068, U.S.A. Printed at Spenta Multimedia Pvt. Ltd., Plot 15,
16 & 21/1, Village Chikhloli, Morivali, MIDC, Ambernath (West), Dist.
Thane. IEEE Spectrum circulation is audited by BPA Worldwide. IEEE
Spectrum is a member of the Association of Business Information
& Media Companies, the Association of Magazine Media, and
Association Media & Publishing. IEEE prohibits discrimination,
harassment, and bullying. For more information, visit https://www.
ieee.org/about/corporate/governance/p9-26.html.
THE LATEST DEVELOPMENTS IN TECHNOLOGY, ENGINEERING, AND SCIENCE
ENERGY
Five New Fusion
Prospects, Minus
the Neutrons
Promise of nonlingering radiation fuels next-gen nuclear quest
BY TOM CLYNES

Interest in fusion energy is surging today in response to the world's desperate need for abundant clean power. At least 43 private companies are now pursuing the goal of safely fusing two atomic nuclei to form a heavier nucleus while releasing energy. Nevertheless, the standard deuterium-tritium (D-T) reaction at the core of fusion reactors comes loaded with big, long-term problems.
Deuterium and tritium are hydrogen
isotopes that fuse at lower temperatures
and release more energy than other reactions. But they also yield a superflux of
neutrons, mandating complex (and still
unperfected) containment technologies to
keep the neutron radiation from wrecking
reactor walls, supportive infrastructure,
and nearby living things.
A new breed of maverick fusioneers is
aiming to solve the neutron problem. Their
approach is to swap D-T fuels for readily available elements that, when fused,
release energy that’s carried by charged
particles, instead of neutrons. Proponents
of this method, aneutronic fusion, argue
that the devices will ultimately be easier to
build and better suited to power systems,
since it will be easier to convert the energy
of charged particles into electricity. They
also produce little or no radioactive waste.
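For reference, here is how the reaction products compare; these are standard textbook reaction energies, added here for context rather than figures quoted in the article:

```latex
% Approximate energy release and products of the main fusion fuel cycles
\begin{align*}
\mathrm{D} + \mathrm{T} &\rightarrow {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV}) && \text{most energy leaves on a fast neutron} \\
p + {}^{11}\mathrm{B} &\rightarrow 3\,{}^{4}\mathrm{He} + 8.7\ \mathrm{MeV} && \text{aneutronic; energy carried by charged alpha particles} \\
\mathrm{D} + {}^{3}\mathrm{He} &\rightarrow {}^{4}\mathrm{He} + p + 18.3\ \mathrm{MeV} && \text{primary branch is neutron-free}
\end{align*}
```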
“There was a lot of work in what we
then called ‘advanced fuels’ from the 1960s
through the 1980s,” says Gerald Kulcinski,
a nuclear engineer and professor emeritus
at the University of Wisconsin. The work
fell out of favor, he says, “because it’s about
10 times harder to produce that reaction
than it is the D-T reaction. But in the last
decade or so, people have started to think
more and more about advanced fuels,
because of how much damage neutrons
can do to [a reactor’s] first walls.”
Hydrogen-Boron
TAE Technologies, formerly known as Tri Alpha Energy, has the most established private aneutronic fusion program. The company launched in 1998
and is now capitalized at about US $1.25
billion, according to CEO Michl Binderbauer. TAE’s approach calls for fueling
its reactions with hydrogen and boron,
a mix also known as p-B11. When fused,
hydrogen-boron releases three positively charged helium-4 nuclei, known
as alpha particles.
The TAE design confines plasma—
fuel so hot that electrons are stripped
away from the atoms, forming an ionized
gas—via a technique called a field-reversed configuration (FRC). In an FRC,
the plasma contains itself mostly in its
own magnetic field, rather than relying
on an externally applied field.
TAE’s cylindrical linear research
reactor, dubbed Norman, is capped on
each end by inward-facing electromagnetic plasma cannons, which accelerate
rings of plasma into a central chamber.
There, the rings combine to create a
single cylindrical plasma, stabilized
by a beam of neutral atoms coming in
from the sides. These beams also heat
the plasma and supply it with fresh
fuel. TAE’s power-plant design would
deposit heat in the containment vessel’s
walls and convert it to steam to drive
a turbine using a conventional thermal-conversion system.
“It’s a superelegant beast,” says Binderbauer. “In typical magnetic-confinement designs, about 60 percent of the cost
of the machine is the cost of the magnets.
If you can make the most of your magnetic field with the plasma itself, it gives
you a huge advantage economically.”
[Photo caption] The TAE C-2W reactor (also known as Norman) represents a fifth-generation iteration on the promise of neutron-free—or aneutronic—fusion. Unveiled in 2017, Norman has sustained plasmas up to 75 million degrees Celsius, 250 percent higher than its original goal.
But FRCs have historically proved
to be unruly: If the plasma misbehaves, the confining magnetic field also
disintegrates and the plasma cools.
­Binderbauer’s team has spent the past
decade researching means to stabilize
the plasma. In recent years, the company
has developed methods and hardware
to reshape and reposition the plasma in
real time, taking advantage of advances
in artificial intelligence and machine
learning.
“We now have that stability,” Binderbauer says. “We can manipulate
these currents and keep them steady
and stable. We get beautiful magnetic
fields, behaving exactly the way they are
predicted.”
There’s another significant downside
to burning hydrogen-boron fuel to create
fusion energy: It requires extreme temperatures, more than 3 billion degrees
Celsius—20 or 30 times as high as the
temperatures required for a deuterium-tritium reaction. The traditional
thinking among many physicists is that,
at these temperatures, the electrons will
radiate so much that they’ll cool the
plasma faster than it can be heated.
Binderbauer counters that the electrons will be the main carrier of the
energy out of the plasma, but the temperature of those electrons is clamped
by relativistic effects. “Since the 1990s
we’ve done extremely sophisticated
work and published a bunch of peer-reviewed papers. Others have measured
these things and found that there is no
catastrophic radiative cooling that kills
the state.”
Betting on a Rare Isotope
Ten-year-old Helion Energy also plans
to use a field-reversed configuration in
the plant it is building in Everett, Wash.
But instead of hydrogen-boron, the company is placing its bets on a helium-3 and
deuterium fuel cycle.
Unfortunately, helium-3 is extremely
rare—accounting for just 0.0001 percent
of available helium on Earth—and is
extremely expensive to produce. Helium-3
could eventually be mined on the surface
of the moon, where an estimated 1.1 million tonnes exist. But instead of building a
spaceship, Helion plans to breed helium-3
in its machine via deuterium-deuterium
side reactions. Thus far, the company has produced only a very small amount of helium-3, but it intends to use "a patented high-efficiency closed-fuel cycle" to increase helium-3 output.

[Photo caption] The Norman reactor's central cylindrical fusion chamber [seen from above] is enmeshed in a maze of wires, magnets, and optics—all in service of the ambitious goal of sustainable nuclear fusion power.
“D-helium-3 could be the stopgap
step between deuterium-tritium and
p-B11,” says Kulcinski, “since the reaction requires a temperature of several
hundred million degrees, in between
deuterium-tritium and p-B11.”
The D-helium-3 reactions aren’t
completely aneutronic, but they release
only about 5 percent of their energy in
the form of fast neutrons. That won’t
completely eliminate the complications
of radiation damage, but it will reduce
them significantly.
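The deuterium-deuterium side reactions that Helion is counting on to breed helium-3 come in two roughly equally likely branches; the values below are standard published figures, added for reference:

```latex
% D-D side reactions, each branch occurring about half the time
\begin{align*}
\mathrm{D} + \mathrm{D} &\rightarrow {}^{3}\mathrm{He}\,(0.82\ \mathrm{MeV}) + n\,(2.45\ \mathrm{MeV}) \\
\mathrm{D} + \mathrm{D} &\rightarrow \mathrm{T}\,(1.01\ \mathrm{MeV}) + p\,(3.02\ \mathrm{MeV})
\end{align*}
```

The first branch supplies the helium-3; its 2.45-MeV neutron, along with any tritium from the second branch, is the main reason the cycle is only mostly, rather than fully, aneutronic.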
Helion’s device, like TAE’s, will be a
cylinder capped with opposing plasma
cannons. Rather than attempting to
create a sustained reaction, the machine’s
plasma guns would pulse about once a
second, the company says, creating a stationary FRC in the center and condensing
the plasma with a magnetic field until it
becomes hot and dense enough to fuse.
As the energy is released, the plasma
will push outward against the magnetic
field, allowing the system to harvest the energy of the charged particles directly through magnetic coils.
“These are innovations that are on
the margins,” says Matthew J. Moynihan, a nuclear engineer and fusion consultant to investors. “Both ramping up
the frequency of the pulsed approach
and breeding helium-3 are going to be
challenging to do on a scale that’s going
to be needed for a viable power plant.”
To create the pulses, the Helion device
will depend on large banks of capacitors
that will store a whopping 50 megajoules
of energy and discharge it in less than a
millisecond—over and over again.
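A back-of-the-envelope calculation using the figures above gives a sense of the pulse power involved:

```latex
% Average power during a single sub-millisecond discharge of the capacitor banks
P = \frac{E}{t} \gtrsim \frac{50\ \mathrm{MJ}}{1\ \mathrm{ms}} = 5 \times 10^{10}\ \mathrm{W} = 50\ \mathrm{GW}
```

That is, tens of gigawatts delivered for a fraction of a millisecond, roughly once per second.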
Despite this technical hurdle and
others, Helion lined up its first customer for a power plant that it says will
go on line in 2028. The company recently
finalized an agreement with Microsoft to
provide at least 50 megawatts of electricity—enough for a factory or data center—
after a one-year ramp-up period.
Many in the fusion-energy community dismissed the deal as a publicity stunt, or
at best an overoptimistic reach for a company that has yet to demonstrate a net
energy gain from its reactions. But these
days, optimism is growing in an industry
that is racing to solve the climate crisis—
with or without neutrons.
CYBERSECURITY
Google Develops Quantum-Safe Security Keys
Professional-grade authentication method gets a makeover for the quantum age
BY TAMMY XU

[Photo caption] The FIDO2 hardware security key has become an increasingly popular alternative to password-based authentication in IT. This is one such key, made by the startup Yubico, based in Santa Clara, Calif. Now Google has developed a FIDO2 key that it says is resilient to quantum-computer-based cyberattacks.

There's a race on to update the
cybersecurity infrastructure
before quantum computers
become capable of cracking
the current standards. Now Google has
developed a quantum-resilient way of
implementing the FIDO2 security-key
standard, an increasingly popular
method of authentication that’s used as
an alternative to passwords.
Security keys, like passwords, help
users prove their identity so they can
authenticate to digital services. But
unlike passwords, security keys are
unlikely to be compromised because
they’re physical devices built for the sole
purpose of performing authentication.
They are the size of USB sticks, and they
plug into secondary devices like laptops
when users need to perform authentication. Security keys are resistant to
phishing attacks because they work in
two directions: They help users authenticate services, and they authenticate
users to services. Because authentication
happens on a separate device that’s engineered to be hard to compromise, these
keys are generally quite secure.
“Whenever you have a website
that supports FIDO2 authentication,
you can use your security key," said quantum-security researcher Tommaso
Gagliardoni, who works at Kudelski
Security. “It’s still a very small number
of people who are using that, but among
security professionals, I think they are
becoming more and more common.”
Services are slowly adding support
for security keys, starting with the big
operators like Google, Microsoft, and
Facebook. Drawbacks include their cost (most other forms of authentication are free) and the risk that users will misplace their keys and need to replace them.
Public-key cryptography is the technology that makes security keys possible,
by providing the proof-of-identity logic
to authenticate users and services using
digital signatures. That technology is also
what makes security keys vulnerable to
quantum attacks, because ultimately,
quantum computers will break all current forms of public-key cryptography,
researchers say.
Google’s implementation uses one of
the post-quantum cryptography algorithms approved by the National Institute
of Standards and Technology (NIST) for
standardization last year. The algorithm,
called Dilithium, is designed specifically
for digital signatures. Because Dilithium
is not yet an official standard and has not
long been in use under real-world conditions, Google took a hybrid approach
that combines a traditional public-key
cryptography algorithm with Dilithium
for authentication.
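The combining rule itself is simple, and a short sketch makes it concrete. This is not Google's OpenSK code; the signer objects below are toy stand-ins for whatever traditional algorithm and Dilithium implementation a real key would use. The point is only that a hybrid signature is a pair of signatures over the same message, and verification succeeds only if both parts check out:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# A "scheme" here is just a sign/verify pair; a real device would wrap a
# traditional public-key signer and a Dilithium signer from actual crypto code.
@dataclass
class SignatureScheme:
    sign: Callable[[bytes], bytes]
    verify: Callable[[bytes, bytes], bool]

@dataclass
class HybridSigner:
    classical: SignatureScheme      # stand-in for the traditional algorithm
    post_quantum: SignatureScheme   # stand-in for Dilithium

    def sign(self, message: bytes) -> Tuple[bytes, bytes]:
        # The hybrid signature is simply both signatures over the same message.
        return self.classical.sign(message), self.post_quantum.sign(message)

    def verify(self, message: bytes, sig: Tuple[bytes, bytes]) -> bool:
        # Authentication succeeds only if BOTH schemes accept, so the key
        # stays secure as long as at least one of the two remains unbroken.
        c_sig, pq_sig = sig
        return (self.classical.verify(message, c_sig)
                and self.post_quantum.verify(message, pq_sig))

# Toy stand-ins so the sketch runs; these are NOT real signature algorithms.
import hmac, hashlib

def _mock_scheme(key: bytes) -> SignatureScheme:
    tag = lambda m: hmac.new(key, m, hashlib.sha256).digest()
    return SignatureScheme(sign=tag,
                           verify=lambda m, s: hmac.compare_digest(tag(m), s))

if __name__ == "__main__":
    signer = HybridSigner(_mock_scheme(b"classical-key"),
                          _mock_scheme(b"post-quantum-key"))
    msg = b"FIDO2 authentication challenge"
    assert signer.verify(msg, signer.sign(msg))
    print("hybrid signature verified")
```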
Gagliardoni said that Google’s biggest contribution is in finding a way to
optimize the Dilithium algorithm so that
it can run on the hardware of a typical
security key, which has limited memory
and processing power.
"If you take the implementation of the quantum-resistant scheme as it is published by NIST and you try to put it in hardware, it will not work because it will require too much memory," he said.

To make it work, Google reduced the amount of memory Dilithium is supposed to run on in exchange for a slightly slower operation. David Turner, senior director of standards development at FIDO Alliance, which manages password-free authentication standards, said post-quantum changes to security keys are expected to come with challenges. In order to create a more secure connection, new algorithms could increase the complexity of authentication protocols and require more time to process the authentication.

Google's implementation still lacks protection against side-channel attacks, Gagliardoni said. That's where hackers break the cryptography by gaining direct physical access to the security keys. A stereotypical side-channel attack might involve a hacker breaking into the hotel room of a target, hacking into the security key they'd left unguarded on a desk, stealing the target's digital signature, and then leaving the key intact without the target ever knowing, he said. Google's implementation ignores those types of local threats and focuses only on remote attacks—which makes some sense because it would be difficult to sneak a quantum computer into a hotel room.

The implementation was released through Google's open-source project for security keys, OpenSK. Many platforms that rely on public-key cryptography will soon need to make the transition to post-quantum algorithms, particularly platforms that handle highly sensitive encrypted information and important services that have long life-spans, such as satellites. Services and data with long life-spans are vulnerable to quantum attacks even if threats take decades to materialize, which is why they should be prioritized. Security keys can be in use for many years but are only just gaining in popularity, so they are a good early choice for transitioning.

There will be many more transitions like this in the years to come, including Google's recent work with Transport Layer Security in the Chrome browser.

SEMICONDUCTORS
Graphene Sensor Makes Safe Drinking Water More Affordable
AI teases out signals indicating levels of bacteria and heavy metals
BY PRACHI PATEL

[Photo caption] Using a graphene field-effect transistor (FET), this sensor chip is designed to affordably and rapidly test drinking water samples for contaminants like heavy metals and bacteria.

Hundreds of thousands of people die from drinking unsafe water every year, according to the World Health Organization. For example, diarrhea transmitted from bacterial contamination is estimated to cause over 500,000 deaths annually. Toxic heavy metals in drinking water, such as arsenic, lead, and mercury, also pose huge health risks. And climate change will only exacerbate the risks of water-related diseases, according to WHO.

Sensors that can accurately and quickly detect such contaminants could prevent many waterborne illnesses and deaths. Now, engineers have developed a path to mass-manufactured, high-performance graphene sensors that can detect heavy metals and bacteria in flowing tap water. Because of its price point—US $10 per unit, now with expectations that economies of scale will reduce the cost further—this advance allows people to test their drinking water for toxins at home.

The sensors have to be extraordinarily sensitive to catch the minute concentrations of toxins that can cause harm. For example, the U.S. Food and Drug Administration states that bottled water must have a lead concentration of no more than five parts per billion.

Today, detecting parts-per-billion or even parts-per-trillion concentrations of heavy metals,
bacteria, and other toxins is possible
only by analyzing water samples in the
laboratory, says Junhong Chen, a professor of molecular engineering at the
University of Chicago and the lead water
strategist at Argonne National Laboratory. But his group has developed a
sensor with a graphene field-effect transistor that can detect toxins at those low
levels within seconds.
The sensor is based on a nanometers-thick semiconducting graphene
oxide sheet, which acts as the channel
between the source and drain electrodes
in a FET; a gate electrode controls current through the channel. The graphene
sheets are deposited on a silicon wafer,
and then gold electrodes are printed
on the sheets, followed by a nanometer-thick insulating layer of aluminum
oxide to separate the gate electrode from
the semiconducting channel.
The researchers attach chemical and
biological molecules to the graphene
surface that will bind with the desired targets—in this case E. coli bacteria and the heavy metals lead and mercury. When even the tiniest amount of the contaminants attaches to the graphene,
its conductivity changes, with the magnitude of change correlating to the concentrations of the toxins.
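The article doesn't give the calibration math, but the basic step of turning a measured conductance shift into an estimated concentration can be sketched as below. The calibration points and the log-linear response are illustrative assumptions for the sketch, not data from Chen's group:

```python
import math

# Hypothetical calibration: relative conductance change measured at known lead
# concentrations (parts per trillion). A real sensor would be calibrated
# against reference solutions; these numbers are invented for illustration.
CALIBRATION = [(10, 0.002), (100, 0.011), (1_000, 0.052), (10_000, 0.19)]

def estimate_concentration_ppt(delta_g_over_g: float) -> float:
    """Interpolate a concentration from the relative conductance change,
    assuming the response is roughly linear in log(concentration)."""
    pts = [(math.log10(c), r) for c, r in CALIBRATION]
    if delta_g_over_g <= pts[0][1]:          # below calibrated range
        return 10 ** pts[0][0]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if delta_g_over_g <= y1:             # interpolate within this segment
            frac = (delta_g_over_g - y0) / (y1 - y0)
            return 10 ** (x0 + frac * (x1 - x0))
    return 10 ** pts[-1][0]                  # clamp above calibrated range

if __name__ == "__main__":
    reading = 0.03   # e.g., a 3 percent change in channel conductance
    print(f"estimated concentration: {estimate_concentration_ppt(reading):.0f} ppt")
```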
The device uses an array with three different sensors, one for each contaminant,
to measure parts-per-trillion concentrations in flowing water. Machine-learning
algorithms help differentiate among the
contaminants, Chen says. “Its response is
very fast, just like any other FET, so you
can see results right away,” he says. “Also,
it is potentially low cost because FET is
a cost-effective and scalable technology
[that’s already used] in computers, laptops, and cellphones.”
Manufacturing sensors with reliable, consistent performance was a
major challenge, he says. That’s because
the insulating aluminum oxide layer
can have defects that trap charges and
degrade performance.
So Chen and his colleagues came up
with a way to detect defective devices using
a nonintrusive process. While the sensors
are immersed in water, the researchers test
them using impedance spectroscopy—a
technique that involves applying an AC
voltage at frequencies ranging from a few
hertz to a few tens of thousands of hertz—
and measuring the current through the
devices. This lets them detect structural defects in the aluminum oxide.

[Diagram caption] The three graphene FET sensors [purple and yellow] are housed in a sensor chamber through which the water to be tested passes, courtesy of a piezoelectric motor. A low-cost CPU processes the sensors' signals and determines whether contaminants are present in substantial amounts in the solution. Source: Maity, A., et al., Nature Communications 14, 2023.
“On each wafer you would have hundreds of sensor chips,” Chen says. “In
future manufacturing, we can introduce
this quality-control step to screen out
bad devices and pick out the good-quality devices.”
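A production-line version of that screen might look something like the sketch below; the reference curve and tolerance are invented for illustration, not values from the team:

```python
# Toy quality-control screen: compare a chip's impedance magnitude, measured at
# several AC frequencies while immersed in water, against a reference curve and
# flag outliers that suggest charge-trapping defects in the aluminum oxide.

REFERENCE_OHMS = {10: 1.2e6, 100: 4.0e5, 1_000: 9.0e4, 10_000: 2.0e4}  # assumed
TOLERANCE = 0.25  # allow 25 percent deviation at each frequency (assumed)

def passes_screen(measured_ohms: dict[int, float]) -> bool:
    """Return True only if every measured point sits within tolerance of the reference."""
    for freq_hz, z_ref in REFERENCE_OHMS.items():
        z = measured_ohms.get(freq_hz)
        if z is None or abs(z - z_ref) / z_ref > TOLERANCE:
            return False
    return True

if __name__ == "__main__":
    good = {10: 1.1e6, 100: 4.2e5, 1_000: 8.5e4, 10_000: 2.1e4}
    bad  = {10: 1.1e6, 100: 4.2e5, 1_000: 3.0e4, 10_000: 2.1e4}  # defect signature
    print("good chip passes:", passes_screen(good))
    print("bad chip passes:", passes_screen(bad))
```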
The team is now trying to commercialize the technology through a startup
called NanoAffix Science. “The first
product we hope to introduce is a handheld device that allows people to test
drinking water quality directly from the
tap,” Chen says.
The device would have a replaceable
one-time-use graphene sensor. While the
sensor costs about $10 right now, with
scale-up it should eventually come down
to $1, he says. His team is also studying
ways to remove the contaminants from
the graphene to make the sensors reusable. “In principle, it is doable,” Chen says.
“In the future, you could imagine this type
of sensor on faucets or water meters to
continuously monitor water quality.”
The team’s research was reported
in a recent issue of the journal Nature
Communications.
ARTIFICIAL INTELLIGENCE
Cerebras Introduces a 2-Exaflop AI Supercomputer
Condor Galaxy 1 is the start of a nine-system, 36-exaflop network
BY SAMUEL K. MOORE

Not very long ago, the following statement would
have sounded like the
tagline for a sci-fi movie:
“Generative AI is eating the world.”
These six words are how Andrew
Feldman, CEO of the Silicon Valley
AI computer maker Cerebras,
began his introduction to his company’s AI supercomputer, capable
of 2 billion billion operations per
second (2 exaflops). Cerebras is on
track to double the size of the
system, called Condor Galaxy 1, this
month. In early 2024, it will be
joined by two more full-size systems. The Silicon Valley company
plans to keep adding Condor
Galaxy installations next year until
it is running a network of nine
supercomputers capable of 36 exaflops in total.
If large language models and
other generative AI are eating the
world, Cerebras’s plan is to help
them digest it. And the Sunnyvale,
Calif., company is not alone. Other
makers of AI-focused computers
are building massive systems
around either their own specialized processors or Nvidia’s latest
GPU, the H100. While it’s difficult
to judge the size and capabilities
of most of these systems, Feldman claims that Condor Galaxy 1
is already among the largest.
Condor Galaxy 1—assembled
and started up in just 10 days—is
made up of 32 Cerebras CS-2 computers and is set to expand to 64.
The next two systems, to be built in
Austin, Texas, and Asheville, N.C.,
will also house 64 CS-2s each.
The heart of each CS-2 is the
Wafer-Scale Engine-2, an AI-
specific processor with 2.6 trillion
transistors and 850,000 AI cores
made from a full wafer of silicon.
The chip is so large that the scale of
memory, compute resources, and
other stuff in the new supercomputers quickly gets a bit ridiculous.
One of Cerebras’s biggest advantages in building big AI supercomputers is its ability to scale up
resources simply, says Feldman.
For example, a 40-billion-parameter network can be trained in
about the same time as a 1-billion-
parameter network if you devote
40-fold more hardware resources
to it. Importantly, such a scale-up
doesn't require additional lines of code. Demonstrating linear scaling has historically been very troublesome because of the difficulty of dividing up big neural networks so they operate efficiently. "We scale linearly from one to 32 [CS-2s] with a keystroke," he says.

[Photo caption] At 32 CS-2 nodes, Condor Galaxy 1 has twice the number of AI compute nodes seen here. By the end of November, the total will have doubled to 64.
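A toy calculation makes the linear-scaling claim concrete. The per-unit constant below is arbitrary; the point is the proportionality Feldman describes, not an actual Cerebras benchmark:

```python
def training_time(params_billion: float, cs2_count: int,
                  time_per_billion_params_on_one_cs2: float = 1.0) -> float:
    """Under ideal linear scaling, training time grows with model size and
    shrinks with the number of systems; the time unit here is arbitrary."""
    return params_billion * time_per_billion_params_on_one_cs2 / cs2_count

# A 1-billion-parameter model on one CS-2 and a 40-billion-parameter model on
# 40 CS-2s take the same time if the scaling really is linear.
print(training_time(1, 1))    # 1.0
print(training_time(40, 40))  # 1.0
```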
The Condor Galaxy series is owned
by Abu Dhabi–based G42, a holding
company with 10 AI-based businesses
including G42 Cloud, one of the largest
cloud-computing providers in the Middle
East. Feldman describes the relationship
as a “deep strategic partnership,” which
is what’s needed to get 36 exaflops up
and running in just 18 months, he says.
Feldman is planning to commute to and
A Whole Lot of Silicon

                                        CEREBRAS CONDOR GALAXY 1    NVIDIA DGX SUPERPOD
                                        (PRE-UPGRADE)
Nodes                                   32                          32
Accelerators                            32 WSE-2s                   256 H100 GPUs
Accelerator cores                       27 million                  4.46 million
AI compute                              2 exaflops (at FP16)        1 exaflops (at FP8)
Memory (terabytes)                      41                          84.5
CPUs                                    568                         64
Transistors                             >83 trillion                >20 trillion
Accelerator silicon (square millimeters) 1,480,160                  208,384

With 32 nodes, the first Cerebras Condor Galaxy 1 is half of its projected ultimate size. Archrival Nvidia makes a 32-node computer using groups of eight of its H100 GPUs. Condor Galaxy 1 ultimately uses a lot more silicon than Nvidia's similarly scaled system.
from the United Arab Emirates for several months beginning later this year to
help manage the collaboration, which will
“substantially add to the global inventory
of AI compute,” he says. Cerebras will
operate the supercomputers for G42 and
can rent resources its partner is not using
for internal work.
Demand for training large neural networks has shot up, according to Feldman.
The number of companies training neural-network models with 50 billion or
more parameters went from two in 2021
to more than 100 this year, he says.
Obviously, Cerebras isn’t the only
one going after businesses that need to
train really large neural networks. Big
players such as Amazon, Google, Meta,
and ­Microsoft have their own offerings.
­Computer clusters built around Nvidia
GPUs dominate much of this business,
but some of these companies have developed their own silicon for AI, such as
Google’s TPU series and Amazon’s Trainium. There are also startup competitors
to Cerebras, making their own AI accelerators and computers, including Habana
(now part of Intel), Graphcore,
and SambaNova.
Examples of huge AI systems
abound. For example, Google
constructed a system containing 4,096 of its TPU v4 accelerators for a total of 1.1 exaflops.
That system ripped through the
BERT natural-language processor neural network, which
is much smaller than today’s
LLMs, in just over 10 seconds.
Google also runs Compute
Engine A3, which is built around
Nvidia H100 GPUs and a custom infrastructure-processing unit
made with Intel. The cloud provider CoreWeave, in partnership with Nvidia, tested a system
of 3,584 H100 GPUs that trained a
benchmark representing the large
language model GPT-3 in just
over 10 minutes. In 2024, Graphcore plans to build a 10-exaflop
system called the Good computer
made up of more than 8,000 of its Bow processors.
THE BIG PICTURE
Space Flight
By Willie D. Jones
Until recently, space
trips were exclusively
for highly trained
personnel handpicked
by national space
agencies. But with
the debut of Virgin
Galactic’s VSS Unity
space plane (and
its VMS Eve spacecraft
carrier), tourist
trips to space could
become as quotidian
as intercontinental
flights. (“VSS” stands
for “Virgin Space Ship”
and “VMS” for “Virgin
Mother Ship.”) This
image shows Anastatia
Mayers—who, at 18 years
old, is the youngest-ever person to go
to space—looking down
at Earth from aboard
VSS Unity after VMS
Eve carried the rocket-propelled space plane
to an altitude of 13.5
kilometers and Unity’s
boosters pushed it
to the outer reaches
of Earth’s atmosphere.
VMS Eve is designed
to touch down on an
airport runway instead
of splashing down
in an ocean, making
it available for reuse
without undergoing
major repairs.
If Virgin Galactic’s
plans pan out,
summer vacations and
holiday getaways
will soon be literally
out of this world.
PHOTOGRAPH BY
VIRGIN GALACTIC
TECH TO TINKER WITH
Cerberus 2100 uses a reprogrammable
system architecture to combine
two different 8-bit CPUs.
[Board photo labels: Fat-Spacer, Fat-Cavia, Fat-Scunk, Fat-Cat (I/O controller), Z80 and 6502 CPUs, and the video, character, high, and low memories]
Illustrations by James Provost
Software-Defined Architecture
Learn system-level design with this dual-CPU computer
BY BERNARDO KASTRUP

When the home computer
revolution arrived, it filled
my childhood with fascination and inspired me to
study computer engineering. I wanted to
design a microcomputer to my own specifications. But at school I was never
taught how a complete computer system
was put together. Instead we studied various subsystems and the theory of things
like digital signal processing and so on.
Somebody, somewhere else, would always be responsible for assembling the
whole system and making everything
work together.
This was unfortunate and unjustified:
Putting a complete working computer
together isn’t difficult, and it can give students critical early confidence in their
ability to live up to the label “computer
engineer.” So, having recently retired from
the high-tech industry, I decided to design
a didactical but fully functional computer
that could serve as a platform for learning
and experimenting with system-level
design issues—the Cerberus 2100.
I didn’t want to commit Cerberus to a
particular CPU, as doing so would conflate system-level architecture concepts
with the specific timings and control signals of that CPU. Much as a software-
engineering course focuses on the
structure of an algorithm rather than the
syntax of its implementation in a particular language, I wanted Cerberus to focus
on the system-level structure. Cerberus
is thus a multi-CPU system, featuring
both a Z80 and a W65C02S (6502), two
well-known workhorse 8-bit processors
that featured prominently in the
home-microcomputer era. There is a
wealth of resources available for learning
how to program these processors, which
are powerful enough to be useful and
entertaining, yet simple enough to master.
The problem, of course, is that these
two CPUs operate with very different
interfaces to other parts of the computer,
such as memory or input/output devices.
For instance, the 6502 uses a single control line to indicate whether it is reading
or writing to the data bus, while the Z80
uses two lines. This means the 6502’s
signal needs to be combined with the
signal from the system clock, via an AND
gate, to prevent memory miswrites, while
the Z80 has no such issue. Also, the Z80
has an output line to signal that the value
on the address bus is stable, a function
absent in the 6502. And so on.
These differences mean that I couldn’t
use a standard control bus in the Cerberus. Instead, I used a large complex programmable logic device (CPLD) chip I
dubbed “Fat-Spacer” to translate the
control signals of each CPU into an
abstraction layer. This layer defines the
system architecture. Fat-Spacer then
translates the output of the abstraction
layer into the appropriate input signals for
each component in the system. These two
steps of translation entail both Boolean
logic and timing control through flip-flops. I used a CPLD instead of an FPGA
(field-programmable gate array) because,
unlike FPGAs, CPLDs have a fixed propagation delay regardless of the Boolean
logic implemented in them. This is critical
because it allows users to make changes
to the system architecture—by reprogramming the CPLD—without having to
worry that the complexity of their changes
will take too long to pass through a chain
of logic gates, and so miss the timing windows imposed by the system clock.
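As a rough illustration of what that translation layer has to do, here is the read/write-strobe logic for the two CPUs expressed in software. This is a conceptual sketch, not Kastrup's actual CPLD equations, and the signal names are simply the conventional ones for these processors:

```python
from dataclasses import dataclass

@dataclass
class BusControl:
    """Abstracted memory-control outputs that the rest of the system sees."""
    write_enable: bool
    output_enable: bool

def from_6502(rw_line: bool, phi2_clock: bool) -> BusControl:
    # The 6502 has a single R/W line (high = read, low = write). Its write
    # strobe must be gated with the clock, the AND gate mentioned in the text,
    # so memory is written only while the address and data are valid.
    return BusControl(write_enable=(not rw_line) and phi2_clock,
                      output_enable=rw_line)

def from_z80(mreq_n: bool, rd_n: bool, wr_n: bool) -> BusControl:
    # The Z80 uses separate active-low /MREQ, /RD, and /WR lines and asserts
    # /MREQ only once the address bus is stable, so no extra clock gating is needed.
    memory_cycle = not mreq_n
    return BusControl(write_enable=memory_cycle and not wr_n,
                      output_enable=memory_cycle and not rd_n)

if __name__ == "__main__":
    print(from_6502(rw_line=False, phi2_clock=True))       # 6502 write, clock high
    print(from_z80(mreq_n=False, rd_n=True, wr_n=False))   # Z80 memory write
```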
Because of its internal abstraction
layer, Cerberus is uniquely suitable for
expansion: A direct memory access
(DMA) expansion port is also connected
to Fat-Spacer. By directly allowing access
to system memory, I let the user add even
more CPUs and microcontrollers to the
system via the expansion port.
Another critical design challenge I
faced was to decouple the system-level
logic of the computer from the timings
of the video circuitry. Traditionally, these
two are tightly tied together so as to coordinate access to video and character
memories by the CPU and display circuitry without causing conflicts or artifacts. But with two CPUs and the DMA
expansion port, this wasn’t an option.
Instead, Cerberus uses two dual-ported static RAMs (SRAMs) as video
and character memories. Each port allows
asynchronous access to the memory’s
contents. One port of each SRAM is connected to the computer proper, while the
other is exclusive to the video circuitry.
Despite the dual-ported memories,
onscreen glitches could still occur if the
video circuitry read from a given address
as a CPU wrote to that same address. Fortunately, dual-ported SRAMs provide a
“BUSY” signal to indicate a conflict. This
signal is used by Fat-Spacer to pause the
CPUs for the duration of the conflict. The
control abstraction layer comes in very handy here too, as it already has the appropriate
translation logic for pausing the CPUs.
[Block diagram caption] The Z80 and 6502 processors use different control signals to interface with memory and interface chips. A reprogrammable logic chip, dubbed Fat-Spacer, translates these signals as required. Another reprogrammable logic chip handles storage and the keyboard interface, while a third generates video signals. (The diagram shows both CPUs on shared data and address buses, with Fat-Spacer providing the control glue; Fat-Cavia and Fat-Scunk driving the monitor from the 2-KB dual-ported video and character memories; Fat-Cat handling the keyboard, buzzer, and microSD card; two 32-KB SRAMs serving as low and high memory; and an expansion circuit.)
Fat-Spacer isn’t the only CPLD in
Cerberus: Three of them constitute the
system’s core chipset. Fat-Cavia continuously scans the video and character
memories, and sends bitmaps to Fat-Scunk, which then generates the appropriate RGB signals and sync pulses to
create a 320-by-240-pixel VGA output.
Meanwhile, as we’ve seen, Fat-Spacer
provides the glue logic. Finally, there’s an
additional chip: Fat-Cat, which is actually an ATmega328PB microcontroller.
This is used to handle I/O: The microcontroller manages a keyboard, buzzer,
the expansion protocol, and a microSD
card for storage. The I/O firmware is held
in the ATmega’s memory, meaning it
leaves no memory footprint in the
64 kilobytes of RAM accessible to the
Z80 and 6502.
The Cerberus 2100 is an open hardware design, and complete details are available on my website. But for those who don't want to build their own machine from scratch, I am working with the European electronics company Olimex to offer a fully assembled version for sale shortly. I hope it helps students and
hobbyists to understand—and faculty
to teach—how a complete, fully functional computer can be put together,
regardless of the target CPU.
SHARING THE EXPERIENCES OF WORKING ENGINEERS

Careers: Kyle Clark
From hockey enforcer to high-flying eVTOL CEO
BY GLENN ZORPETTE
United Therapeutics founder Martine Rothblatt
[left], an early investor in Beta Technologies,
completed an evaluation flight of Beta’s
all-electric Alia aircraft alongside Beta CEO
Kyle Clark in 2021.
Kyle Clark, the 43-year-old founder and CEO
of Beta Technologies, is not quite your typical tech entrepreneur. For one thing, he’s a
former professional ice hockey player. Then,
too, many afternoons you won’t find him behind a
desk at the company’s headquarters near the airport
in Burlington, Vt. In fact, you won’t find him on the
premises at all because he’s up in the air, flying one of
the company’s radically innovative electric aircraft.
Among the hundreds of companies building electric vertical takeoff and landing (eVTOL) aircraft, Beta
has lately established itself as the clear No. 2, behind
Joby Aviation. On 2 October, Beta announced the
completion of a 17,500 square-meter manufacturing
facility in South Burlington that will eventually be
capable of producing 300 aircraft per year. No other
eVTOL company has comparable manufacturing
capabilities except for EHang, in China, although
Archer Aviation, Joby, Lilium, Overair, and Volocopter
are now operating or building production facilities.
It’s another memorable milestone for Clark, “the
most impressive polymath I’ve ever met,” says Dean
Kamen, an IEEE Honorary Member and president of
Deka Research & Development Corp. “He has the
most broad-based collection of skill sets and experience in physics, aerodynamics, structures, propulsion,
and electric motors. He’s remarkable.”
Growing up in Essex, Vt., Clark dreamed of flying
and building aircraft. But as a nearly 200-centimeter
(6-foot-6-inch) teenager, he also played ice hockey in
high school with a fierceness and physical style that
landed him a spot on the U.S. National Junior Team,
a group of young elite players being developed for
possible inclusion on the U.S. Olympic team. There
he became a legend for his energy and commitment:
He racked up 171 penalty minutes in one season,
which still stands as the U.S. National Junior Team
record. (He was also named team captain.)
Next stop: Harvard, in 1998, to pursue a bachelor’s
degree in engineering. He played on the university’s
hockey team, and also dreamed of building a radically
different kind of aircraft. During his freshman year,
he became consumed by an idea he had for “a
hybrid-electric aircraft that utilized a very high-power-density motorcycle engine to drive a pusher propeller in an aircraft with a high wing and a fly-by-wire system." It was the basis of the two aircraft now
being built at Beta Technologies. But getting those
aircraft built would be a roundabout journey, starting
with a detour into professional ice hockey. During his
junior year, he left Harvard after he was drafted by
the National Hockey League’s Washington Capitals.
“I went and played hockey for a while, but that’s
kind of where the Beta story starts,” he explains.
“I was always enamored with airplanes. I got my
signing bonus from the Capitals, and I literally went
straight to the airport and said, 'I want to get a pilot's license.'" And he did.
After knocking around the Capitals’ farm system
for a couple of years, Clark returned to Harvard to
finish his degree in materials science engineering.
After his junior year, he met Valery Kagan, an elderly
Russian-born engineer who taught Clark “some basic
principles of power electronics design.” Around the
same time, through a company where he interned,
Husky Injection Molding in Milton, Vt., he became
aware of “a problem in thixotropic magnesium molding,” a technique used to produce strong and lightweight parts out of magnesium.
In 2005, Clark, Kagan, and three others launched
iTherm Technologies in South Burlington. “It was my
job to work like hell to solve the problem,” Clark
recalls. That problem was a lack of power supplies
robust enough to withstand the demands of high-impedance induction heating, on which the magnesium molding technique depended.
"I built hundreds of power supplies and blew up hundreds of IGBTs [insulated gate bipolar transistors], just sitting there with an oscilloscope and LabVIEW for controls," he adds. This is how Clark got
his first intense experiences in real-world electrical engineering, which would later serve him well at Beta.

For his bachelor's degree thesis, Clark designed a flight-control system for that hybrid-electric aircraft of his dreams. It was named student paper of the year by Harvard's engineering department.

iTherm, meanwhile, became a profitable company and was sold to Dynapower, an energy-storage and power-conversion firm in South Burlington. With the proceeds from the sale, Clark got the chance to focus on aviation full time with the launch of Beta.

His big break came five years later during a chance meeting with the investor and entrepreneur Martine Rothblatt, who had made a fortune from starting up Sirius Satellite Radio. In 1996, Rothblatt founded United Therapeutics, a biotech company based in Silver Spring, Md., that she established with a long-term goal of greatly expanding rapid access to organs for transplantation. A centerpiece of her vision was an electric rotorcraft that could swiftly ferry the organs to hospitals.

With US $52 million from Rothblatt, according to Forbes, in 2017 Clark and a team of eight got to work. "In 10 months, we built a 4,000-pound [1,800-kilogram] electric vertical takeoff and landing prototype," Clark says.

[Photo caption] A full-scale proof-of-concept version of the Alia-250 eVTOL aircraft completed a piloted hover test at the Burlington International Airport, in Vermont [left]. Kyle Clark is not only CEO of Beta Technologies, he's also one of its test pilots. Here, Clark prepares to fly one of the company's two all-electric prototype aircraft [right].
It was an auspicious start. Today, Beta has some
600 employees and a market valuation in the range
of $1.5 billion to $2.3 billion, according to Dealroom.co.
It is building two electric aircraft based on the same
basic airframe, each with a 15-meter wingspan. Both
are designed to carry a pilot and either four passengers or three standard cargo pallets. The only major
difference between the two is related to horizontal
rotors: one has them, and the other doesn’t.
The Alia CX300 is an eCTOL (electric conventional takeoff and landing) aircraft with a single pusher-prop in back for propulsion. The Alia-250
adds four rotors on top for vertical lift, so it is an
eVTOL. So far, Beta has built a prototype of each,
both of which are flown nearly every day, Clark says.
The company has sales contracts or agreements
for its aircraft with Air New Zealand, Bristow Group, LCI Aviation, United Therapeutics, UPS, the U.S. Air Force, and the U.S. Army. Beta is also working on a network of charging stations in the United States capable of charging not only its aircraft but also conventional road EVs. It has built about a dozen such stations and has around 55 more in development.

[Sidebar]
Employer: Beta Technologies
Title: CEO
Education: Bachelor's degree in materials science engineering, Harvard
600 people work at Beta, which has a market valuation in the range of US $1.5 billion to $2.3 billion.
Clark, an IEEE member, advises young engineers
interested in working on eVTOLs to do “real” engineering. “We see people who are very good on analytical tools, but they haven’t developed the intuition
to understand where they’re going to take haircuts
because of design for manufacturing, or material
availability, or what can actually be made without a
massive tooling cost,” he says. “All these things
require an intuition that’s only developed by building
things. By micro experimentation.
“Sitting down and actually doing the hard work
of writing code makes you appreciate how hard it is
to actually just fix things in software when the software is safety critical,” he adds. “Molding things out
of composite makes you realize that ‘I can’t put that
radius in there to make that thermal shroud for the
power electronics.’ Building things with semiconductors, you realize, ‘Hey, that may have a datasheet, with
a heat-transfer coefficient between the junction and
the heat sink, but I’m never actually getting that kind
of transfer because thermal paste dries out.’ You start
to develop your own intuitive book of knowledge of
where the real gremlins hide in engineering.”
He stresses that success in engineering means
becoming familiar with products from many perspectives, not just in design and engineering but also manufacturing and end use.
“Everybody gets flight lessons for free here, so they
get to use the product and learn what it means to use
it,” he says. “You can’t be a good electrical engineer
unless you have generated enough empathy for the
people that are going to use the product that you’re
designing and also the people that are going to build
the product that you’re designing.”
Q&A
BY ELIZA STRICKLAND
5 Questions
for Luke Tan
Why his startup is producing green
hydrogen for a distillery
Dozens of companies are trying to replace
the typical method of producing hydrogen—from natural gas—with electrolysis,
in which an electric current is used to split
water into hydrogen and oxygen. The goal for many
of these companies is “green hydrogen,” in which
the electricity used comes from renewable sources.
Supercritical Solutions, a startup based in the
United Kingdom, is pioneering a kind of electrolysis
that begins with water in its supercritical state,
which combines the properties of liquids and gases.
The startup has found a partner in the liquor behemoth Beam Suntory. In Scotland, the two companies
are taking the first steps that could lead to the
world’s first zero-emission, hydrogen-powered
whisky production. Luke Tan, the cofounder and
chief product officer of Supercritical, spoke to IEEE
Spectrum about the project.
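For reference, whatever the operating temperature and pressure, every water electrolyzer drives the same overall water-splitting reaction; the equation below is included only as a reminder of that basic chemistry.
$$2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}$$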
What sets Supercritical’s technology apart?
Luke Tan: Supercritical has the world’s first
high-pressure, ultrahigh-efficiency electrolyzer. We
deliver two main things. With high temperature we
achieve class-leading efficiency. With high pressure,
our electrolyzer is able to natively produce over
200‑bar [20,000-kilopascal] hydrogen and over
200-bar oxygen without the need for any gas compressors. Essentially what we do is combine a typical
water electrolyzer system and a typical hydrogen compression system into one, making a simpler solution for the end user and driving down costs.
How does using water in its supercritical state give
you these advantages?
Tan: Operating at around 400 °C allows faster electrochemical kinetics [and therefore faster reaction
rates], which means that we require less power to
produce a given amount of hydrogen. In addition,
when you’re producing more and more hydrogen in
a typical electrolyzer cell, you will encounter mass
transport limitations—this is where gaseous hydrogen will interfere with the liquid water trying to
reach the electrically active sites. But because our
produced hydrogen is also in a supercritical state,
it can move away from the active site far more readily, and the reactant water can get into the active site
with minimal resistance.
Luke Tan and his cofounders started Supercritical in 2020 with a mission to pioneer hydrogen technology that will enable a transition from fossil fuels. Prior to Supercritical, he was at the sustainable technology company Johnson Matthey working on hydrogen-to-methanol plants and hydrogen fuel cells.
What sectors did you originally imagine you'd be applying this technology to?
Tan: Hydrogen goes into synthetic fertilizers, it goes into fuel refineries, and it goes into chemical production. And we are still targeting those sectors today. We did a project with ScottishPower that demonstrated that by delivering hydrogen with Supercritical's electrolyzer, we could drive down the cost of [producing] ammonia by 21 percent.
But how did you get involved in using hydrogen
to produce whisky?
Tan: The whisky sector wasn’t necessarily top of our
list, but it certainly is a part of the puzzle that needs
to be solved. Critically for us, it uses industrial heat,
which is one of our target sectors. We want to use
green hydrogen to decarbonize some of the biggest
industries today that utilize fossil fuel to generate
heat for their processes. During this project, Beam
Suntory will demonstrate, for the first time ever, that
they can use hydrogen in place of natural gas [to
produce heat] directly under a copper still to create
as good, if not better, whisky!
When will someone first sip a dram of whisky from
the hydrogen-powered still? What will it taste like?
Tan: Well, I hope it tastes better than other whiskies;
it will certainly feel better. Within the time frame of
the actual project, Beam Suntory will produce spirit.
But [the spirit] has to undergo a minimum of three
years of maturation before it can be called whisky.
And it’s anticipated that this [batch] will be matured
for nearer to 10 years to give it the credit it deserves
for being the first of a kind.
→ Henry Evans gives
a rose to his wife, Jane,
with the assistance
of a Stretch robot.
Photo by Peter Adams
A ROBOT FOR HUMANITY
HOW ROBOTS CAN EMPOWER PEOPLE WHO NEED THEM THE MOST
BY EVAN ACKERMAN
IN 2010, HENRY EVANS saw a robot on TV. It
was a PR2, from the robotics company Willow
Garage, and Georgia Tech robotics professor Charlie
Kemp was demonstrating how the PR2 was able to
locate a person and bring them a bottle of medicine.
For most of the people watching that day, the PR2 was
little more than a novelty. But for Evans, the robot had
the potential to be life changing. “I imagined PR2 as my
body surrogate,” Evans says. “I imagined using it as a
way to once again manipulate my physical environment
after years of just lying in bed.”
→ Vy Nguyen [left]
is an occupational
therapist at Hello
Robot who has been
working extensively
with both Henry and
Jane to develop
useful applications
for Stretch in their
home.
Eight years earlier, at the age of 40, Henry was
working as a CFO in Silicon Valley when he suffered
a stroke-like attack caused by a birth defect, and
overnight, became a nonspeaking person with quadriplegia. “One day I was a 6'4", 200 Lb. executive,”
Evans wrote on his blog in 2006. “I had always been
fiercely independent, probably to a fault. With one
stroke I became completely dependent for everything…. Every single thing I want done, I have to ask
someone else to do, and depend on them to do it.”
Evans is able to move his eyes, head, and neck, and
slightly move his left thumb. He can control a computer cursor using head movements and an onscreen
keyboard to type at about 15 words per minute,
which is how he communicated with IEEE Spectrum
for this story.
After getting in contact with Kemp at Georgia
Tech, and in partnership with Willow Garage, Evans
and his wife, Jane, began collaborating with the
roboticists on a project called Robots for Humanity.
The goal was to find ways of extending independence for people with disabilities, helping them and
their caregivers live better and more fulfilling lives.
The PR2 was the first of many assistive technologies
developed through Robots for Humanity, and Henry
was eventually able to use the robot to (among other
things) help himself shave and scratch his own itch
for the first time in a decade.
“Robots are something that was always science
fiction for me,” Jane Evans told me. “When I first
began this journey with Henry, it never entered my
mind that I’d have a robot in my house. But I told
Henry, ‘I’m ready to take this adventure with you.’
Everybody needs a purpose in life. Henry lost that
purpose when he became trapped in his body, and
to see him embrace a new purpose—that gave my
husband his life back.”
Henry stresses that an assistive device must not
only increase the independence of the disabled
person but also make the caregiver’s life easier.
“Caregivers are super busy and have no interest in
(and often no aptitude for) technology,” he explains.
“So if it isn’t dead simple to set up and it doesn’t save
them a meaningful amount of time, it very simply
won’t get used.”
While the PR2 had a lot of potential, it was too
big, too expensive, and too technical for regular realworld use. “It cost $400,000,” Jane recalls. “It
weighed 400 pounds. It could destroy our house if
it ran into things! But I realized that the PR2 is like
the first computers—and if this is what it takes to
learn how to help somebody, it’s worth it.”
For Henry and Jane, the PR2 was a research project rather than a helpful tool. It was the same for
Kemp at Georgia Tech—a robot as impractical as
the PR2 could never have a direct impact outside of
a research context. And Kemp had bigger ambitions.
“Right from the beginning, we were trying to take
our robots out to real homes and interact with real
people,” he says. To do that with a PR2 required the
assistance of a team of experienced roboticists and
a truck with a powered lift gate. Eight years into the
Robots for Humanity project, they still didn’t have
a robot that was practical enough for people like
Henry and Jane to actually use. “I found that incredibly frustrating,” Kemp recalls.
In 2016, Kemp started working on the design of
a new robot. The robot would leverage years of
advances in hardware and computing power to do
many of the things that the PR2 could do, but in a
way that was simple, safe, and affordable. Kemp
found a kindred spirit in Aaron Edsinger, who like
Kemp had earned a Ph.D. at MIT under Rodney
Brooks. Edsinger had cofounded a robotics startup
that was acquired by Google in 2013. “I’d become
frustrated with the complexity of the robots being
built to do manipulation in home environments and
around people,” says Edsinger. “[Kemp’s idea]
solved a lot of problems in an elegant way.” In 2017,
Kemp and Edsinger founded Hello Robot to make
their vision real.
The robot that Kemp and Edsinger designed is
called Stretch. It’s small and lightweight, easily movable by one person. And with a commercial price of
US $20,000, Stretch is a tiny fraction of the cost of
a PR2. The lower cost is due to Stretch’s simplicity—
it has a single arm, with just enough degrees of freedom to allow it to move up and down and extend
and retract, along with a wrist joint that bends back
and forth. The gripper on the end of the arm is based
on a popular (and inexpensive) assistive grasping
tool that Kemp found on Amazon. Sensing is focused
on functional requirements, with basic obstacle
avoidance for the base along with a depth camera
on a pan-and-tilt head at the top of the robot. Stretch
is also capable of performing basic tasks autonomously, like grasping objects and moving from room
to room.
This minimalist approach to mobile manipulation has benefits beyond keeping Stretch affordable.
Robots can be difficult to manually control, and each additional joint adds extra
complexity. Even for nondisabled users, directing a
robot with many different degrees of freedom using
a keyboard or a game pad can be tedious, and requires
substantial experience to do well. Stretch’s simplicity
can make it a more practical tool than robots with
more sensors or degrees of freedom, especially for
novice users, or for users with impairments that may
limit how they’re able to interact with the robot.
↑ To scratch an itch
on his head, Henry
uses a hairbrush
that has been
modified with a soft
sleeve to make it
easier for the robot
to grasp.
“The most important thing for Stretch to be
doing for a patient is to give meaning to their life,”
explains Jane Evans. “That translates into contributing to certain activities that make the house run,
so that they don’t feel worthless. Stretch can relieve
some of the caregiver burden so that the caregiver
can spend more time with the patient.” Henry is
acutely aware of this burden, which is why his focus
with Stretch is on “mundane, repetitive tasks that
otherwise take caregiver time.”
Vy Nguyen is an occupational therapist who has
been working with Hello Robot to integrate Stretch
into a caregiving role. With a $2.5 million Small Business Innovation Research grant from the National
Institutes of Health and in partnership with Wendy
Rogers at the University of Illinois Urbana-Champaign and Maya Cakmak at the University of
Washington, Nguyen is helping to find ways that
Stretch can be useful in the Evans’s daily lives.
There are many tasks that can be frustrating for
the patient to depend on the caregiver for, says
Nguyen. Several times an hour, Henry suffers from
itches that he cannot scratch, and which he describes
as debilitating. Rather than having to ask Jane for
help, Henry can instead have Stretch pick up a
scratching tool and use the robot to scratch those
itches himself. While this may seem like a relatively
small thing, it’s hugely meaningful for Henry,
improving his quality of life while reducing his reliance on family and caregivers. “Stretch can bridge
the gap between the things that Henry did before
his stroke and the things he aspires to do now by
enabling him to accomplish his everyday activities
and personal goals in a different and adaptable way
via a robot,” Nguyen explains. “Stretch becomes an
extension of Henry himself.”
This is a unique property of a mobile robot that
makes it especially valuable for people with disabilities: Stretch gives Henry his own agency in the
world, which opens up possibilities that go far
beyond traditional occupational therapy. “The
researchers are very creative and have found several
uses for Stretch that I never would have imagined,”
Henry notes. Through Stretch, Henry has been able
to play poker with his friends without having to rely
on a teammate to handle his cards. He can send recipes to a printer, retrieve them, and bring them to
Jane in the kitchen as she cooks. He can help Jane
deliver meals, clear dishes away for her, and even
transport a basket of laundry to the laundry room.
Simple tasks like these are perhaps the most meaningful, Jane says. “How do you make that person feel
like what they're contributing is important and worthwhile? I saw Stretch being able to tap into that. That's huge."
↑ Henry's control interface features multiple camera views and large buttons to make it easier for Henry to do tasks like feeding himself [bottom left]. Using Stretch to manipulate cards, Henry can play games with friends and family without having to team up with someone else [top left]. Henry can also help Jane do chores in the kitchen [right].
One day, Henry used Stretch to give Jane a rose. Before that, she says, "Every time he would pick flowers for me, I'm thanking Henry along with the caregiver. But when Henry handed me the rose through Stretch, there was no one else to thank but him. And the joy in his face when he handed me that rose was unbelievable."
Henry has also been able to use Stretch to interact with his three-year-old granddaughter, who isn't quite old enough to understand his disability and previously saw him, says Jane, as something like a piece of furniture. Through Stretch, Henry has been able to play games and draw pictures with his granddaughter, who calls him "Papa Wheelie." "She knows it's Henry," says Nguyen, "and the robot helped her see him as a person who can play with and have fun with her in a very cool way."
↓ Through Stretch, Henry can spend time with his granddaughter and play games with her.
The person working the hardest to transform Stretch into a practical tool is Henry. That means "pushing the robot to its limits to see all it can do," he says. While Stretch is physically capable of doing many things (and Henry has extended those capabilities by designing custom accessories for the robot), one of the biggest challenges for the user is finding the right way to tell the robot exactly how to do what you want it to do. This can be especially difficult for people with disabilities and the elderly, who might not be able to use a mouse and keyboard or a game pad for controlling the robot's multiple degrees of freedom.
Henry collaborated with the researchers to develop his own graphical user interface to make manual control of Stretch easier, with multiple camera views and large onscreen buttons. But Stretch's potential for partially or fully autonomous
operation is ultimately what will make the robot
most successful. The robot relies on “a very particular kind of autonomy, called assistive autonomy,”
Jane explains. “That is, Henry is in control of the
robot, but the robot is making it easier for Henry to
do what he wants to do.” Picking up his scratching
tool, for example, is tedious and time consuming
under manual control, because the robot has to be
moved into exactly the right position to grasp the
tool. Assistive autonomy gives Henry higher-level
control, so that he can direct Stretch to move into
the right position on its own. Stretch now has a
menu of prerecorded movement subroutines that
Henry can choose from. “I can train the robot to
perform a series of movements quickly, but I’m still
in complete control of what those movements are,”
he says.
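As a rough sketch of how such a menu of prerecorded subroutines can be wired up, consider the Python below. It is not Hello Robot's software; the `robot` object and its `current_pose()` and `move_to()` methods are hypothetical stand-ins for whatever interface the real system exposes.

```python
# Minimal sketch of an "assistive autonomy" menu: prerecorded movement
# subroutines that the user triggers, so the user stays in charge of *what*
# runs while the robot handles *how* it runs. The `robot` object and its
# current_pose()/move_to() methods are hypothetical, not Hello Robot's API.
import time
from dataclasses import dataclass, field

@dataclass
class Subroutine:
    name: str
    waypoints: list = field(default_factory=list)

    def record_step(self, robot):
        # Capture the robot's current joint pose as the next waypoint.
        self.waypoints.append(robot.current_pose())

    def play(self, robot, pause_s=0.5):
        # Replay the stored poses in order.
        for pose in self.waypoints:
            robot.move_to(pose)
            time.sleep(pause_s)

MENU = {
    "1": Subroutine("pick up the scratching tool"),
    "2": Subroutine("hand an object to Jane"),
}

def run_menu(robot):
    choice = input("Choose a subroutine (1-2): ").strip()
    if choice in MENU:
        MENU[choice].play(robot)
```

The design point is the one Henry and Jane describe: the robot supplies the repeatable motion, while the user decides which motion runs and when.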
Henry adds that getting the robot's assistive autonomy to a point where it's functional and easy to use is the biggest challenge right now. Stretch can autonomously navigate through the house,
and the arm and gripper can be controlled reliably
as well. But more work needs to be done on providing simple interfaces (like voice control), and
on making sure that the robot is easy to turn on
and doesn’t shut itself off unexpectedly. It is, after
all, still research hardware. Once the challenges
with autonomy, interfaces, and reliability are
addressed, Henry says, “the conversation will turn
to cost issues.”
A $20,000 price tag for a robot is substantial, and
the question is whether Stretch can become useful
enough to justify its cost for people with cognitive
and physical impairments. “We’re going to keep
iterating to make Stretch more affordable,” says
Hello Robot’s Charlie Kemp. “We want to make
robots for the home that can be used by everyone,
and we know that affordability is a requirement for
most homes.”
But even at its current price, if Stretch is able to
reduce the need for a human caregiver in some situations, the robot will start to pay for itself. Human
care is very expensive—the U.S. average is over
$5,000 per month for a home health aide, which is
simply unaffordable for many people, and a robot
that could reduce the need for human care by a few
hours a week would pay for itself within just a few
years. And this isn’t taking into account the value of
care given by relatives. Even for the Evanses, who
do have a hired caregiver, much of Henry’s daily care
falls to Jane. This is a common situation for families
to find themselves in, and it’s also where Stretch can
be especially helpful: by allowing people like Henry
to manage more of their own needs without having
to rely exclusively on someone else’s help.
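To make that back-of-the-envelope argument concrete, here is the arithmetic with illustrative numbers. The hourly rate and the hours saved are assumptions chosen to be roughly consistent with the figures above; they are not data from Hello Robot or the Evans family.

```python
# Rough payback estimate for a $20,000 Stretch if it offsets a few hours of
# paid care per week. The rate and the hours saved are illustrative assumptions.
robot_cost = 20_000            # US $, commercial price of Stretch
aide_hourly_rate = 27          # US $/hour, in line with care costing over $5,000/month
hours_saved_per_week = 4       # "a few hours a week"

monthly_savings = aide_hourly_rate * hours_saved_per_week * 52 / 12
payback_years = robot_cost / monthly_savings / 12
print(f"{payback_years:.1f} years")   # about 3.6 years with these assumptions
```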
Stretch does still have some significant limitations. The robot can lift only about 2 kilograms, so
it can’t manipulate Henry’s body or limbs, for example. It also has no way of going up and down stairs,
is not designed to go outside, and still requires a lot
of technical intervention. And no matter how capable
Stretch (or robots like Stretch) become, Jane Evans
is sure they will never be able to replace human caregivers, nor would she want them to. “It’s the look in
the eye from one person to another,” she says. “It’s
the words that come out of you, the emotions. The
human touch is so important. That understanding,
that compassion—a robot cannot replace that.”
← Stretch is a relatively small robot that one person can easily move, but it has enough range of motion to reach from the floor to countertop height.
Stretch may still be a long way from becoming a consumer product, but there's certainly interest in it, says Nguyen. "I've spoken with other people who have paralysis, and they would like a Stretch to promote their independence and reduce the amount of assistance they frequently ask their caregivers to
provide.” Perhaps we should judge an assistive
robot’s usefulness not by the tasks it can perform
for a patient, but rather on what the robot represents
for that patient, and for their family and caregivers.
Henry and Jane’s experience shows that even a robot
with limited capabilities can have an enormous
impact on the user. As robots get more capable, that
impact will only increase.
“I definitely see robots like Stretch being in people’s homes,” says Jane. “When, is the question? I
don’t feel like it’s eons away. I think we are getting
close.” Helpful home robots can’t come soon
enough, as Jane reminds us: “We are all going to be
there one day, in some way, shape, or form.” Human
society is aging rapidly. Most of us will eventually
need some assistance with activities of daily living,
and before then, we’ll be assisting our friends and
family. Robots have the potential to ease that burden
for everyone.
And for Henry Evans, Stretch is already making
a difference. “They say the last thing to die is hope,”
Henry says. “For the severely disabled, for whom
miraculous medical breakthroughs don’t seem feasible in our lifetimes, robots are the best hope for
significant independence."
BUOYANT BEHEMOTHS
The global race is on to tap potent winds far offshore
BY PETER FAIRLEY
The steadiest,
strongest wind blows
over deep ocean water.
Floating wind
turbines are designed
to exploit that huge
untapped potential.
IN A HANGAR at the University
of Edinburgh, a triangular steel
contraption sits beside a giant
tank of water. Inside the tank, a
technician in a yellow dinghy
adjusts equipment so that the
triangular structure can be
hoisted into the water to see how
it deals with simulated waves
and currents. One day soon, a
platform 50 times as large may
float in the deep waters of the
North Sea, buoying up a massive
wind turbine to harvest the
steady, strong breezes there.
About an hour’s ride up the
coast, full-scale 3,000-tonne
behemoths already float in
Aberdeen Bay, capturing enough
wind energy to electrify nearly
35,000 Scottish households.
The prototype at the FloWave facility—
one of 10 new floating wind-power designs
tested here—is progressing fast, says Tom
Davey, who oversees testing. “Everything
you see here has been manufactured and
put in the water in the last couple months.”
There’s good reason for this hustle:
The United Kingdom wants to add 34
gigawatts of offshore wind power by
2030, en route to decarbonizing its grid
by 2035. But the shallow waters east of
London are already packed with wind
turbines. Scotland’s deeper waters are
therefore the U.K.'s next frontier. Auctions have set aside parcels for 27 floating wind farms, with a combined capacity exceeding 24 GW.
This scale model of a floating wind-turbine platform is designed to simultaneously capture wave energy. It's one of 10 new designs tested at the University of Edinburgh's FloWave facility.
Equinor installed floating wind farms in Scotland and Norway. The turbines and platforms were assembled at deepwater ports in Norway and then towed out to sea.
This rush to deep water is a global
phenomenon. To arrest the accelerating
pace of a changing climate, the world
needs a lot more clean energy to electrify
heating, transportation, and industry and
to displace fossil-fuel generation. Offshore wind power is already playing a key
role in this transition. But the steadiest,
strongest wind blows over deep water—
well beyond the 60- to 70-meter limit for
the fixed foundations that anchor traditional wind turbines to the ocean floor.
And in many places, such as North America’s deep Pacific coast, the strongest and
steadiest wind blows in the evening,
which would perfectly complement solar
energy’s daytime peaks.
Hence the push for wind platforms
that float. The Biden administration has
called for 15 GW of floating offshore wind
capacity in the United States by 2035, and
recent research suggests that the U.S.
Pacific coast could support 100 GW more
by midcentury. Ireland, South Korea, and
Taiwan are among the other countries
with bold floating wind ambitions.
The question is how to scale up the
technology to gigawatt scale. This global
debate is pitting innovation against risk.
On the innovation end are people like
Davey and the FloWave team, who’ve
already advanced several floating wind
devices to sea trials. One FloWave-tested
platform, engineered by Copenhagen-based Stiesdal Offshore, was recently
selected for a 100-megawatt wind farm to
be built off Scotland’s northern tip in 2025.
Established tech companies, however,
argue that their more conservative
designs are ready to go today, and at
bigger scale. What the industry really
needs to drive down costs, they say, is
economies of scale. “In our view, this is
purely a deployment question,” says
Aaron Smith, chief commercial officer
for the floating wind-tech developer
Principle Power, based in Emeryville,
Calif., whose platforms support the
190-meter-high, 9.5-MW turbines operating in Aberdeen Bay.
If governments provide consistent,
long-term subsidies, industry standardization and mass production will deliver
the gigawatts, Smith says. “We have
the technology. We’re just angling for
the right market conditions to deploy
that at scale.”
France's BW Ideol uses a square of concrete for its floating platform [far left]. Like Principle Power's steel triangle [left], its horizontal lines extend its buoyancy, keeping it stable.
To fully understand what developers are up against, it helps to know how hard it is to deploy any kind of wind power at sea. The 15-MW turbines being ordered today for tomorrow's offshore wind farms weigh roughly 1,000 tonnes. The foundations of traditional offshore wind turbines are also massive steel or concrete structures that have to be embedded in the ocean floor. And installing a turbine atop a tower that's twice as tall as the Statue of Liberty requires dedicated and costly vessels, which are in short supply worldwide.
You can do without such vessels by using a floating platform. The equipment can be fully assembled on shore and then towed to the site. But having a platform that floats compounds the
challenge of supporting the towering
turbine.
To stabilize the first floating wind
farm, completed in 2017 about 50 kilometers northeast of the Aberdeen project, Norwegian energy giant Equinor
used a ballasted steel column that
extends 78 meters into the water. This
dense mass, called a spar platform,
works like the keel of a boat. Equinor
used the same design for an 88-MW,
11-turbine array—the world’s largest,
though probably not for long—completed this year in Norway. At that project, cables transfer the electricity to oil
and gas platforms, rather than delivering
the power back to shore.
For its next floating wind projects,
Equinor plans to use the more conservative semisubmersible design, a technology perfected for oil and gas
platforms. Semisubmersibles don’t go
deep the way spar platforms do; instead,
they achieve stability by extending their
buoyancy horizontally. Principle
­Power’s WindFloat is a three-sided
semisubmersible platform that is
roughly 70 meters on a side. A concrete
square variant from France’s BW Ideol
is 35 to 55 meters on a side.
Chains and anchors in the seabed
prevent these platforms from spinning
or drifting, which is crucial for minimizing the movements that would flex and
fatigue the turbines’ power cables. Some
platforms, such as WindFloat, shift ballast around to dampen wave action or
to keep the rotor perpendicular to the
wind so as to maximize energy capture.
­WindFloat moves the water ballast with
pumps that run for about 20 minutes a
day. “You’re naturally going to be heeling
out of the wind, just like with a sailboat.
We’re shifting the water balance to compensate,” explains Smith.
Principle Power then marries conventional wind turbines to the company’s
floating platforms, making small but vital
tweaks to the turbine’s control system to
compensate for the differences between
fixed and floating conditions. For example, if a floating platform starts to tip due
to strong waves, a control system
designed for a fixed foundation may
interpret the movement as a change in
wind speed and then pitch the blades in
response. That correction could instead
amplify the rocking motion. WindFloat’s
turbine controls are tuned to prevent
such dangerous feedback.
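A toy simulation makes the coupling easier to see. The snippet below is not a model of WindFloat or of any real controller, and every constant in it is invented; it only shows how a controller written for a fixed tower reads platform motion as a change in wind speed, adjusts blade pitch, and thereby feeds force back into the platform it is standing on.

```python
# Toy model: platform pitch as a damped oscillator forced by rotor thrust,
# with a naive blade-pitch controller that only sees "apparent" wind.
# All constants are invented for illustration.
import math

DT, WIND, HUB = 0.1, 11.0, 100.0          # time step (s), true wind (m/s), hub height (m)
STIFF, DAMP, INERTIA = 4.0e8, 2.0e7, 1.0e9  # crude platform-pitch dynamics
GAIN = 0.05                                # fixed-foundation style proportional gain

platform_pitch, pitch_rate, blade_pitch = 0.02, 0.0, 0.0

for _ in range(600):
    hub_velocity = pitch_rate * HUB            # platform rocking moves the hub fore-aft
    apparent_wind = WIND - hub_velocity        # what the controller "sees"

    # Fixed-tower logic: more apparent wind -> feather the blades.
    blade_pitch += GAIN * (apparent_wind - WIND) * DT

    # Rough aerodynamic surrogate: thrust rises with apparent wind, falls as blades feather.
    thrust = 1.5e6 * (apparent_wind / WIND) ** 2 * max(0.0, math.cos(3 * blade_pitch))

    # The modulated thrust forces the very platform motion the controller reacted to.
    accel = (thrust * HUB - STIFF * platform_pitch - DAMP * pitch_rate) / INERTIA
    pitch_rate += accel * DT
    platform_pitch += pitch_rate * DT
```

One common mitigation, in general terms, is to slow the pitch controller's response below the platform's natural rocking frequency so the two motions stop reinforcing each other.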
Until four or five years ago, floating
wind developers had to sort out such
issues on their own, because most turbine manufacturers weren’t interested in
working with them. But now that developers are shopping for dozens of turbines
for gigawatt-scale floating projects, turbine manufacturers are finally devoting
engineering resources to the cause.
Thomas Choisnet, until recently
chief technology officer of BW Ideol,
says the current generation of 15-MW
turbines developed for fixed-foundation
wind farms also have specifications for
floating. “They are making sure that
everything works in this moving environment,” he says. Floating projects
thus benefit from the decades of design
optimization and manufacturing scale
that went into building today’s conventional offshore wind installations.
Beyond the technological
advantages of using a tried-and-true approach, there's a financial upside, Smith says. Floating wind
developers must convince risk-averse
bankers and insurers to back their projects, and it helps to be able to point to
your project’s use of established technology. In years past, offshore wind investors who backed innovative but flawed
designs suffered huge losses.
Gigawatt-scale offshore installations also require massive public and
private investments in ports and supply
chains. Consider the 960-MW Buchan
wind farm that Ideol is developing for
the Scottish North Sea. Because the
project includes a seasoned technology
provider, it is moving faster than most.
The consortium has already secured
connections to the grid, and Ideol has
secured 34 hectares east of Inverness
to manufacture its platforms.
The owners of the mothballed
Ardersier Port, which once serviced oil
and gas platforms, plan to work with
Ideol to transform it into a regional hub
that will deliver floating wind platforms
to projects across the North Sea. To produce the steel-reinforced concrete for
Ideol’s platforms, Ardersier will get a
new concrete plant, an oil-rig decommissioning facility, and the U.K.’s first
new steelworks in half a century, to recycle the rigs’ steel. The steel mill, says
Ideol, will be one of the world’s first to
replace metallurgical coal with renewable electricity and hydrogen.
Building superstable platforms like
Ideol's and Principle Power's to accommodate conventional turbines is expensive. According to the consulting firm
BloombergNEF, recent floating projects
cost up to US $10 million per megawatt.
The resulting power is roughly three
times as expensive as generation from
fixed-bottom offshore wind. And those
high costs are hindering developers’ ability to clinch long-term power-supply
contracts with utilities. In June, energy
consultancy 4C Offshore cut its global
floating wind-power projection for 2030
by nearly a quarter compared with its
projection from a year earlier.
At the Floating Offshore Wind Turbines conference held last May, several
developers called on leading turbine manufacturers, such as Vestas and General
Electric, to adapt their hardware to help
reduce the cost of floating wind. For
example, if turbines could deal with more
motion, then floating platforms could be
smaller, and thus less expensive, says
Cédric Le Bousse, director for marine
renewable energy for the French utility
Électricité de France, which recently
installed a three-turbine floating wind
demonstration near Marseille. As it is, he
says, floating platforms must be
“­over-dimensioned” to achieve the strict
limits on movements set by the turbine
manufacturers.
Meanwhile, floating wind’s
mold-breakers are offering an
ever-expanding diversity of
technology options. At least 80 designs
for platforms or integrated platform-­
turbines now vie for the floating wind
market.
For starters, there are dozens of platform designs. There are semisubmersibles that seat the turbine toward the
center of the structure, such as Stiesdal’s
tetrahedral TetraSub. That geometry distributes the rotor’s weight and torquing
forces and reduces the platform’s weight
and cost. There’s a 40,000-tonne spar
platform that replaces the steel column
with a cheaper, 285-meter-long column
of concrete.
More radical floating wind-power
designs flout decades-old engineering
assumptions. Many of these assumptions
make less sense far offshore, says Klaus
Ulrich Drechsel, an offshore-energy
engineering manager for the German
utility EnBW. “It’s important to not only
try to overcome the disadvantages but
also to take advantage of the potential
benefits of floating.”
For example, some floating turbine
configurations allow the rotor to face
downwind. Turbine makers had long
avoided doing that because it’s noisy, as
the rotating blades must repeatedly pass
through the wind’s “shadow” behind the
tower. But far offshore, the resulting
thump-thump-thump is unlikely to offend
anyone. And the wind itself can then
orient the rotor, eliminating the need for
motors and gears that keep conventional turbines facing into the wind.
Wind Catching Systems' multirotor design would be tethered to the ocean floor. Some floating wind designs call for a completely tetherless platform.
Another idea is to add more rotors
to a single tower. Multirotor turbines
can enhance production by forcing
more air to flow through the rotors. The
rotors’ counterrotation, meanwhile,
neutralizes the torquing force that tilts
single-rotor floaters to one side and
strains turbine towers.
Big corporate players are taking up
the multirotor and downwind designs.
Plenitude, a subsidiary of the Italian oil
and gas producer Eni, has bought into
EnerOcean, a Spanish firm that validated
its 12-MW twin-rotor design at FloWave.
Chinese turbine giant Mingyang Smart
Energy Group is manufacturing a floater
with dual 8.3-MW rotors, set for installation this year off Macau. EnBW is
cofunding that demonstration, in
exchange for exclusive rights to deploy
the design in Europe.
The trio of industrial Ph.D.s behind
Scottish startup Myriad Wind Energy
Systems figure two rotors can’t capture
the full benefits of multiple rotors. Their
90-meter-tall array has 12 rotors. “We’re
seeing it as kind of a ‘wind farm on a
stick,’” says Paul Pirrie, Myriad’s chief
technology officer.
Myriad uses a pivoting tree structure
to support the rotors. The frame is modular for easier transport. Integrated
tracks and lifts facilitate assembly, with
the turbine generators and rotors delivered to the base and raised into place.
Any faulty equipment, which otherwise
would be a logistical nightmare to repair
or replace out at sea, can return to the
tower’s bottom via the tracks and lifts,
with the replacement part hoisted aloft
via the same route.
Myriad hopes to have a demonstrator
installed on land in 2025. But the company is already facing competition from
Oslo startup Wind Catching Systems,
whose 126-rotor floating design is in
­prototype development with help from
General Motors.
Ultimately, floating wind
power could become completely untethered. Several
teams worldwide are now working on
wind ships, a concept first suggested by
the U.S. wind-energy pioneer William
Heronemus in 1972. He envisioned a
tetherless, self-propelled floating platform that would capture wind power, use
it to generate hydrogen, and store that
fuel for delivery to shore. (Heronemus
also launched the University of Massachusetts’ wind-engineering program,
training the engineers who launched the
U.S. wind-power industry.)
Autonomous wind ships cut out
the power cables and mooring chains
used by floating offshore wind platforms. Concepts like the UMass team’s
Wind Trawler, a modern version of
Heronemus’s wind ship, “are not depth
limited at all and so have a potentially
enormous capture area,” says James
Manwell, an engineering professor at
UMass Amherst.
Myriad Wind Energy Systems describes its 12-rotor wind turbine as a "wind farm on a stick."
Eliminating power cables and mooring
chains could also assuage some of the
concerns over offshore wind’s potential
effect on fisheries and wildlife. For example, fishing is generally banned within
wind farms to avoid entanglement of fishing gear. Such fishing-free zones tend to
enhance fisheries, providing a refuge in
which fish grow larger and reproduce.
Nevertheless, fishing interests often
oppose any limits to their freedom to fish,
arguing that restricted areas force them
to travel further. Citing such concerns,
Oregon’s governor recently called for a
pause in offshore wind preparations, even
though turbines floating off the Pacific
coast are still years away.
In the near term, the floating wind
industry faces a more intrinsic, logistical
problem. Namely, developers need ports
to start gearing up to build and launch
their massive wind machines. Scottish
Renewables, a regional industry group,
says that the U.K. “urgently” needs to
transform at least three ports into industrial hubs in order for the country to meet
its 2030 energy and emissions goals. And
yet the industry hasn’t settled on which
turbine and platform designs are best, and
so ports do not know how to gear up.
“The variables make for an absolute
minefield,” says Iain Sinclair, executive
director for renewables and energy
transition for the Edinburgh-based
Global Energy Group. Sinclair’s company owns three Scottish ports, including the Port of Nigg northeast of
Inverness, which has been identified as
one of the most promising places to
build floating wind turbines.
Back in the day, Nigg built about 40
percent of the North Sea’s oil and gas platforms. At the port’s peak in the 1970s and
1980s, 4,000 people worked there, and
petroleum fumes filled the air. Today,
you’re more likely to smell distillery
vapors wafting over the harbor—what
locals call the “angels’ share” of the Highland’s popular single malts. Nigg’s oil
terminal is shuttered, and drilling platforms visit infrequently. But there’s plenty
of bustle now, thanks to investments by
Global Energy Group that have turned
Nigg into a staging point for offshore wind
construction. When IEEE Spectrum visited, cranes were lifting enormous towers,
nacelles, and blades onto an installation
vessel, destined for a fixed-foundation
wind farm.
Sinclair is betting that building,
deploying, and maintaining floating
wind farms will ultimately dwarf the
last century’s oil and gas boom. And it
could happen fast: An independent
2021 report predicted that floating offshore wind would contribute £1.5 billion to Scotland’s economy by 2027
with only modest port upgrades, and
up to triple that amount with more strategic investments.
To determine where to focus Nigg’s
upgrades, Sinclair and his team have
assessed 57 floating wind designs and
zeroed in on a half-dozen of the most
promising. They’ve mapped those
designs onto Nigg’s existing and potential capabilities, such as manufacturing
tubular steel, assembling components in
the port’s 36,000 square meters of covered fabrication space, and pairing turbines to platforms along the harbor’s
1.2-km-long quayside.
What the floating wind industry
really needs now, says Sinclair, is sustained government support. At Nigg, that
means more than the U.K. government’s
£160 million for floating offshore wind
manufacturing announced in March,
which Scottish Renewables says “falls
woefully short.” It also means a plan to
develop Scotland’s ports, which could
cost £4 billion. The same concerns are
being voiced by floating wind proponents in the United States, France, Germany, and other countries, as they push
for their own infrastructure upgrades.
Henry Jeffrey, one of Tom Davey’s
colleagues at the University of Edinburgh, is a transplant from offshore oil
and gas engineering who now codirects
the U.K.’s Supergen Offshore Renewable
Energy R&D effort. He agrees that governments need to step up. Jeffrey says
politicians ask him all the time when
floating offshore wind technology will be
competitive.
“I say, ‘Well, it’s directly proportional
to your political will. It’s up to you to
make it happen,'" Jeffrey says. The technology is "as close and credible as government wants it to be."
The Creepy New Digital Afterlife Industry
These companies could bring you back—without your consent
By Wendy H. Wong
Illustrations by Harry Campbell
IT'S SOMETIME IN THE NEAR FUTURE. Your beloved father, who suffered
from Alzheimer’s for years, has died. Everyone in the family feels physically and emotionally exhausted from his long decline. Your brother raises the idea of remembering
Dad at his best through a startup “digital immortality” program called 4evru. He promises
to take care of the details and get the data for Dad ready.
After the initial suggestion, you
forget about it—until today, when 4evru emails to say that your father’s bot is available
for use. After some trepidation, you click the link and create an account. You slide on the
somewhat unwieldy VR headset and choose the augmented-reality mode. The familiar
walls of your bedroom briefly flicker in front of you.
Your father appears. It’s before his diagnosis.
He looks healthy and slightly brawny, as he did
throughout your childhood, sporting a salt-and-pepper beard, a checkered shirt, and a grin. You're
gave 4evru, the bot sounds like him and moves like
he did. This “Dad” puts his weight more heavily on
his left foot, the result of a high school football
injury, just like your father.
“Hey, kiddo. Tell me something I don’t know.”
The familiar greeting brings tears to your eyes.
After a few tentative exchanges to get a feel for this
interaction—it’s weird—you go for it.
“I feel crappy, really down. Teresa broke up with
me a few weeks ago,” you say.
“Aw. I’m sorry to hear it, kiddo. Breakups are
awful. I know she was everything to you.”
Your dad’s voice is comforting. The bot’s doing
a good job conveying empathy vocally, and the face
moves like your father’s did in life. It feels soothing
to hear his full and deep voice, as it sounded before
he got sick. It almost doesn’t matter what he says as
long as he says it.
You look at the time and realize that an hour has
passed. As you start saying goodbye, your father
says, “Just remember what Adeline always says to
me when I am down: ‘Sometimes good things fall
apart so better things can come together.’”
Your ears prick up at the sound of an unfamiliar
name—your mother’s name is Frances, and no one
in your family is named Adeline. “Who,” you ask
shakily, “is Adeline?”
Over the coming weeks, you and your family discover much more about your father through his bot
than he revealed to you in life. You find out who
­Adeline—and Vanessa and Daphne—are. You find
out about some half-siblings. You find out your father
wasn’t who you thought he was, and that he reveled
in living his life in secrecy, deceiving your family, and
other families. You decide, after some months of interacting with 4evru's version of your father, that while you are somewhat glad to learn who your father truly was, you're mourning the loss of the person you thought you knew. It's as if he died all over again.
Editor's note: This article is adapted from the author's new book, We, the Data: Human Rights in the Digital Age (MIT Press, 2023).
While 4evru is a fictional company, the technology
described isn’t far from reality. Today, a “digital
afterlife industry” is already making it possible to
create reconstructions of dead people based on the
data they’ve left behind.
Consider that Microsoft has a patent for creating
a conversational chatbot of a specific person using
their “social data.” Microsoft reportedly decided against turning this idea
into a product, but the company
didn't stop because of legal or rights-based reasons. Most of the 21-page
patent is highly technical and procedural, documenting how the software
and hardware system would be
designed. The idea was to train a
chatbot—that is, “a conversational
computer program that simulates
human conversation using textual
and/or auditory input channels”—
using social data, defined as “images,
voice data, social media posts, electronic messages,” and other types of
information. The chatbot would then
talk “as” that person. The bot might
have a corresponding voice, or 2D or
3D images, or both.
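To make the patent's idea concrete at toy scale, here is a sketch of the crudest possible version: answering "as" someone by retrieving the closest match from messages they actually wrote. A system of the kind the patent describes would train or fine-tune a generative model on such social data rather than do word matching, and the archive below is invented.

```python
# Toy "digital persona" bot: reply with the archived message that shares the
# most words with the prompt. The archive is invented; real systems would
# train a generative model on a person's messages, images, and voice data.
import re
from collections import Counter

archive = [
    "Breakups are awful. I know she was everything to you.",
    "Sometimes good things fall apart so better things can come together.",
    "Tell me something I don't know.",
]

def bag_of_words(text):
    return Counter(re.findall(r"[a-z']+", text.lower()))

def reply(prompt):
    prompt_words = bag_of_words(prompt)
    # Score each archived message by word overlap and return the best one.
    return max(archive, key=lambda msg: sum((bag_of_words(msg) & prompt_words).values()))

print(reply("I'm feeling awful since the breakup"))
```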
Although it’s notable that Big Tech
has made a foray into the field, most of
the activity isn’t coming from big corporate players. More than five years
ago, researchers identified a digital
afterlife industry of 57 firms. The current players include a company that
offers interactive memories in the
loved one’s voice (HereAfter); an entity
that sends prescheduled messages to
loved ones after the user’s death
(MyWishes); and a robotics company
that made a robotic bust of a departed
woman based on “her memories, feelings, and beliefs,” which went on to
converse with humans and even took
a college course (Hanson Robotics).
Some of us may view these options
as exciting. Others may recoil. Still
others may simply shrug. No matter
your reaction, though, you will almost
certainly leave behind digital traces.
Almost everyone who uses technology today is subject to “datafication”:
the recording, analysis, and archiving
of our everyday activities as digital
data. And the intended or unintended
consequences of how we use data while we’re living
have implications for every one of us after we die.
As humans, we all have to confront our own mortality. The datafication of our lives means that we
now must confront the fact that data about us will
very likely outlive our physical selves. The discussion about the digital afterlife thus raises several
important, interrelated questions. First, should we
be entitled to define our posthumous digital lives?
The decision not to persist in a digital afterlife
should be our choice. Yet could the decision to opt
out really be enforced, given how “sticky” and distributed data are? Is deletion, for which some have
advocated, even possible?
Data are essentially forever; we are most certainly not.
Many of us aren’t taking the necessary steps to
manage our digital remains. What will happen to
our emails, text messages, and photos on social
media? Who can claim them after we’re gone? Is
there something we want to preserve about ourselves for our loved ones?
Some people may prefer that their digital presence vanish with their physical body. Those who
are organized and well prepared might give their
families access to passwords and usernames in the
event of their deaths, allowing someone to track
down and delete their digital selves as much as
possible. Of course, in a way this careful preparation doesn’t really matter, since the deceased won’t
experience whatever postmortem digital versions
of themselves are created. But for some, the idea that someone could actually make them live again will feel wrong.
Digital estate planning: A checklist
☐ Do an inventory of your digital assets. These may include:
• hardware like computers, cellphones, and external drives and the data stored within, including files and browser history;
• data stored on the cloud;
• online accounts for things such as email, social media, photo and video sharing, gaming sites, shopping sites, money management sites, and crypto-currency wallets;
• any websites or blogs that you manage;
• intellectual property such as copyrighted material and code;
• business assets such as domain names, mailing lists, and customer information.
☐ Decide what you want done with each of these assets. Do you want accounts deleted, or preserved for your loved ones? Should revenue-generating assets like online stores be shut down, or continue to operate under someone else's guidance? Write down the plan, including necessary login and password information.
☐ Name a digital executor. This executor should be someone you trust to carry out your wishes.
☐ Store the plan in a secure location, either in digital or paper form. Make sure your next of kin know where your plan is and how to access it.
☐ Formalize it by adding the information about your executor and your plan to your will. Don't make the plan itself part of your will, because wills become public records, and you don't want sensitive information available to everyone.
For those who are more bullish on the technology, there are a growing number of apps to which
we can contribute while we’re alive so that our “datafied” selves might live on after we die. These products and possibilities, some creepier, some more
harmless, blur the boundaries of life and death.
Our digital profiles—our datafied selves—provide the means for a life after death and possible
social interactions outside of what we physically
took on while we were alive. As such, the boundaries
of human community are changing, as the dead can
now be more present in the lives of the living than
ever before. The impact on our autonomy and dignity hasn’t yet been adequately considered in the
context of human rights because human rights are
primarily concerned with physical life, which ends
with death. Thanks to datafication and AI, we no
longer die digitally.
We must also consider how bots—software
applications that interact with users or systems
online—might post in our stead after we’re gone. It
is indeed a curious twist if a bot uses data we generated to produce our anticipated responses in our
absence: Who is the creator of that content?
In 2015, Roman Mazurenko was hit and killed by a
car in Moscow. He died young, just on the precipice
of something new. Eugenia Kuyda met him when
they were both coming of age, and they became close
friends through a fast life of fabulous parties in
Moscow. They also shared an entrepreneurial spirit,
supporting one another’s tech startups. Mazurenko
led a vibrant life; his death left a huge hole in the
lives of those he touched.
In grief, Kuyda led a project to build a text bot
based on an open-source, machine-learning algorithm, which she trained on text messages she collected from Mazurenko’s family, friends, and her
own exchanges with Mazurenko during his life. The
bot learned “to be” Mazurenko, using his own words.
The data Mazurenko created in life could now continue as himself in death.
Mazurenko did not have the opportunity to consent to the ways in which data about him were used
posthumously—the data were brought back to life
by loved ones. Can we say that there was harm done
to the dead or his memory?
This act was at least a denial of autonomy. When
we’re alive, we’re autonomous and move through
the world under our own will. When we die, we no
longer move bodily through the world. According to
conventional thinking, that loss of our autonomy
also means the loss of our human rights. But can’t
we still decide, while living, what to do with our artifacts when we’re gone? After all, we have designed
institutions to ensure that the transaction of
bequeathing money or objects happens through
defined legal processes; it’s straightforward to see
if bank account balances have gotten bigger or
whose name ends up on a property deed. These are
things that we transfer to the living.
With data about us after we die, this gets complicated. These data are “us,” which is different from
our possessions. What if we don’t want to appear
posthumously in text, image, or voice? Kuyda reconstructed her friend through texts he exchanged with
her and others. There is no way to stop someone
from deploying these kinds of data once we’re dead.
But what would Mazurenko have wanted?
Can a digital immortal be deleted by someone else?
THE POSSIBILITY OF creating bots based on specific persons has tremendous implications for autonomy, consent, and privacy. If we do not create standards that give the people who created the original data the right to say yes or no, we have taken away their choice.
If technology like the Microsoft chatbot patent is
executed, it also has implications for human dignity.
The idea of someone “bringing us back” might seem
acceptable if we think about data as merely “by-products” of people. But if data are more than what we
leave behind, if they are our identities, then we should
pause before we allow the digital reproduction of
people. Like Microsoft’s patent, Google’s attempts to
clone someone’s “mental attributes” (also patented),
Soul Machines’ “digital twins,” or startup Uneeq’s
marketing of “digital humans” to “re-create human
interaction at infinite scale” should give us pause.
Part of what drives people to consider digital
immortality is to give future generations the ability
to interact with them. To preserve us forever, however, we need to trust the data collectors and the
service providers helping us achieve that goal. We
need to trust them to safeguard those data and to
faithfully represent us going forward.
However, we can also imagine a situation where
malicious actors corrupt the data by inserting inauthentic data about a person, driving outcomes that
are different from what the person intended. There’s
a risk that our digital immortal selves will deviate
significantly from who we were, but how would we
(or anyone else) really know?
Could a digital immortal be subject to degrading treatment or interact in ways that don’t reflect
how the person behaved in real life? We don’t yet
have a human rights language to describe the
wrong this kind of transgression might be. We
don’t know if a digital version of a person is
“human.” If we treat these immortal versions of
ourselves as part of who a living person is, we
might think about extending the same protections
from ill treatment, torture, and degradation that a
living person has. But if we treat data as detritus,
is a digital person also a by-product?
There might also be technical problems with the
digital afterlife. Algorithms and computing protocols are not static, and changes could make the rendering of some kinds of data illegible. Social scientist
Carl Öhman sees the continued integrity of a digital
afterlife as largely a software concern. Because software updates can change the way data are analyzed,
the predictions generated by the AI programs that
undergird digital immortality can also change. We
may not be able to anticipate all of these different
kinds of changes when we consent.
In the 4evru scenario, the things that were revealed
about the father actually made him odious to his
family. Should digital selves and persons be curated,
and, if so, by whom? In life, we govern ourselves. In
death, data about our activities and thoughts will be
archived and ranked based not on our personal judgment but by whatever priorities are set by digital
developers. Data about us, even embarrassing data,
will be out of our immediate grasp. We might have
created the original data, but data collectors have the
algorithms to assemble and analyze those data. As
they sort through the messiness of reality, algorithms
carry the values and goals of their authors, which may
be very different from our own.
Technology itself may get in the way of digital
immortality. In the future, data format changes that
allow us to save data more efficiently may lead to
the loss of digital personas in the transfer from one
format to another. Data might be lost in the archive,
creating incomplete digital immortals. Or data might
be copied, creating the possibility of digital clones.
Digital immortals that draw their data from multiple
sources may create more realistic versions of people,
but they are also more vulnerable to possible errors,
hacks, and other problems.
A digital immortal may be programmed such
that it cannot take on new information easily. Real
people, however, do have opportunities to learn
and adjust to new information. Microsoft’s patent
does specify that other data would be consulted
and thus opens the way for current events to infiltrate. This could be an improvement in that the bot won’t increasingly sound like an irrelevant relic or a party trick. However, the more data the bot takes in, the more it may drift away from the lived person, toward a version that risks looking inauthentic. What would Abraham Lincoln say about contemporary race politics? Does it matter?

Players in the digital afterlife industry
Dozens of companies exist to help people manage their social media accounts and other digital assets, to communicate with loved ones after death, and to memorialize the deceased. Here is a small sampling of the services on offer.
Bcelebrated: You can create your own autobiographical website, to which your “activators” can add funeral and memorial information when you die. Automated emails alert your contacts and invite them to the site.
Directive Communication Systems: DCS organizes all of your online accounts (including “confidential accounts”) and executes your directives to shut them down, transfer them, or memorialize them.
GhostMemo: If you fail to reply to periodic “proof of life” emails from the company, it sends out your prewritten final messages.
HereAfter: You use an app to record stories about your life; when you’re gone, your loved ones can ask questions and hear the responses in your own voice.
Lifenaut: You provide a DNA sample and fill out a “mindfile” with biographical photos, videos, and documents in case it someday becomes possible to create a “conscious analogue” of you.
MyWishes: In addition to helping you make both a traditional will and a digital estate plan, this site lets you schedule messages to loved ones on dates after your death so you can send birthday greetings and the like.
My Wonderful Life: You can plan your own funeral, including the eulogists, music, and food, as well as leave letters for loved ones.

A range of products and possibilities blur the boundaries of life and death.
The dead can now be more present in the lives of the living than ever before.
And how should we think about this digital
immortal? Is digital Abe a “person” who deserves
human rights protections? Should we protect this
person’s freedom of expression, or should we shut
it down if their expression (based on the actual
person who lived in a different time) is now considered hate speech? What does it mean to protect the
right to life of a digital immortal? Can a digital
immortal be deleted?
Life after death has been a question and a fascination since the dawn of civilization. Humans have
grappled with their fears of death through religious
beliefs, burial rites, spiritual movements, artistic
imaginings, and technological efforts.
Today, our data exist independent of us. Datafication has enabled us to live on, beyond our own
awareness and mortality. Without putting in place
human rights to prevent the unauthorized uses of
our posthumous selves, we risk becoming digital
immortals that others have created.
How Generative AI Helped Me Imagine a Better Robot
It didn’t give me schematics, but it did boost my creativity
By Didem Gürdür Broo
This year, 2023, will probably be remembered as the
year of generative AI. It is still an open question
whether generative AI will change our lives for the
better. One thing is certain, though: New artificial-intelligence tools are being unveiled rapidly and will
continue for some time to come. And engineers have
much to gain from experimenting with them and
incorporating them into their design process. • That’s already happening in certain spheres. For Aston Martin’s DBR22 concept car,
designers relied on AI that’s integrated into Divergent Technologies’
digital 3D software to optimize the shape and layout of the rear subframe components. The rear subframe has an organic, skeletal look,
enabled by the AI exploration of forms. The actual components were
produced through additive manufacturing. Aston Martin says that
this method substantially reduced the weight of the components
while maintaining their rigidity. The company plans to use this same
design and manufacturing process in upcoming low-volume vehicle
models. • Other examples of AI-aided design can be found in NASA’s
space hardware, including planetary instruments, space telescopes,
and the Mars Sample Return mission. NASA engineer Ryan
McClelland says that the new AI-generated designs may “look somewhat alien and weird,” but they tolerate higher structural loads while
weighing less than conventional components do. Also, they take a
fraction of the time to design compared to traditional components.
McClelland calls these new designs “evolved structures.” The phrase
refers to how the AI software iterates through design mutations and
converges on high-performing designs.
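To make the idea of “iterating through design mutations” concrete, here is a deliberately toy Python sketch of the general mutate-and-select loop behind evolutionary structural design. It is not the commercial software NASA uses; the “design” here is just a list of strut thicknesses, and the mass and load constraints are invented purely for illustration.

```python
import random

# Toy "evolved structure": a design is a list of strut thicknesses (mm).
# Goal: minimize total mass while keeping every strut above a load-bearing minimum.
MIN_THICKNESS = 2.0          # assumed load requirement (illustrative only)
random.seed(42)

def mass(design):
    return sum(design)       # stand-in for a real structural simulation

def feasible(design):
    return all(t >= MIN_THICKNESS for t in design)

def mutate(design, step=0.3):
    # Propose a small random change to every strut thickness.
    return [max(0.1, t + random.uniform(-step, step)) for t in design]

design = [8.0] * 12          # start deliberately overbuilt
for generation in range(2000):
    candidate = mutate(design)
    # Select: keep the mutant only if it is lighter and still carries the load.
    if feasible(candidate) and mass(candidate) < mass(design):
        design = candidate

print(f"final mass: {mass(design):.1f} (ideal would be {MIN_THICKNESS * 12:.1f})")
```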
In the author’s early attempts to generate images of a jellyfish robot, she used this prompt: underwater, self-reliant, mini robots, coral reef, ecosystem, hyper realistic.
By further refining her prompts, the author got better results. For [2], she used this prompt: jellyfish robot, plastic, white background. [3] resulted from this prompt: futuristic jellyfish robot, high detail, living under water, self-sufficient, fast, nature inspired.
As the author added more details to her prompts, she got images that aligned better with her vision of a jellyfish robot. [4], [5], and [6] resulted from this prompt: A futuristic electrical jellyfish robot designed to be self-sufficient and living under the sea, water or elastic glass-like material, shape shifter, technical design, perspective industrial design, copic style, cinematic high detail, ultra-detailed, moody grading, white background.
IMAGES: DIDEM GÜRDÜR BROO/MIDJOURNEY
To generate an image of a humanoid robot [1], the author started with this simple prompt: Humanoid robot, white background. She then tried to generate an image of a humanoid with cameras for eyes [2] using this prompt: Humanoid robot that has camera eyes, technical design, add text, full body perspective, strong arms, V-shaped body, cinematic high detail, light background.
In these kinds of engineering environments, co-designing with generative AI, supported by high-quality, structured data and well-studied parameters, can clearly lead to more creative and more effective new designs. I decided to give it a try.
Last January, I began experimenting with
generative AI as part of my work on
cyber-physical systems. Such systems
cover a wide range of applications, including smart homes and autonomous vehicles.
They rely on the integration of physical and computational components, usually with feedback loops
between the components. To develop a cyber-physical system, designers and engineers must work
collaboratively and think creatively. It’s a time-consuming process, and I wondered if AI generators
could help expand the range of design options, enable
more efficient iteration cycles, or facilitate collaboration across different disciplines.
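As a minimal picture of the feedback loop mentioned above, the Python sketch below runs a sense-compute-actuate cycle for a hypothetical smart-home thermostat. The plant model and controller gain are invented for illustration and simply stand in for the physical and computational halves of a cyber-physical system.

```python
# Minimal sense-compute-actuate loop for a hypothetical smart-home thermostat.
# The "physical" side is a crude thermal model; the "cyber" side is a P controller.
SETPOINT = 21.0      # desired room temperature, deg C (illustrative)
KP = 0.5             # proportional gain (illustrative)

temperature = 15.0   # physical state of the room
for step in range(60):
    # Sense: read the physical state (a real system would read a sensor here).
    measured = temperature
    # Compute: decide how much heating power to apply, clamped to 0..1.
    error = SETPOINT - measured
    heater = max(0.0, min(1.0, KP * error))
    # Actuate: heater output and heat loss change the physical state (feedback).
    temperature += 0.8 * heater - 0.05 * (temperature - 10.0)

print(f"temperature after 60 steps: {temperature:.1f} C")
```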
When I began my experiments with generative
AI, I wasn’t looking for nuts-and-bolts guidance
on the design. Rather, I wanted inspiration. Initially,
I tried text generators and music generators just
for fun, but I eventually found image generators to
be the best fit. An image generator is a type of
machine-learning algorithm that can create images
based on a set of input parameters, or prompts. I
tested a number of platforms and worked to understand how to form good prompts (that is, the input
text that generators use to produce images) with
each platform. Among the platforms I tried were
Craiyon, DALL-E 2, Midjourney, NightCafé, and
Stable Diffusion. I found the combination of
Midjourney and Stable Diffusion to be the best for
my purposes.
Midjourney uses a proprietary machine-learning model, while Stable Diffusion makes its
source code available for free. Midjourney can be
used only with an Internet connection and offers
different subscription plans. You can download and
run Stable Diffusion on your computer and use it
for free, or you can pay a nominal fee to use it
online. I use Stable Diffusion on my local machine
and have a subscription to Midjourney.
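For readers who want to try the local route described above, here is a minimal sketch using the open-source Hugging Face diffusers library to run a public Stable Diffusion checkpoint on a GPU. The model ID and settings shown are one common configuration, not necessarily the author’s exact setup; the prompt is borrowed from one of her jellyfish experiments.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (downloads the weights on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU; use "cpu" (slowly) otherwise

prompt = ("futuristic jellyfish robot, high detail, living under water, "
          "self-sufficient, fast, nature inspired")

# Generate one image; more steps and a higher guidance scale follow the prompt
# more literally, at the cost of time and of the "surprise" the author valued.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("jellyfish_robot.png")
```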
In my first experiment with generative AI, I used
the image generators to co-design a self-reliant jellyfish robot. We plan to build such a robot in my
lab at Uppsala University, in Sweden. Our group
specializes in cyber-physical systems inspired by
nature. We envision the jellyfish robots collecting
microplastics from the ocean and acting as part of
the marine ecosystem.
In our lab, we typically design cyber-physical
systems through an iterative process that includes
brainstorming, sketching, computer modeling, simulation, prototype building, and testing. We start by
meeting as a team to come up with initial concepts
based on the system’s intended purpose and constraints. Then we create rough sketches and basic
CAD models to visualize different options. The most
promising designs are simulated to analyze dynamics and refine the mechanics. We then build simplified prototypes for evaluation before constructing
more polished versions. Extensive testing allows us
to improve the system’s physical features and control system. The process is collaborative but relies heavily on the designers’ past experiences.

The author used the same prompt to generate these four images of an octopus-like robot: Futuristic electrical octopus robot, technical design, perspective industrial design, copic style, cinematic high detail, moody grading, white background. The two bottom images were created several months after the top images. They’re slightly less crude looking but still do not resemble an octopus.
I wanted to see if using the AI image generators
could open up possibilities we had yet to imagine. I
started by trying various prompts, from vague
one-sentence descriptions to long, detailed explanations. At the beginning, I didn’t know how to ask or
even what to ask because I wasn’t familiar with the
tool and its abilities. Understandably, those initial
attempts were unsuccessful because the keywords I
chose weren’t specific enough, and I didn’t give any
information about the style, background, or detailed
requirements.
As I tried more precise prompts, the designs
started to look more in sync with my vision. I then
played with different textures and materials, until I
was happy with several of the designs.
It was exciting to see the results of my initial
prompts in just a few minutes. But it took hours to
make changes, reiterate the concepts, try new
prompts, and combine the successful elements into
a finished design.
Co-designing with AI was an illuminating experience. A prompt can cover many attributes, including
the subject, medium, environment, color, and even
mood. A good prompt, I learned, needed to be specific
because I wanted the design to serve a particular
purpose. On the other hand, I wanted to be surprised
by the results. I discovered that I needed to strike a
balance between what I knew and wanted, and what
I didn’t know or couldn’t imagine but might want. I
learned that anything that isn’t specified in the
prompt might be randomly assigned to the image by
the AI platform. And so if you want to be surprised
about an attribute, then you can leave it unsaid. But
if you want something specific to be included in the
result, then you have to include it in the prompt, and
you must be clear about any context or details that
are important to you. You can also include instructions about the composition of the image, which helps
a lot if you’re designing an engineering product.
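One way to apply that lesson is to treat a prompt as a set of attribute slots: specify the ones you care about and leave the rest blank so the generator can surprise you. The small Python helper below merely illustrates that structure; the attribute names and values echo the author’s keywords and are not a required schema for any particular platform.

```python
def build_prompt(**attributes):
    """Join the attributes you chose to specify; unspecified ones are left
    to the image generator, which will fill them in unpredictably."""
    return ", ".join(str(v) for v in attributes.values() if v)

# Specify subject, material, style, and composition; leave color and mood unset
# so the generator keeps some artistic freedom there.
prompt = build_prompt(
    subject="futuristic electrical jellyfish robot",
    purpose="self-sufficient, living under the sea",
    material="water or elastic glass-like material",
    style="technical design, perspective industrial design, copic style",
    detail="cinematic high detail, ultra-detailed",
    background="white background",
    color=None,
    mood=None,
)
print(prompt)
```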
As part of my investigations, I tried to see
how much I could control the co-creation
process. Sometimes it worked, but most
of the time it failed.
The text that appears on the humanoid
robot design on the top right on page 46 isn’t actual
words; it’s just letters and symbols that the image
generator produced as part of the technical drawing
aesthetic. When I prompted the AI for “technical
design,” it frequently included this pseudo language,
likely because the training data contained many
examples of technical drawings and blueprints with
similar-looking text. The letters are just visual elements that the algorithm associates with that style
of illustration. So the AI is following patterns it recognized in the data, even though the text itself is
nonsensical. This is an innocuous example of how
these generators adopt quirks or biases from their
training without any true understanding.
When I tried to change the jellyfish to an octopus,
it failed miserably—which was surprising because,
with apologies to any marine biologists reading this,
to an engineer, a jellyfish and an octopus look quite
similar. It’s a mystery why the generator produced
good results for jellyfish but rigid, alien-like, and anatomically incorrect designs for octopuses. Again, I
assume that this is related to the training datasets.
After producing several promising jellyfish robot
designs using AI image generators, I reviewed them
with my team to determine if any aspects could
inform the development of real prototypes. We discussed which aesthetic and functional elements
might translate well into physical models. For example, the curved, umbrella-shaped tops in many
images could inspire material selection for the
robot’s protective outer casing. The flowing tentacles could provide design cues for implementing the
flexible manipulators that would interact with the
marine environment. Seeing the different materials
and compositions in the AI-generated images and
the abstract, artistic style encouraged us toward
more whimsical and creative thinking about the robot’s overall form and locomotion.

The author tried creating images of information flow in a smart city, based on this prompt: Figure that shows the complexity of communication between different components on a smart city, white background, clean design.

NASA research engineer Ryan McClelland designed these 3D-printed components using commercial AI software. He calls them “evolved structures.” (Photo: Henry Dennis/NASA)
While we ultimately decided not to copy any of
the designs directly, the organic shapes in the AI art
sparked useful ideation and further research and
exploration. That’s an important outcome because
as any engineering designer knows, it’s tempting to
start to implement things before you’ve done enough
exploration. Even fanciful or impractical computer-generated concepts can benefit early-stage engineering design, by serving as rough prototypes, for
instance. Tim Brown, CEO of the design firm Ideo,
has noted that such prototypes “slow us down to
speed us up. By taking the time to prototype our
ideas, we avoid costly mistakes such as becoming
too complex too early and sticking with a weak idea
for too long.”
On another occasion, I used image generators to try to illustrate the complexity of
communication in a smart city.
Normally, I would start to create such
diagrams on a whiteboard and then use
drawing software, such as Microsoft Visio, Adobe
Illustrator, or Adobe Photoshop, to re-create the
drawing. I might look for existing libraries that contain sketches of the components I want to include—
vehicles, buildings, traffic cameras, city infrastructure,
sensors, databases. Then I would add arrows to show
potential connections and data flows between these
elements. For example, in a smart-city illustration,
the arrows could show how traffic cameras send real-time data to the cloud and calculate parameters
related to congestion before sending them to connected cars to optimize routing. Developing these
diagrams requires carefully considering the different
systems at play and the information that needs to be
conveyed. It’s an intentional process focused on clear
communication rather than one in which you can
freely explore different visual styles.
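For comparison with that manual diagramming workflow, the same kind of data-flow figure can be scripted. The sketch below uses the Python graphviz package (and assumes the Graphviz binaries are installed); the components and labels echo the smart-city example above and are purely illustrative.

```python
from graphviz import Digraph

# A small, hand-scripted version of the smart-city data-flow diagram described
# above: components as nodes, data flows as labeled arrows.
g = Digraph("smart_city", format="png")
g.attr(rankdir="LR")

for component in ["Traffic camera", "Cloud analytics", "Connected car",
                  "Roadside sensor", "City database"]:
    g.node(component, shape="box")

g.edge("Traffic camera", "Cloud analytics", label="real-time video")
g.edge("Roadside sensor", "Cloud analytics", label="congestion data")
g.edge("Cloud analytics", "Connected car", label="optimized routing")
g.edge("Cloud analytics", "City database", label="archived metrics")

g.render("smart_city_dataflow", cleanup=True)  # writes smart_city_dataflow.png
```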
I found that using an AI image generator provided more creative freedom than the drawing software does but didn’t accurately depict the complex
interconnections in a smart city. The results represent many of the individual elements effectively, but
they are unsuccessful in showing information flow
and interaction. The image generator was unable to
understand the context or represent connections.
After using image generators for several months
and pushing them to their limits, I concluded that
they can be useful for exploration, inspiration, and
producing rapid illustrations to share with my colleagues in brainstorming sessions. Even when the
images themselves weren’t realistic or feasible
designs, they prompted us to imagine new directions
we might not have otherwise considered. Even the
images that didn’t accurately convey information flows still served a useful purpose in driving productive brainstorming.

TEXT-TO-IMAGE AI PLATFORMS AND THEIR SUBSCRIPTION PLANS
Craiyon: Free to use on website. Monthly and yearly plans start at US $6/month.
DALL-E 2: Free to use on website with account. When DALL-E is built into the user’s API, images are priced by resolution, starting at $0.16 for 256×256 resolution.
Midjourney: Requires a Discord account. Monthly and yearly plans start at $10/month.
NightCafé: Via the website, users access the company’s image generator as well as DALL-E 2 and Stable Diffusion. Free to use with account. Users buy credits by the month or in packs.
Stable Diffusion: Free to use on website. Free download of the model from GitHub. Monthly and yearly plans start at $9.99/month.
I also learned that the process of co-creating with generative
AI requires some perseverance and dedication. While it is
rewarding to obtain good results quickly, these tools become
difficult to manage if you have a specific agenda and seek a specific outcome. But human users have little control over AI-generated iterations, and the results are unpredictable. Of course,
you can continue to iterate in hopes that you’ll get a better result.
But at present, it’s nearly impossible to control where the iterations will end up. I wouldn’t say that the co-creation process is
purely led by humans—or not this human, at any rate.
I noticed how my own thinking, the way I communicate my
ideas, and even my perspective on the results changed throughout the process. Many times, I began the design process with a
particular feature in mind—for example, a specific background
or material. After some iterations, I found myself instead choosing designs based on visual features and materials that I had not
specified in my first prompts. In some instances, my specific
prompts did not work; instead, I had to use parameters that
increased the artistic freedom of the AI and decreased the importance of other specifications. So, the process not only allowed
me to change the outcome of the design process, but it also
allowed the AI to change the design and, perhaps, my thinking.
The image generators that I used have been updated many
times since I began experimenting, and I’ve found that the
newer versions have made the results more predictable. While
predictability is a negative if your main purpose is to see unconventional design concepts, I can understand the need for more
control when working with AI. I think in the future we will see
tools that will perform quite predictably within well-defined
constraints. More importantly, I expect to see image generators
integrated with many engineering tools, and to see people using
the data generated with these tools for training purposes.
Of course, the use of image generators raises serious ethical
issues. They risk amplifying demographic and other biases in
training data. Generated content can spread misinformation
and violate privacy and intellectual property rights. There are
many legitimate concerns about the impacts of AI generators
on artists’ and writers’ livelihoods. Clearly, there is a need for
transparency, oversight, and accountability regarding data
sourcing, content generation, and downstream usage. I believe
anyone who chooses to use generative AI must take such concerns seriously and use the generators ethically.
If we can ensure that generative AI is being used ethically,
then I believe these tools have much to offer engineers. Co-creation with image generators can help us to explore the design
of future systems. These tools can shift our mindsets and move
us out of our comfort zones—it’s a way of creating a little bit
of chaos before the rigors of engineering design impose order.
By leveraging the power of AI, we engineers can start to think
differently, see connections more clearly, consider future
effects, and design innovative and sustainable solutions that
can improve the lives of people around the world.
Faculty Position in Department of Electrical,
Computer, and Systems Engineering
Case Western Reserve University, Cleveland, Ohio
The Department of Electrical, Computer, and Systems
Engineering (ECSE) at Case Western Reserve University
(CWRU) invites applications for one tenure-track faculty
position in Electrical and Computer Engineering programs
at the Assistant or Associate Professor level. Appointments
will be considered for starting dates as early as July 1, 2024.
Candidates must have a Ph.D. degree in Electrical Engineering,
Computer Engineering, or a closely related field.
The faculty search is focused on the broader area of energy-efficient computing paradigms at the interface of electrical
and computer engineering. The department is particularly
interested in candidates whose focus is on neuromorphic
systems engineering, computing-in-memory, brain/bio-inspired
computing, as well as energy-efficient analog/mixed-signal/
VLSI implementation of engineered systems that emulate
the function, resiliency, and efficiency of biological nervous
systems. The department is also interested in candidates with
expertise in efficient hardware embodiment (e.g., CMOS chip,
field-programmable gate array, graphics processing unit, etc.)
of accelerators for artificial intelligence/machine learning/
neuromorphic algorithms applicable to distributed and remote
robotic systems.
Additional information about the position, department, and
application package is available at
https://engineering.case.edu/ecse/employment.
CWRU provides reasonable accommodations to applicants with
disabilities. Applicants requiring a reasonable accommodation
for any part of the application and hiring process should call
216-368-3066 or email equity@case.edu.
Faculty Positions in Computer Science
The Department of Computer Science at the National University of Singapore (NUS) invites applications for tenure-track and
educator-track positions in all areas of computer science. Candidates for Assistant Professor positions on the tenure track should
be early in their academic careers and yet demonstrate outstanding research potential, and a strong commitment to teaching.
Candidates for senior positions should have an established record of outstanding, recognized research achievements, and thought
leadership in his/her chosen area of computer science.
For Senior Lecturer and Associate Professor on the educator-track, teaching experience or relevant industry experience will be
preferred. Besides relevant background and experience, we are also looking for someone with a passion for imparting the latest
knowledge in computing to students in our programs.
The Department enjoys ample research funding, moderate teaching loads, excellent facilities, and extensive international collaborations.
We have a full range of faculty covering all major research areas in computer science and boasts a thriving PhD program that
attracts the brightest students from the region and beyond. More information is available at www.comp.nus.edu.sg/careers.
NUS is an equal opportunity employer that offers highly competitive salaries, and is situated in Singapore, an English-speaking
cosmopolitan city that is a melting pot of many cultures, both the east and the west. Singapore offers high-quality education and
healthcare at all levels, as well as very low tax rates.
Application Details: Submit the following documents (in a single PDF) online via: https://faces.comp.nus.edu.sg
• A cover letter that indicates the position applied for and the main research interests
• Curriculum Vitae
• A teaching statement
• A research statement
• A diversity statement (optional)
• Contact information of 3 referees
To ensure maximal consideration, please submit your application by 15 December 2023.
Job requirement: A PhD degree in Computer Science or related areas
Tenure Track Faculty - Computer Engineering
The Department of Computer Engineering at the Rochester Institute of Technology invites applications for a tenure-track faculty position at the Assistant Professor level starting in the 2024-2025 academic year. Applicants must have a Ph.D. degree in Computer Engineering or closely related discipline by the time of hire.
Candidates are expected to have ability to strengthen computer engineering core competencies and expertise in the areas of AI/ML Systems and Applications, High Performance Architectures, Digital and Embedded Systems, System Security, Edge and Fog Computing and emerging computing paradigms such as Neuromorphic or Quantum Computing or other closely related research areas.
The compensation range for this 9-month position is a salary of $100K to $120K per year.
For more information and to apply, visit https://apptrkr.com/4634819 (Position 8216BR). Submit cover letter, CV, statements on teaching, research and diversity, and three references. Review of applications will begin on December 1, 2023 and will continue until the position is filled. For questions, contact ce-facultysearch@rit.edu.

The Department of Electrical and Computer Engineering (ECE), University of Minnesota Twin Cities (https://ece.umn.edu/) invites applications for a faculty position in the area of power electronics. Both tenured and tenure-track candidates with knowledge and expertise in hardware design, modeling, and contemporary control and optimization paradigms for power electronics are of particular interest. This is a tenure-track faculty position, hiring at the Assistant or Associate Professor levels.
ECE is committed to fostering a culturally and academically diverse community; candidates who will actively contribute to this commitment – both in identity and professional vision – are particularly encouraged to apply. An earned doctorate in an appropriate discipline is required by the start of the appointment. Rank and salary will be commensurate with qualifications and experience. Applications will be considered as they are received, and will be accepted until the position is filled, but for full consideration, please apply by the priority deadline of December 15, 2023.
To be considered for a position, candidates must apply online. Application instructions and additional information can be found at https://z.umn.edu/ecefacultyjobs
The University of Minnesota is an equal opportunity educator and employer.
HISTORY IN AN OBJECT
White Noise, Inc.
BY ALLISON MARSH
Herman Miller’s Acoustic Area Conditioner was a white-noise machine for corporate workspaces. (Photo: The Henry Ford)
In 1964, the office cubicle was
born. For that you can thank
Robert Propst, a designer at
the Herman Miller furniture
company. Four years earlier,
he had proposed a radical
alternative to the office
bullpen: the Action Office. He
envisioned it as a holistic and
integrated system designed
to increase worker efficiency
while providing an ergonomic
workspace. But by the early
1970s, Propst’s vision was
devolving into soulless cubicle
farms despised by workers
everywhere. Chief among the
complaints: noisy coworkers
and a lack of privacy.
Accordingly, Herman Miller
introduced the Acoustic Area
Conditioner in 1975, a stylish sound-masking globe that
perched atop a cubicle wall.
The AAC, known inside the
company as the maskitball,
emitted high frequencies from
the top of the globe and mid- and low-level frequencies from
the equator. Office workers
could tune the device within
preset limits.
Although the AAC was
considered effective, production
ceased in the early 1980s.
By then the Sony Walkman
had debuted, offering cubicle
dwellers a more melodic way to
tune out annoying colleagues.
FOR MORE ON THE MASKITBALL, see spectrum.ieee.org/pastforward-nov2023