MIT Technology Review
Volume 126, Number 5
September/October 2023

The ethics issue
Experimental drugs: Who should get them?

Meet the 2023 Innovators Under 35
Eric Schmidt on transforming science
When AI goes to war
ClimateTech: The Time Is Now
Innovations for a sustainable future

ClimateTech convenes the leaders funding, creating, and deploying the technologies to lead the transition to a green economy.

Join us on campus, October 4-5, 2023
ClimateTechMIT.com
From the editor
In his essay introducing this year’s class of Innovators Under
35, Andrew Ng argues that AI is a general-purpose technology,
much like electricity, that will be built into everything else (page
74). Indeed, it’s true, and it’s already happening.
AI is rapidly becoming a tool that powers all sorts of other
tools, a technological underpinning for a range of applications
and devices. It can helpfully suggest a paella recipe in a web app.
It can predict a protein structure from an amino acid sequence.
It can paint. It can drive a car. It can relentlessly replicate itself,
hijack the electrical grid for unlimited processing power, and
wipe out all life on Earth.
Okay, so that last one is just a nightmare scenario courtesy
of the AI pioneer Geoffrey Hinton, who posed it at an EmTech
Digital event of ours earlier this year. But it speaks to another
of Ng’s points, and to the theme of this issue. Ng challenges the
innovators to take responsibility for their work; he writes, “As we
focus on AI as a driver of valuable innovation throughout society,
social responsibility is more important than ever.”
In many ways, the young innovators we celebrate in this issue
exemplify the ways we can build ethical thinking into technology development. That is certainly true for our Innovator of the
Year, Sharon Li, who is working to make AI applications safer by
causing them to abstain from acting when faced with something
they have not been trained on (page 76). This could help prevent
the AIs we build from taking all sorts of unexpected turns, and
causing untold harms.
This issue revolves around questions of ethics and how they can
be addressed, understood, or intermediated through technology.
Should relatively affluent Westerners have stopped lending
money to small entrepreneurs in the developing world because
the lending platform compensates its top executives so lavishly?
How much control should we have over what we give away?
These are just a few of the thorny questions Mara Kardas-Nelson
explores about a lenders’ revolt against the microfinance nonprofit Kiva (page 38).
On page 24, Jessica Hamzelou interrogates the policies on
access to experimental medical treatments that are sometimes a
last resort for desperate patients and their families. Who should
be able to use these unproven treatments, and what proofs of
efficacy and (more important) safety should be required?
In another life-and-death question, Arthur Holland Michel
takes on computer-assisted warfare (page 46). How much should
we base our lethal decision-making on analysis performed by
artificial intelligence? How can we build those AI systems so
that we are more likely to treat them as advisors than deciders?
Rebecca Ackermann takes a look at the long evolution of the
open-source movement (page 62) and the ways it has redefined
freedom—free as in beer, free as in speech, free as in puppies—
again and again. If open source is to be something we all benefit from, and indeed that many even profit from, how should
we think about its upkeep and advancement? Who should be
responsible for it?
And on a more meta level, Gregory Epstein, a humanist
chaplain at MIT and the president of Harvard’s organization of
chaplains, who focuses on the intersection of technology and
ethics, takes a deep look at All Tech Is Human, a nonprofit that
promotes ethics and responsibility in tech (page 32). He wonders how its relationship with the technology industry should be
defined as it grows and takes funding from giant corporations
and multibillionaires. How can a group dedicated to openness
and transparency, he asks, coexist with members and even leaders committed to tech secrecy?
There is a lot more as well. I hope this issue makes you think,
and gives you lots of ideas about the future.
Thanks for reading,
Mat Honan
Mat Honan is editor in chief of MIT Technology Review.
Syngenta and Infosys: 20 years of relentless
collaboration for shared success.
www.technologyreview.com/thecloudhub
Contents
“We might not have the opportunity to wait
to take one of those other drugs that might be made
available years down the line.” –p. 24
24 The right to try
Cover story: Desperate people will often want to
try experimental, unproven treatments. How can
we ensure they’re not exploited or put at risk?
BY JESSICA HAMZELOU
32 Only human
Tech culture is increasingly oriented around
moral and ethical messages: So why not a tech
ethics congregation?
BY GREG M. EPSTEIN
Front
2
Letter from the editor
THE DOWNLOAD
9
Eric Schmidt on how AI will
transform science; better
weather predictions; fawning
over the Frequency Allocation Chart; extracting climate
records from Antarctic ice
cores; and saving Venice from
sinking. Plus, job of the future:
chief heat officer
EXPLAINED
18 Everything you need
to know about the wild world
of alternative jet fuels
How french fries, trash,
and sunlight could power
your future flights.
By Casey Crownhart
PROFILE
20 Valley of the misfit tech workers
Xiaowei Wang and Collective
Action School seek to remedy
the moral blindness of Big
Tech. By Patrick Sisson
38 What happened to Kiva?
Hundreds of lenders are protesting changes at the
microfinance funder. Is their strike really about Kiva,
or about how much control we should expect over
international aid?
BY MARA KARDAS-NELSON
46 AI-assisted warfare
If a machine tells you when to pull the trigger,
who is ultimately responsible?
BY ARTHUR HOLLAND MICHEL
Back
54 The greatest slideshow on Earth
From supersize slideshows to
Steve Jobs’s Apple keynote,
corporate presentations have
always pushed technology
forward. By Claire L. Evans
62 Open source at 40
Free and open-source software are now foundational to
modern code, but much about
them is still in flux.
By Rebecca Ackermann
68 Tiny faux organs could finally
crack the mystery of
menstruation
Organoids are helping
researchers explore one of
the last frontiers of human
physiology. By Saima Sidik
74 35 Innovators Under 35
Tips for aspiring innovators
on trying, failing, and the
future of AI. By Andrew Ng
76 Innovator of the Year: Sharon Li
By Melissa Heikkilä
78 Online fraud, hacks,
and scams, oh my
Three books that explore
how culture drives foul play
on the internet.
By Rebecca Ackermann
FIELD NOTES
84 Servers that work from home
Wasted heat from computers
is transformed into free hot
water for housing.
By Luigi Avantaggiato
ARCHIVE
88 A cell that does it all
For 25 years, embryonic stem
cells have been promising and
controversial in equal measure.
How far have they really come?
COVER ILLUSTRATION BY SELMAN DESIGN
Discover what’s coming next in technology.

Subscribe now for access to:
• In-depth reporting on AI, climate change, biotech & more
• Trusted insights you can’t find anywhere else
• Science & technology news shaping the future
• 6 print and digital issues a year
• Discounts on MIT Technology Review events

Scan this code to subscribe or learn more at technologyreview.com/subscribe
Masthead

Editorial
Editor in chief: Mat Honan
Executive editor, operations: Amy Nordrum
Executive editor, newsroom: Niall Firth
Editorial director, print: Allison Arieff
Editorial director, audio and live journalism: Jennifer Strong
Editor at large: David Rotman
Science editor: Mary Beth Griggs
News editor: Charlotte Jee
Features and investigations editor: Amanda Silverman
Managing editor: Timothy Maher
Commissioning editor: Rachel Courtland
Senior editor, MIT News: Alice Dragoon
Senior editor, biomedicine: Antonio Regalado
Senior editor, climate and energy: James Temple
Senior editor, AI: Will Douglas Heaven
Podcast producer: Anthony Green
Senior reporters: Eileen Guo (features and investigations), Jessica Hamzelou (biomedicine), Melissa Heikkilä (AI), Tate Ryan-Mosley (tech policy)
Reporters: Casey Crownhart (climate and energy), Rhiannon Williams (news), Zeyi Yang (China and East Asia)
Copy chief: Linda Lowenthal
Senior audience engagement editor: Abby Ivory-Ganja
Audience engagement editor: Juliet Beauchamp
Creative director, print: Eric Mongeon
Digital visuals editor: Stephanie Arnett

Corporate
Chief executive officer and publisher: Elizabeth Bramson-Boudreau
Office manager: Linda Cardinal

Finance and operations
Chief financial officer, head of operations: Enejda Xheblati
General ledger manager: Olivia Male
Accountant: Anduela Tabaku
Human resources director: Alyssa Rousseau
Manager of information technology: Colby Wheeler

Consumer marketing
Vice president, marketing and consumer revenue: Alison Papalia
Director of acquisition marketing: Taylor Puskaric
Director of retention marketing
Director of event marketing: Nina Mehta
Email marketing manager: Tuong-Chau Cai
Data analytics manager: Alliya Samhat
Circulation and print production manager: Tim Borton

Technology
Chief technology officer: Drake Martinet
Vice president, product: Mariya Sitnova
Associate product manager: Allison Chase
Senior software engineer: Molly Frey
Digital brand designer: Vichhika Tep

Events
Senior vice president, events and strategic partnerships: Amy Lammers
Director of event content and experiences: Brian Bryson
Director of events: Nicole Silva
Senior event content producer: Erin Underwood
Event operations manager: Elana Wilner
Manager of strategic partnerships: Madeleine Frasca Williams
Event coordinator: Bo Richardson
Head of international and custom events: Marcy Rizzo

Advertising sales
Senior vice president, sales and brand partnerships: Andrew Hendler, andrew.hendler@technologyreview.com, 201-993-8794
Associate vice president, integrated marketing and brand: Caitlin Bergmann, caitlin.bergmann@technologyreview.com
Executive director, sales and brand partnerships: Christopher Doumas
Executive director, brand partnerships: Marii Sebahar, marii@technologyreview.com, 415-416-9140
Executive director, brand partnerships: Kristin Ingram, kristin.ingram@technologyreview.com, 415-509-1910
Executive director, brand partnerships: Stephanie Clement, stephanie.clement@technologyreview.com, 214-339-6115
Senior director, brand partnerships: Debbie Hanley, debbie.hanley@technologyreview.com, 214-282-2727
Senior director, brand partnerships: Ian Keller, ian.keller@technologyreview.com, 203-858-3396
Senior director, brand partnerships: Miles Weiner, miles.weiner@technologyreview.com, 617-475-8078
Senior director, digital strategy, planning, and ad ops: Katie Payne, katie.payne@technologyreview.com
Digital operations coordinator: Brooke McGowan, brooke.mcgowan@technologyreview.com
Media kit: www.technologyreview.com/media

MIT Technology Review Insights and international
Vice president, Insights and international: Nicola Crepaldi
Global director of custom content: Laurel Ruma
Senior manager of licensing: Ted Hu
Senior editor, custom content: Michelle Brosnahan
Senior editor, custom content: Kwee Chuan Yeo
Editor, custom content: Teresa Elsey
Senior project manager: Martha Leibs
Project manager: Natasha Conteh
Director of partnerships, Europe: Emily Kutchinsky
Director of partnerships, Asia: Marcus Ulvne

Board of directors
Cynthia Barnhart, Cochair
Alan Spoon, Cochair
Lara Boro
Peter J. Caruso II, Esq.
Whitney Espich
Sanjay E. Sarma
David Schmittlein
Glen Shor

Customer service and subscription inquiries
National: 877-479-6505
International: 847-559-7313
Email: customer-service@technologyreview.com
Web: www.technologyreview.com/customerservice

Reprints: techreview@wrightsmedia.com, 877-652-5295
Licensing and permissions: licensing@technologyreview.com

MIT Technology Review
196 Broadway, 3rd Floor
Cambridge, MA 02139
617-475-8000

Our in-depth reporting reveals what’s going on now to prepare you for what’s coming next.

Technology Review, Inc., is an independent nonprofit 501(c)(3) corporation wholly owned by MIT; the views expressed in our publications and at our events are not always shared by the Institute.
Meet this year’s 35 Innovators Under 35

Become an MIT Technology Review digital subscriber and be among the first to meet the young innovators, leaders, and entrepreneurs shaping the future of technology. With your digital subscription, you’ll get full access to the list of honorees and their stories as soon as they’re published online on September 12th.

Scan this code to become a digital subscriber or visit TechnologyReview.com/SubTR35

AI, climate change, and biotech are reshaping our economy and our lives

Join us online or on the MIT campus | Coming November 2023
MIT Technology Review’s flagship event brings together global change makers, innovators, and the leading voices in industry to understand the impact and set a new course forward.
EmTechMIT.com
The Download
This is how AI will
transform the way
science gets done
Science is about to become much more
exciting—and that will affect us all, argues
Google’s former CEO.
By Eric Schmidt
With the advent of AI, science is about to become much more exciting—and in some ways unrecognizable. The reverberations of this shift will be felt far outside the lab and will affect us all. If we play our cards right with sensible regulation and proper support for innovative uses of AI to address science’s most pressing issues, it can rewrite the scientific process. We can build a future where AI-powered tools will both save us from mindless and time-consuming labor and encourage creative breakthroughs that would otherwise take decades.

AI in recent months has become almost synonymous with large language models, or LLMs, but in science there are a multitude of different model architectures that may have even bigger impacts. In the past decade, most progress in science has come through smaller, “classical” models focused on specific questions. These models have already brought about profound advances. More recently, larger deep-learning models that are beginning to incorporate cross-domain knowledge and generative AI have expanded what is possible.

Scientists at McMaster and MIT, for example, used AI to identify an antibiotic that fights what the World Health Organization calls one of the world’s most dangerous drug-resistant bacteria for hospital patients. The FDA has already cleared 523 devices that use AI, and a Google DeepMind model can control plasma in nuclear fusion reactions, bringing us closer to a clean-energy revolution.

Reimagining science
At its core, the scientific process will remain the same: conduct background research, identify a hypothesis, test it through experimentation, analyze the collected data, and reach a conclusion. But AI has the potential to revolutionize how each of these components looks in the future.
Starting with the research step, tools like PaperQA and Elicit harness LLMs to scan databases of articles and produce succinct and accurate summaries of the existing literature—citations included.

Next, AI can spread the search net for hypotheses wider and narrow the net more quickly. As a result, AI tools can help formulate stronger hypotheses, such as models that spit out more promising candidates for new drugs. We’re already seeing simulations running multiple orders of magnitude faster than just a few years ago, allowing scientists to try more design options in simulation before carrying out real-world experiments.

Moving on to the experimentation step, AI will be able to conduct experiments faster, cheaper, and at greater scale. Instead of limiting themselves to just six experiments, scientists can use AI tools to run a thousand. Scientists who are worried about their next grant, publication, or tenure process will no longer be bound to safe experiments with the highest odds of success, instead free to pursue bolder and more interdisciplinary hypotheses. Eventually, much of science will be conducted at “self-driving labs”—automated robotic platforms combined with artificial intelligence, which are already emerging at organizations like Emerald Cloud Lab, Artificial, and even Argonne National Laboratory. Finally, at the stage of analysis and conclusion, self-driving labs will move beyond automation and use LLMs to interpret experimental results and recommend the next experiment to run. The AI lab assistant could then order supplies and run that next recommended experiment overnight—all while the experimenter is home sleeping.

Young researchers might be shifting nervously in their seats at this prospect. Luckily, the new jobs that emerge from this revolution are likely to be more creative and less mindless than most current lab work. With LLMs able to assist in building code, STEM students will no longer have to master obscure coding languages, opening the doors of the ivory tower to new, nontraditional talent and allowing scientists to engage with fields beyond their own. Soon, specifically trained LLMs might be developed to offer “peer” reviews of new papers alongside human reviewers.

We must nevertheless recognize where the human touch is still important and avoid running before we can walk. For example, a lot of the tacit knowledge that scientists learn in labs is difficult to pass on to AI-powered self-driving labs. Similarly, we should be cognizant of the limitations of current LLMs—such as limited memory and even hallucinations—before we offload much of our paperwork, research, and analysis to them.

The importance of regulation
AI is such a powerful tool because it allows humans to accomplish more with less: less time, less education, less equipment. But these capabilities make it a dangerous weapon in the wrong hands. University of Rochester professor Andrew White was contracted by OpenAI to participate in a “red team” that could expose the risks posed by GPT-4 before it was released. Using the language model and giving it access to tools, White found it could propose dangerous compounds and even order them from a chemical supplier. To test the process, he had a (safe) compound shipped to his house the next week. OpenAI says it used his findings to tweak GPT-4 before it was released.

OpenAI has managed to implement an impressive array of safeguards, but the day will likely soon come when someone manages to copy the model and house it on their own servers. Such frontier models need to be protected to prevent thieves from removing the AI safety guardrails so carefully added by their original developers.

To address bad uses of AI, we need smart, well-informed regulation—on both tech giants and open-source models—that doesn’t keep us from using AI in ways that can be beneficial to science. Beyond regulation, governments and philanthropy can support scientific projects with a high social return but little financial return or academic incentive, such as those in climate change, biosecurity, and pandemic preparedness.

Insofar as safety concerns allow, government can also help develop large, high-quality data sets such as those that enabled AlphaFold, the model developed by Google’s DeepMind that predicts a protein’s shape from a sequence of amino acids. Open data sets are public goods: they benefit many researchers, but researchers have little incentive to create them themselves. Chemistry, for example, has one language that unites the field, which would seem to lend itself to easy analysis by AI models. But no one has properly aggregated data on molecular properties stored across dozens of databases, which keeps us from accessing insights into the field that would be within reach of AI models if we had a single source. Biology, meanwhile, lacks the known and calculable data that underlies physics or chemistry, with subfields like intrinsically disordered proteins still a mystery to us. It will therefore require a more concerted effort to understand—and even record—the data for an aggregated database.

The road ahead to broad AI adoption in the sciences is long, with a lot that we must get right, from building the right databases to implementing the right regulations; from mitigating biases in AI algorithms to ensuring equal access to computing resources across borders.

Nevertheless, this is a profoundly optimistic moment. Previous paradigm shifts in science, like the emergence of the scientific process or big data, have been inwardly focused—making science more precise, accurate, and methodical. AI, meanwhile, is expansive, allowing us to combine information in novel ways and to bring creativity and progress in the sciences to new heights.

Eric Schmidt was the CEO of Google from 2001 to 2011. He is currently cofounder of Schmidt Futures, a philanthropic initiative that brings talented people together in networks to prove out their ideas and solve hard problems in science and society.

Weather forecasting is having an AI moment
As extreme weather conditions become more common, accurate forecasts become even more important.
By Melissa Heikkilä

The first week of July was the hottest week on record—yet another sign that climate change is “out of control,” the UN secretary general said. Punishing heat waves and extreme weather events like hurricanes and floods are going to become more common as the climate crisis worsens, making it more important than ever before to produce accurate weather forecasts.

AI is proving increasingly helpful with that. In the past year, weather forecasting has been having an AI moment.

Three recent papers from Nvidia, Google DeepMind, and Huawei have introduced machine-learning methods that are able to predict weather at least as accurately as conventional methods, and much more quickly. Recently I wrote about Pangu-Weather, an AI model developed by Huawei. Pangu-Weather is able to forecast not only weather but also the path of tropical cyclones.

Huawei’s Pangu-Weather, Nvidia’s FourCastNet, and Google DeepMind’s GraphCast are making meteorologists “reconsider how we use machine learning and weather forecasts,” Peter Dueben, head of Earth system modeling at the European Centre for Medium-Range Weather Forecasts (ECMWF), told me for the story.

ECMWF’s forecasting model is considered the gold standard for medium-term weather forecasting (up to 15 days ahead). Pangu-Weather managed to get accuracy comparable to that of the ECMWF model, while Google DeepMind claims in a non-peer-reviewed paper to have beaten it 90% of the time in the combinations that were tested.

Using AI to predict weather has a big advantage: it’s fast. Traditional forecasting models are big, complex computer algorithms based on atmospheric physics, and they take hours to run. AI models can create forecasts in just seconds.

But they are unlikely to replace conventional weather prediction models anytime soon. AI-powered forecasting models are trained on historical weather data that goes back decades, which means they are great at predicting events that are similar to the weather of the past. That’s a problem in an era of increasingly unpredictable conditions.

We don’t know if AI models will be able to predict rare and extreme weather events, says Dueben. He thinks the way forward might be for AI tools to be adopted alongside traditional weather forecasting models to get the most accurate predictions.

Big Tech’s arrival on the weather forecasting scene is not purely based on scientific curiosity, reckons Oliver Fuhrer, the head of the numerical prediction department at MeteoSwiss, the Swiss Federal Office of Meteorology and Climatology. Our economies are becoming increasingly dependent on weather, especially with the rise of renewable energy, says Fuhrer. Tech companies’ businesses are also linked to weather, he adds, pointing to everything from logistics to the number of search queries for ice cream.

The field of weather forecasting could gain a lot from the addition of AI. Countries track and record weather data, which means there is plenty of publicly available data out there to use in training AI models. When combined with human expertise, AI could help speed up a painstaking process.

What’s next isn’t clear, but the prospects are exciting. “Part of it is also just exploring the space and figuring out what potential services or business models might be,” Fuhrer says.

Melissa Heikkilä is a senior reporter at MIT Technology Review, covering artificial intelligence and how it is changing our society.
What’s the frequency?
Visualizing the beautiful complexity of the United States Frequency Allocation Chart
By Jon Keegan
Somewhere above you right now, a plane
is broadcasting its coordinates on 1090
megahertz. A satellite high above Earth
is transmitting weather maps on 1694.1
MHz. On top of all that, every single phone
and Wi-Fi router near you blasts internet
traffic through the air over radio waves. A
carefully regulated radio spectrum is what
makes it possible for these signals to get
to the right place intact.
The Federal Communications Commission and the National Telecommunications
and Information Administration share
the task of managing radio frequencies
for US airwaves. The NTIA manages all
federal radio uses (including military
use), while the FCC manages everything
else. It is an incredibly complex system,
and to help with the job of explaining
the importance of managing this invisible natural resource, the NTIA publishes
the United States Frequency Allocation
Chart (which you can order as a wall
chart for $6).
The US government lays claim to a large
chunk of spectrum for military use, communications, and transportation. FM radio
operates between 88 and 108 MHz, and
AM radio operates between 540 and 1700
kilohertz. Using licenses, amateur radio
operators are granted slices where they
can communicate safely, as are businesses
and other institutions. Civil aviation, maritime navigation, satellite communications,
radio astronomy, cellular voice, and data
all lay claim to colorful plots on this chart.
The chart uses 33 color-coded
categories to visualize the information
in a crazy quilt of blocks (some wide,
some narrow), spread from 9 kHz (very
low frequency) all the way to 300 GHz
(extremely high frequency). It does suffer
from scale distortions, not unlike a map
of Earth.
Eric Rosenberg, a telecommunications
specialist at NTIA, says a lot of the choices
about what service goes where come down
to physics and the environment where
the service will be used: “You can’t just
pick up a block and say, okay, we’re gonna
move these radars over here.”
The chart is always extremely popular,
Rosenberg says; fans include lawmakers
in Congress. Last updated in 2016, it is
due for another revision. “We’re getting
to the point where we really feel that we
need to redo it,” he says. “Again, it’s a very
large project.”
A version of this story appeared on Beautiful
Public Data (beautifulpublicdata.com),
a newsletter by Jon Keegan (KE3GAN).
Above: A detail of the United States Frequency Allocation Chart. Right: The complete frequency spectrum. Source: NTIA
Extracting climate records
from Antarctic ice cores
Scientists now have the technology to unlock 20,000 years
of ancient climate history compressed in a meter of ice.
By Christian Elliott
Moving quickly and carefully in two layers of gloves, Florian Krauss sets a cube of
ice into a gold-plated cylinder that glows
red in the light of the aiming laser. He steps
back to admire the machine, covered with
wires and gauges, that turns polar ice into
climate data.
If this were a real slice of precious
million-year-old ice from Antarctica and
not just a test cube, he’d next seal the
extraction vessel under a vacuum and
power on the 150-megawatt main laser,
slowly causing the entire ice sample to
sublimate directly into gas. For Krauss, a
PhD student at the University of Bern in
Switzerland, this would unlock its secrets,
exposing the concentrations of greenhouse
gases like carbon dioxide trapped within.
To better understand the role atmospheric carbon dioxide plays in Earth’s
climate cycles, scientists have long turned
to ice cores drilled in Antarctica, where
snow layers accumulate and compact over
hundreds of thousands of years, trapping
samples of ancient air in a lattice of bubbles that serve as tiny time capsules. By
analyzing those bubbles and the ice’s other
contents, like dust and water isotopes,
scientists can connect greenhouse-gas
concentrations with temperatures going
back 800,000 years.
Europe’s Beyond EPICA (European
Project for Ice Coring in Antarctica) initiative, now in its third year, hopes to eventually retrieve the oldest core yet, dating
back 1.5 million years. This would extend
the climate record all the way back to the
Mid-Pleistocene Transition, a mysterious
period that marked a major change in
the frequency of Earth’s climatic oscillations—cycles of repeating glacial and
warm periods.
Successfully drilling a core that old—a
years-long endeavor—might be the easy
part. Next, scientists must painstakingly
free the trapped air from that ice. Krauss
and his colleagues are developing an innovative new way to do that.
“We’re not interested in the ice itself—
we’re just interested in the air samples
included, so we needed to find a new way
to extract the air from the ice,” he says.
Melting isn’t an option because carbon dioxide easily dissolves into water.
Traditionally, scientists have used
mechanical extraction methods, grinding up samples of individual layers of
ice to free the air. But grinding wouldn’t
be effective for the Beyond EPICA ice in
the university’s storage freezer, which is
kept at 50 °C below zero. The oldest ice
at the very bottom of the core will be so
compressed, and the individual annual
layers so thin, that bubbles won’t be visible—they’ll have been pressed into the
lattice of ice crystals, forming a new phase
called clathrate.
“At the very bottom, we expect 20,000
years of climate history compressed in only
one meter of ice,” says Hubertus Fischer,
head of the past climate and ice core science group at Bern. That’s a hundredth the
thickness of any existing ice core record.
The new method Krauss and Fischer are
developing is called deepSLice. (A pizza
menu is taped to the side of the device
right under the laser warning labels, a
gift from a pizzeria in Australia with the
same name.) DeepSLice has two parts.
The Laser-Induced Sublimation Extraction Device, or LISE, fills half a room in the team’s lab space. LISE aims a near-infrared laser continuously at a 10-centimeter slice of ice core so that it turns directly from solid to gas under extremely low pressure and temperature. The sublimated gas then freezes into six metal dip tubes cooled to 15 K (-258 °C), each containing the air from one centimeter of ice core. Finally the samples are loaded into a custom-made absorption spectrometer based on quantum cascade laser technology, which shoots photons through the gas sample to measure concentrations of carbon dioxide, methane, and nitrous oxide simultaneously.

An ice core sample (above); Fischer (right) and Krauss with their LISE apparatus.
Another big advantage of this system
is that it takes a lot less ice (and work)
than the old method of analysis, in which
scientists measured methane by melting
ice (it doesn’t dissolve into water) and
measured carbon dioxide by grinding ice.
DeepSLice offers “a unique capability that nobody else has,” says Christo
Buizert, an ice core scientist at the
Oregon State University and the ice analysis
lead for COLDEX (the Center for Oldest
Ice Exploration)—the US equivalent of
Beyond EPICA, which is currently in a
“friendly race” with the Europeans to drill
a continuous core down to 1.5-million-year-old ice.
“What they’re trying to do, sublimating
ice—people have been trying this for a long
time, but it’s one of the most challenging
ways to extract gases from ice,” Buizert
says. “It’s a very promising way, because
you get 100% of the gases out, but it’s very
difficult to do. So the fact that they’ve managed to get it working is very impressive.”
Krauss and Fischer still have about
three years before they get their hands on
that section of critical ice. There are still
kinks to iron out, like how to recapture the
samples from the spectrometer for additional analysis, but they think they’ll be
ready when it finally arrives in freezer containers on a ship from Antarctica via Italy.
“Our latest results showed us we are on
a good track, and actually, we achieved the
precision we wanted to,” Krauss says. “So
I’m sure it’s going to be ready.”
Christian Elliott is a science and environmental reporter based in Chicago.
Book reviews
Nuts & Bolts: Seven Small Inventions That
Changed the World (in a Big Way)
By Roma Agrawal (W.W. Norton, 2023)
Months spent taking apart ballpoint pens and blenders led Agrawal, an engineer, to explore how seven
fundamental inventions led to our most complex feats
of engineering. Despite its complexity, she writes,
engineering at its most fundamental “is inextricably
linked to your everyday life and to humanity.”
The Philosopher of Palo Alto: Mark Weiser,
Xerox PARC, and the Original Internet of Things
By John Tinnell (University of Chicago Press, 2023)
“The ‘Smart House’ of 2005 will have computers in
every room,” wrote Mark Weiser in 1996, “but what
will they do?” The first chief technology officer of
Xerox PARC and the so-called father of ubiquitous
computing, Weiser (who died at 46 in 1999) was
wildly innovative—and prescient. But his vision for
the Internet of Things didn’t work out as he’d hoped:
the technology meant to connect and lift up humanity
instead began to surveil and sell to us.
Mobility
By Lydia Kiesling (Crooked Media, 2023)
An American teenager living in Azerbaijan with her
Foreign Service family in the ’90s, Bunny finds herself
adrift when she returns to America. She gets a temp
job at a Texas oil company—and never leaves. In this
novel, Kiesling charts the arc of Bunny’s career (which
Bunny always insists is not in oil but at “an energy
company”) over two decades, slyly inserting a narrative of our collective apathy toward climate change.
The Apple II Age: How the Computer
Became Personal
By Laine Nooney (University of Chicago Press, 2023)
If you want to understand how Apple became an
industry behemoth, says Nooney, look no further
than the 1977 Apple II. Nooney is keen to critique the
lone-genius narrative that characterizes so much of
technological advancement, arguing that above all,
the story of personal computing in the United States
is about the rise of everyday users.
How saving Venice’s salt marshes could keep the city from sinking
The Venice lagoon is an ideal test case for new approaches to combating climate change.
By Catherine Bennett
Venice, Italy, is suffering from a combination of subsidence—the city’s foundations slowly sinking into the mud on
which they are built—and rising sea levels. In the worst-case scenario, it could
disappear underwater by the year 2100.
Alessandro Gasparotto, an environmental engineer, is one of the many people trying to keep that from happening.
Standing on a large mudflat in the center of the Venetian lagoon, he pushes a
hollow three-foot-high metal cylinder
called a piezometer into the thick black
mud. This instrument will measure how
groundwater moves through the sediment as the lagoon’s tides rise and fall.
Knowing what’s happening under the mud
is crucial for understanding whether, and
how, vegetation can grow and eventually
transform this barren landscape of mud
into a salt marsh.
Gasparotto’s work with salt marshes is
part of a project steered by the NGO We
Are Here Venice (WAHV) and funded by
the EU through the WaterLANDS research
program, which is restoring wetlands
across Europe. The Venice chapter has
been granted €2 million over five years to
investigate whether artificial mudflats—
the deposits that result when the lagoon
is dredged to create shipping channels—
can be turned back into the marshes that
once thrived in this area and become a
functioning part of the lagoon ecosystem
again.
“The history of the city of Venice has
always been intertwined with the history
of the lagoon,” explains Andrea D’Alpaos,
a geoscientist at the University of Padova.
The health of Venice depends on the health
of the lagoon system, and vice versa.
This relationship is not only economic—protecting the lagoon ecosystem
bolsters fishing yields, for example—but
also infrastructural. Salt marshes have a
buffering effect on tidal currents, attenuating the force of waves and reducing the
water’s erosive effect on Venice’s buildings.
But the marshes have been declining for
centuries. This is due in part to waterway
mismanagement going as far back as the
1500s, when Venetians diverted rivers out
of the lagoon, starving it of sediment that
would naturally be borne in on their currents. The building of breakwaters at three
inlets on the Adriatic Sea and the excavation of an enormous shipping canal in the
late 1900s further eroded the marshland.
And while the city has been the beneficiary of billions of euros in restoration
and prevention work—most notably the
€6.2 billion MOSE (the Italian acronym
for “Experimental Electromechanical
Module”), a colossal (and extremely
effective) system of mobile sea barriers
designed to keep the Adriatic’s floodwaters from the city—the marshes have
been overlooked.
Construction of MOSE began in 2003,
but delays, cost overruns and a corruption
scandal stalled its completion. It was activated for the first time, successfully preventing a flood, in 2020. Paradoxically,
it is the MOSE technology, which protects the city, that is damaging the lagoon
ecosystem.
“When the MOSE system is raised, it
stops storm surges and prevents Venice
flooding,” D’Alpaos says. “Storm surges
are bad for Venice, but they are good for
marshes; 70% of sediment that reaches the
marsh is delivered during storm surges.”
These excessively high tides, D’Alpaos
continues, are happening more often.
The problem, he says, is that “if you close
the lagoon too often or for too long, you
prevent sediment reaching marshes.” In
the more than 20 years that he has been
studying the lagoon, he says, he’s seen
marshes disappearing at an alarming
rate: “The marshes are drowning. Two
centuries ago, the Venice lagoon had
180 square kilometers [69 square miles]
of marshes. Now we only have 43 square
kilometers.”
One of the sites the We Are Here Venice team is working on is a natural salt marsh, hugged on one side by a kidney-shaped platform of infill dredged from the
lagoon. In places where the mud is dry,
the ground has separated into patches that
conjure small tectonic plates, littered with
bone-white crab claws picked clean and
dropped by gulls flying overhead. Three
orange sticks mark the spot where a fence
between the salt marsh and the infill will
be removed to allow water exchange and
the movement of sediment, making the
two ecosystems “speak to one another,”
as Jane da Mosto, the executive director
and cofounder of WAHV, describes it.
Jane da Mosto and Alessandro Gasparotto survey Venice’s central lagoon from a restored salt marsh.

Tramping over the island in rubber boots, releasing gobbets of black mud at every step, da Mosto explains that “all of this represents a kind of natural capital.” Not only do the marshes store carbon, but “these environments also support fish habitats and a huge bird population,” she adds. Even the samphire, an edible marshland plant, “could be cultivated like a crop.” Marshes are also more efficient carbon sinks than forests, because marshland plants that store carbon are gradually buried under sediment as the tide washes over them, trapping the carbon for as long as centuries.

Da Mosto sees the city as something of a laboratory for environmental solutions with wider applications. “Venice is a mirror on the world,” she says. “If the city remains an example of all the world’s problems, as it is now, then there’s no point trying to keep it alive. But we should be able to show how to turn infills into ecologically productive salt marshes and how to transform an economy based on mass tourism into an economy based on its natural capital.”

Catherine Bennett is a freelance journalist based in Paris.

Jobs of the future: Chief heat officer
It’s becoming an essential new role as more cities are beset by extreme heat.
By Allison Arieff

In Miami, extreme heat is a deadly concern. Rising temperatures now kill more people than hurricanes or floods, and do more harm to the region’s economy than rising sea levels. That’s why, in 2021, Florida’s Miami-Dade County hired a chief heat officer, Jane Gilbert—the first position of its kind in the world.

Heat has been a silent killer in Miami, says Gilbert: “The number-one cause of weather-related death is from excess heat. It’s been this underrecognized issue that needs to be elevated.” According to the Centers for Disease Control and Prevention, there are an average of 67,512 emergency department visits in the US due to heat each year, and 702 heat-related deaths.

A holistic approach: Gilbert works in the county’s Office of Resilience, which has people designated to work on sea-level rise, carbon mitigation, and waste reduction. “Together,” she says, “we make sure we come at it from an integrated perspective.” She acknowledges that some may be skeptical of her role because “if you work and live in air-conditioning and can afford it, you can manage heat, [and] you don’t need me.”

Inform, prepare, protect: Gilbert’s focus is on those least able to protect themselves and their families against high heat—poorer communities and Black and Hispanic people tend to bear the brunt. Her collaborative efforts to keep homes, facilities, and neighborhoods affordably cool include everything from creating programs that protect outdoor workers to planting trees that help mitigate heat-island effects.

Career path: Gilbert majored in environmental science at Barnard College in New York City and went on to get a master’s in public administration at Harvard’s Kennedy School of Government, focusing on urban community development. The job of chief heat officer didn’t exist back then, she says, but if it had, “I would have been really interested.” Some of the issues may have shifted, she explains, “but when I studied climate change in the mid-’80s, it was accepted science.”
Explained

Everything you need to know about the wild world of alternative jet fuels
How french fries, trash, and sunlight could power your future flights.
By Casey Crownhart
Illustration by Marcin Wolski
Aviation accounts for about 2% of global carbon dioxide emissions,
and considering the effects of other polluting gases, the industry is
responsible for about 3% of all human-caused global warming. One
way the aviation industry hopes to cut down on its climate impacts is
by using new fuels. These alternatives, often called sustainable aviation fuels (SAFs), could be the key to helping this sector reach net-zero
carbon dioxide emissions by 2050.
The actual climate impact of alternative fuels will depend on a lot
of factors, however. Here’s everything you need to know about the
future of jet fuel and the climate.
What are SAFs?
Planes today mostly burn kerosene—a fossil
fuel with a mix of carbon-containing molecules. Alternative fuels have the same basic
chemical makeup as traditional jet fuel, but
they are derived from renewable sources.
Alternative fuels fall into two main categories: biofuels and synthetic electrofuels.
Biofuels come from a range of biological sources; some are derived from waste
like used cooking oil, agricultural residues,
or household trash, while others are made
from crops like corn and palm trees.
Making fuel from biological sources
requires chopping up the complicated
chemical structures that plants make to
store energy. Fats and carbohydrates can
be broken apart and purified to make the
simple chains of carbon-rich molecules that
are jet fuel’s primary ingredient.
Electrofuels (also called e-fuels), on the
other hand, don’t start with plants. Instead,
they start with two main building blocks:
hydrogen and carbon dioxide, which are
combined and transformed in chemical
reactions powered by electricity.
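For illustration, here is one widely used power-to-liquid route, sketched under the assumption of a Fischer-Tropsch-based process (the article does not name a specific chemistry): renewable electricity first splits water into hydrogen, the hydrogen then reduces captured carbon dioxide to carbon monoxide, and a final synthesis step assembles the gas into the long hydrocarbon chains that make up jet fuel:

$$2\,\mathrm{H_2O} \to 2\,\mathrm{H_2} + \mathrm{O_2} \quad \text{(electrolysis, powered by renewable electricity)}$$
$$\mathrm{CO_2} + \mathrm{H_2} \to \mathrm{CO} + \mathrm{H_2O} \quad \text{(reverse water–gas shift)}$$
$$n\,\mathrm{CO} + (2n+1)\,\mathrm{H_2} \to \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O} \quad \text{(Fischer–Tropsch synthesis; roughly } n \approx 8\text{–}16 \text{ for jet-range hydrocarbons)}$$

Each step loses some energy, which is part of why e-fuels remain expensive today.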
Making e-fuels is expensive today,
because the process is inefficient and
isn’t done widely at commercial scale. But
experts say that to reach its 2050 target,
aviation will need to rely on them, because
they’re the most effective way to cut carbon dioxide emissions, and they won’t be
limited by supply or collection logistics like
fuels made from plants or waste.
So how do SAFs help climate
progress?
Like conventional jet fuel, alternative fuels
produce carbon dioxide when they’re
burned for energy in planes.
Unlike regular airplanes, those that run
on SAFs can, depending on how the fuels
are made, offset their carbon dioxide emissions. In an ideal world, a fuel’s production
process would remove enough carbon from
the atmosphere to cancel out the carbon
dioxide emissions when the fuel is burned.
However, that’s often far from the reality.
Alternative fuels fall on a spectrum in
terms of how much they reduce carbon
dioxide emissions. On one end, synthetic
fuels that are made with carbon collected
via direct air capture and whose production
is powered entirely by renewable electricity will reduce emissions by nearly 100%
compared with fossil fuels.
On the other end of the spectrum, some
crop-based biofuels can produce more carbon dioxide emissions overall than fossil
fuels. That’s frequently the case for biofuels made from palm oil, since growing that
crop can decimate rainforests.
Today, most commercially available
alternative jet fuels are made from fats,
oils, and greases. If they’re derived from
waste sources like these, such fuels reduce
carbon dioxide emissions by roughly 70%
to 80% compared with fossil fuels.
It’s worth noting that while SAFs can
approach net-zero carbon dioxide emissions, burning the fuels still produces other
pollution and contributes to contrails,
which can trap heat in the atmosphere.
What’s next for SAFs?
Alternative fuels are attractive to the aviation industry because they’re a drop-in
solution, requiring little adjustment of
aircraft and airport infrastructure. Over
the past year, several test flights powered
by 100% SAFs have taken off.
However, alternative fuels made up
less than 0.2% of the global jet fuel supply
in 2022. One of the main challenges to
getting SAFs into the skies is expanding
the supply. The world doesn’t eat enough
french fries for used cooking oils to meet
global demand for jet fuel.
Recent policy moves in both the United
States and the European Union are aimed
at boosting the market for alternative
fuels. RefuelEU Aviation, a deal finalized
in April, requires that fuel supply at EU
airports include 2% SAFs by 2025 and
70% by 2050. The US recently passed new
tax credits for alternative fuels, aimed at
helping expensive options reach price
parity with fossil fuels.
Ultimately, alternative fuels present
one potential pathway to cutting the climate impacts of aviation. But the details
will matter profoundly: some fuels could
be part of the solution, while others might
end up being part of the problem.
Casey Crownhart is a climate
reporter at MIT Technology Review.
Profile

Valley of the misfit tech workers
Xiaowei Wang and Collective Action School seek to remedy the moral blindness of Big Tech.
By Patrick Sisson
Portrait by Christie Hemm Klok

For Silicon Valley venture
capitalists and founders,
any inconvenience big or
small is a problem to be
solved—even death itself.
And a new genre of products and services known as
“death tech,” intended to help the bereaved
and comfort the suffering, shows that the
tech industry will try to address literally
anything with an app.
Xiaowei Wang, a technologist, author,
and organizer based in Oakland, California,
finds that disturbing.
“It’s so gross to view people like
that—to see situations and natural facts
of life like dying as problems,” Wang said
during lunch and beers on the back patio
of an Oakland brewery in late March.
To research a forthcoming book on the
use of tech in end-of-life care, Wang has
trained as a “death doula” and will soon
start working at a hospice.
This approach to exploring technology, grounded in its personal and political
implications, exemplifies a wider vision
for fellow tech workers and the industry
at large—a desire that it grant more power
and agency to those with diverse backgrounds, become more equitable instead
of extractive, and aim to reduce structural
inequalities rather than seeking to enrich
shareholders.
To realize this vision, Wang has launched
a collaborative learning project called
Collective Action School in which tech
workers can begin to confront their own
impact on the world. The hope is to promote
more labor organizing within the industry
and empower workers who may feel intimidated to challenge gigantic corporations.
Wang came to prominence as an editor at Logic magazine, an independent
publication created in 2016 amid early
Trump-era anxiety and concerns about the
growing powers of technology. Dismissing
utopian narratives of progress for prescient analysis of tech’s true role in widening inequity and concentrating political
power, the founders—who also included
Ben Tarnoff, Jim Fingal, Christa Hartsock,
and Moira Weigel—vowed to stop having
“stupid conversations about important
things.” (In January, it was relaunched as
“the first Black, Asian, and Queer tech
magazine,” with Wang and J. Khadijah
Abdurahman as co-editors.)
Collective Action School, initially
known as Logic School, is an outgrowth
of the magazine. It’s emerged at a time
when scandals and layoffs in the tech
industry, combined with crypto’s troubles
and new concerns about bias in AI, have
made Big Tech’s failings all the more visible. In courses offered via Zoom, Wang
and other instructors guide roughly two
dozen tech workers, coders, and project
managers through texts on labor organizing, intersectional feminist theory, and
the political and economic implications
of Big Tech. Its second cohort has now
completed the program.
At our lunch, Wang was joined by three
former students who helped run that last
session: Derrick Carr, a senior software
engineer; Emily Chao, a former trust and
safety engineer at Twitter; and Yindi Pei,
a UX designer. All shared a desire to create something that could lead to more
concrete change than existing corporate
employee resource groups, which they say
often seem constrained and limited. And
while Big Tech may obsess over charismatic founders, Collective Action School
runs in a collective fashion. “I enjoy operating under the radar,” Wang said.
Wang, who uses the pronoun “they,”
moved from China to Somerville,
Massachusetts, in 1990, at age
four. Drawn to science and technology at
a young age, they made friends in early
online chat rooms and built rockets and
studied oceanography at science camps.
They also started questioning social norms
early on; their mom tells of getting a call
from the middle school principal, explaining that Wang had started a petition for a
gender-inclusive class dress code.
Years later, they enrolled at Harvard
to study design and landscape architecture—at one point lofting a kite over the
skies in Beijing to track pollution levels. A
few years after graduating in 2008, Wang
moved to the Bay Area. They worked at
the nonprofit Meedan Labs, which develops open-source tools for journalists, and
the mapping software company Mapbox,
a rapidly scaling “rocket ship” where an
employee—sometimes Wang—had to be
on call, often overnight, to patch any broken code. Unsatisfied, Wang left in 2017 to
focus on writing, speaking, and research,
earning a PhD in geography at Berkeley.
“The person who did my [Mapbox] exit
interview told me, ‘You have this problem
where you see injustice and you can’t stand
it,’” Wang says. “She told me, ‘Sometimes
you need to put that to bed if you want to
stay in this industry.’ I can’t.”
Many in tech, Wang says, have a fundamental belief in constant improvement
through corporate innovation; for these
people, technology means “you push a button and something in your life is solved.”
But Wang, who practices Buddhism and
reads tarot cards, sees things differently,
believing that life is all about natural cycles
humans can’t control and should accept
with humility. For Wang, tech can be rural
communities hacking open-source software,
or simply something that brings pure joy.
At Logic, Wang penned a popular column, Letter from Shenzhen, which included
scenes from their family’s hometown of
Guangzhou, China, and the explosion of
innovation in the country. It led to a book
titled Blockchain Chicken Farm: And Other
Stories of Tech in China’s Countryside, a
striking exploration of technology’s impact
on rural China.
During the book editing process, Wang
went on a Buddhist retreat, where a teacher
remarked that we’re all “looking at the sky
through a straw,” limited to our own small
portholes of perception. This insight, says
Wang, helped frame the final draft. But
it also became a metaphor for an entire
approach to research and writing on technology: focused, careful consideration
of many viewpoints, and the capacity to
imagine something better.
Collective Action School, funded in
part by the Omidyar Network and
a grant from the arts and coding
nonprofit Processing Foundation, came
together in 2020 as tech worker activism was on the rise. Kickstarter employees’ union drive in 2020 was followed by
efforts at Alphabet, Amazon, and Apple,
as well as industry-wide campaigns such
as Collective Action in Tech (led in part by
former Logic editor Tarnoff) and the Tech
Workers Coalition. But because Wang
avoids the spotlight and believes that only
strong communities can remedy the tech
industry’s ills, the school is organized in
a more experimental way.
Each cohort begins with a “week zero”
meeting to get acquainted as a group. Then,
for 13 weeks, participants attend sessions
covering labor movements, the political
economy of innovation, and the impact
of technology on marginalized groups.
The funding covers all tuition costs for all
students. As Pei, one of the co-organizers,
puts it, the school offers an antithesis to
the “golden ticket” mentality of tech work,
with an approach that’s more focused on
collective action and culture.
Each week, participants read from a
lengthy syllabus and welcome a guest
speaker. Past guests include Clarissa
Redwine from the Kickstarter union’s
oral history project, former Google employees Alex Hanna and Timnit Gebru of the
Distributed AI Research Institute, and Erin
McElroy, cofounder of the Anti-Eviction
Mapping Project. Then they work on a final
project; one of the first was Looking Glass,
which used augmented reality to highlight
the lost Black history of Pittsburgh. For
developing it, creator Adrian Jones was
named the school’s “community technologist,” a role that comes with a one-year
grant to expand the idea. Chao, who formerly worked for Twitter, released a zine
about trust and safety issues, and Pei has
been working on an affordable housing
website for San Francisco.
The organizers see Collective Action
School as a community-building project
and an open-source syllabus that can grow
with each new cohort. Eventually, the
aim is to expand the reach of the school
with chapters based in other areas, adding
in-person meetings and creating a larger
network of workers sharing similar values and aims.
That strategy fills a need within larger
tech and labor organizing, says Gershom
Bazerman, who volunteers with the
Tech Workers Coalition and Emergency
Workplace Organizing Committee. Tech
workers have long been told they’re unique,
but recent political fights between workers
and leadership—with employees pushing
back against contributing to projects used
by the US military or immigration enforcement—have set off a wave of ground-up
organizing informed by social concerns.
Groups like Collective Action School can
be a “bridge” connecting workers seeking
such change.
While the readings and interactions
aren’t creating a utopia, they are creating
a space for students to learn, meet, and
commit to more change. Wang hopes they
find solidarity and, ideally, bring these
ideas and experience back to their companies and coworkers (or find the resources
and momentum to move to a job or field
more aligned with their values). Some
in this year’s cohort live and work in the
Global South and have faced layoffs, so
classmates created a cost-of-living support fund to help.
Carr has called the experience an “antidote to a specific accumulated toxin” that
comes from working in Big Tech. That
may be true, but Collective Action School,
along with other recent organizing efforts,
also sets out to redefine the experience
of working within the industry. “We’re
not saying we’re making the perfect safe
learning space,” says Wang. “We had a
container in which we could have fun,
learn from each other, and then grow. I
think that’s really rare and special. It’s like
committing to each other.”
Patrick Sisson, a Chicago expat
living in Los Angeles, covers
technology and urbanism.
COVER STORY

THE RIGHT TO TRY

Desperate people will often want to try experimental, unproven treatments. How can we ensure they’re not exploited or put at risk?

By Jessica Hamzelou | Illustration by Selman Design
Max was only a toddler when his parents noticed there was “something different” about the way he moved. He was slower than other kids his age, and he struggled to jump. He couldn’t run. Blood tests suggested he might have a genetic disease—one that affected a key muscle protein. Max’s dad, Tao Wang, a researcher for a climate philanthropy organization, says he and his wife were initially in denial. It took them a few months to take Max for the genetic test that confirmed their fears: he had Duchenne muscular dystrophy.

Duchenne is a rare disease that tends to affect young boys. It’s progressive—those affected generally lose muscle function as they get older. There is no cure. Many people with the disorder require wheelchairs by the time they reach their 20s. Most do not survive beyond their 30s.

Max’s diagnosis hit Wang and his wife “like a tornado,” he says. But eventually one of his doctors mentioned a clinical trial that he was eligible for. The trial was for an experimental gene therapy designed to replace the missing muscle protein with a shortened, engineered version that might help slow his decline or even reverse it. Enrolling Max in the trial was a no-brainer for Wang. “We were willing to try anything that could change the course [of the disease] and give us some hope,” he says.

That was more than two years ago. Today, Max is an active eight-year-old, says Wang. He runs, jumps, climbs stairs without difficulty, and even enjoys hiking. “He’s a totally different kid,” says Wang.

The gene therapy he received was recently considered for accelerated approval by the US Food and Drug Administration. Such approvals, reserved for therapies targeting serious conditions that lack existing treatments, require less clinical trial data than standard approvals.

While the process can work well, it doesn’t always. And in this case, the data is not particularly compelling. The drug failed a randomized clinical trial—it was found to be no better than a placebo.

Still, many affected by Duchenne are clamoring for access to the treatment. At an FDA advisory committee meeting in May set up to evaluate its merits, multiple parents of children with Duchenne pleaded with the organization to approve the drug immediately—months before the results of another clinical trial were due. On June 22, the FDA granted conditional approval for the drug for four- and five-year-old boys.

This drug isn’t the only one to have been approved on weak evidence. There has been a trend toward lowering the bar for new medicines, and it is becoming easier for people to access treatments that might not help them—and could harm them. Anecdotes appear to be overpowering evidence in decisions on drug approval. As a result, we’re ending up with some drugs that don’t work.

We urgently need to question how these decisions are made. Who should have access to experimental therapies? And who should get to decide? Such questions are especially pressing considering how quickly biotechnology is advancing. Recent years have seen an explosion in what scientists call “ultra-novel” therapies, many of which involve gene editing. We’re not just improving on existing classes of treatments—we’re creating entirely new ones. Managing access to them will be tricky.

Just last year, a woman received a CRISPR treatment designed to lower her levels of cholesterol—a therapy that directly edited her genetic code. Also last year, a genetically modified pig’s heart was transplanted into a man with severe heart disease. Debates have raged over whether he was the right candidate for the surgery, since he ultimately died.

For many, especially those with severe diseases, trying an experimental treatment may be better than nothing. That’s the case for some people with Duchenne, says Hawken Miller, a 26-year-old with the condition. “It’s a fatal disease,” he says. “Some people would rather do something than sit around and wait for it to take their lives.”

Expanding access

There’s a difficult balance to be reached between protecting people from the unknown effects of a new treatment and enabling access to something potentially life-saving. Trying an experimental drug could cure a person’s disease. It could also end up making no difference, or even doing harm. And if companies struggle to get funding following a bad outcome, it could delay progress in an entire research field—perhaps slowing future drug approvals.

In the US, most experimental treatments are accessed through the FDA. Starting in the 1960s and ’70s, drug manufacturers had to prove to the agency that their products actually worked, and that the benefits of taking them would outweigh any risks. “That really closed the door on patients’ being able to access drugs on a speculative basis,” says Christopher Robertson, a specialist in health law at Boston University.

It makes sense to set a high bar of evidence for new medicines. But the way you weigh risks and benefits can change when you receive a devastating diagnosis. And it wasn’t long before people with terminal illnesses started asking for access to unapproved, experimental drugs.

In 1979, a group of people with terminal cancer and their spouses brought a legal case against the government
to allow them to access an experimental treatment. While
a district court ruled that one of the plaintiffs should be
allowed to buy the drug, it concluded that whether a person’s disease was curable or not was beside the point—
everyone should still be protected from ineffective drugs.
The decision was eventually backed by the Supreme Court.
“Even for terminally ill patients, there’s still a concept of
safety and efficacy under the statute,” says Robertson.
Today, there are lots of ways people might access experimental drugs on an individual basis. Perhaps the most
obvious way is by taking part in a clinical trial. Early-stage
trials typically offer low doses to healthy volunteers to
make sure new drugs are safe before they are offered to
people with the condition the drugs are ultimately meant
to treat. Some trials are “open label,” where everyone
knows who is getting what. The gold standard is trials
that are randomized, placebo controlled, and blinded:
some volunteers get the drug, some get the placebo, and
no one—not even the doctors administering the drugs—
knows who is getting what until after the
results have been collected. These are the
kinds of studies you need to do to tell if a
drug is really going to help people.
But clinical trials aren’t an option for
everyone who might want to try an unproven
treatment. Trials tend to have strict criteria
about who is eligible depending on their age
and health status, for example. Geography
and timing matter, too—a person who wants
to try a certain drug might live too far from
where the trial is being conducted, or might
have missed the enrollment window.
Instead, such people can apply to the FDA under the
organization’s expanded access program, also known as
“compassionate use.” The FDA approves almost all such
requests. It then comes down to the drug manufacturer
to decide whether to sell the person the drug at cost (it
is not allowed to make a profit), offer it for free, or deny
the request altogether.
Another option is to make a request under the Right to
Try Act. The law, passed in 2018, establishes a new route
for people with life-threatening conditions to access experimental drugs—one that bypasses the FDA. Its introduction was viewed by many as a political stunt, given that
the FDA has rarely been the barrier to getting hold of such
medicines. Under Right to Try, companies still have the
choice of whether or not to provide the drug to a patient.
When a patient is denied access through one of these
pathways, it can make headlines. “It’s almost always the
same story,” says Alison Bateman-House, an ethicist who
researches access to investigational medical products at
New York University’s Grossman School of Medicine. In
this story, someone is fighting for access to a drug and
being denied it by “cold and heartless” pharma or the
FDA, she says. The story is always about “patients valiantly struggling for something that would undoubtedly
help them if they could just get to it.”
But in reality, things aren’t quite so simple. When
companies decide not to offer someone a drug, you can’t
really blame them for making that decision, says Bateman-House. After all, the people making such requests are usually incredibly ill. If someone were to die after taking that
drug, not only would it look bad, but it could also put off
investors from funding further development. “If you have
a case in the media where somebody gets compassionate
use and then something bad happens to them, investors
run away,” says Bateman-House. “It’s a business risk.”
FDA approval of a drug means it can be sold and prescribed—crucially, it’s no longer experimental. Which is
why many see approval as the best way to get hold of a
promising new treatment.
As part of a standard approval process, which should
take 10 months or less, the FDA will ask to see clinical trial
evidence that the drug is both safe and effective. Collecting
this kind of evidence can be a long and expensive process.
But there are shortcuts for desperate situations, such as
the outbreak of covid-19 or rare and fatal diseases—and for
serious diseases with few treatment options, like Duchenne.
Anecdotes vs. evidence
Max accessed his drug through a clinical trial. The treatment, then called SRP-9001, was developed by the pharmaceutical company Sarepta and is designed to replace
dystrophin, the protein missing in children with Duchenne
muscular dystrophy. The protein is thought to protect muscle cells from damage when the muscles contract. Without
it, muscles become damaged and start to degenerate.
The dystrophin protein has a huge genetic sequence—
it’s too long for the entire thing to fit into a virus, the usual
means of delivering new genetic material into a person’s
body. So the team at Sarepta designed a shorter version,
which they call micro-dystrophin. The code for the protein
is delivered by means of a single intravenous
infusion.
The company’s initial goal was to develop
it as a therapy for children between four and
seven with a diagnosis of Duchenne. And it
had a way to potentially fast-track the process.
Usually, before a drug can be approved,
it will go through several clinical trials. But
accelerated approval offers a shortcut for
companies that can show that their drug is
desperately needed, safe, and supported by
compelling preliminary evidence.
For this kind of approval, drug companies don’t need to show that a treatment has
improved anyone’s health—they just need
to show improvement in some biomarker
related to the disease (in Sarepta’s case,
the levels of the micro-dystrophin protein
in people’s blood).
There’s an important proviso: the company
must promise to continue studying the drug,
and to provide “confirmatory trial evidence.”
This process can work well. But in recent
years, it has been a “disaster,” says Diana
Zuckerman, president of the National
Center for Health Research, a nonprofit
that assesses research on health issues.
Zuckerman believes the bar of evidence for
accelerated approval has been dropping.
Many drugs approved via this process are later found
ineffective. Some have even been shown to leave people
worse off. For example, between 2009 and 2022, 48 cancer drugs received accelerated approval to treat 66 conditions—and 15 of those approvals have since been withdrawn.
Melflufen was one of these. The drug was granted accelerated approval for multiple myeloma in February 2021.
Just five months later, the FDA issued an alert following
the release of trial results suggesting that people taking
the drug had a higher risk of death. In October 2021, the
company that made the drug announced it was to be taken
off the market.
There are other examples. Take Makena, a treatment
meant to reduce the risk of preterm birth. The drug was
granted accelerated approval in 2011 on the basis of results
from a small trial. Larger, later studies suggested it didn’t
work after all. Earlier this year, the FDA withdrew approval
for the drug. But it had already been prescribed to hundreds of thousands of people—nearly 310,000 women
were given the drug between 2011 and 2020 alone.
And then there’s Aduhelm. The drug was developed as
a treatment for Alzheimer’s disease. When trial data was
presented to an FDA advisory committee, 10 of 11 panel
members voted against approval. The 11th was uncertain.
There was no convincing evidence that the drug slowed
cognitive decline, the majority of the members found.
“There was not any real evidence that this drug was going
to help patients,” says Zuckerman.
Despite that, the FDA gave Aduhelm accelerated
approval in 2021. The drug went on the market at a price
of $56,000 a year. Three of the committee members
resigned in response to the FDA’s approval. And in April
2022, the Centers for Medicare & Medicaid Services
announced that Medicare would only cover treatment
that was administered as part of a clinical trial. The case
demonstrates that accelerated approval is no guarantee
a drug will become easier to access.
The other important issue is cost. Before a drug is
approved, people might be able to get it through expanded
access—usually for free. But once the drug is approved,
many people who want it will have to pay. And new treatments—especially gene therapies—don’t tend to be cheap.
We’re talking hundreds of thousands, or even millions, of
dollars. “No patient or families should have to pay for a
drug that’s not proven to work,” says Zuckerman.
[Photo: More than two years after participating in a clinical trial testing a treatment for Duchenne muscular dystrophy, Max Wang is an active eight-year-old. However, the drug being studied, Sarepta’s SRP-9001, failed to perform better than placebo across the whole group of boys in the trial. Photo courtesy of Tao Wang]

[Photo: Will Roberts (left), who is now 10 years old, has been taking Sarepta’s Amondys 45 to treat his Duchenne muscular dystrophy since he was a year old, but he has seen little improvement. His parents, Ryan and Keyan, were hoping that SRP-9001 would be approved by the FDA. Photo courtesy of the Roberts family]

What about SRP-9001? On May 12, the FDA held an advisory committee meeting to assess whether the data
supported accelerated approval. During the nine-hour
virtual meeting, scientists, doctors, statisticians, ethicists,
and patient advocates presented the data collected so far,
and shared their opinions.
Sarepta had results from three clinical trials of the drug
in boys with Duchenne. Only one of the three—involving 41 volunteers aged four to seven—was randomized,
blinded, and placebo controlled.
Scientists will tell you that’s the only study you can
draw conclusions from. And unfortunately, that trial did
not go particularly well—by the end of 48 weeks, the
children who got the drug were not doing any better than
those who got a placebo.
But videos presented by parents whose children had
taken the drug told a different story.
Take the footage shared by Brent Furbee. In a video
clip taken before he got the gene therapy, Furbee’s son
Emerson is obviously struggling to get up the stairs. He
slowly swings one leg around while clinging to the banister, before dragging his other leg up behind him.
A second video, taken after the treatment, shows him
taking the stairs one foot at a time, with the speed you’d
expect of a healthy four-year-old. In a third, he is happily
pedaling away on his tricycle. Furbee told the committee
that Emerson, now six, could run faster, get
up more quickly, and perform better on tests
of strength and agility. “Emerson continues
to get stronger,” he said.
It was one of many powerful, moving
testimonies—and these stories appear to
have influenced the FDA’s voting committee,
despite many concerns raised about the drug.
The idea of providing the genetic code
for the body to make a shortened version of
dystrophin is based on evidence that people who have similarly short proteins have
a much milder form of muscular dystrophy
than those whose bodies produce little to
no dystrophin. But it’s uncertain whether
Sarepta’s protein, with its missing regions,
will function in the same way.
Louise Rodino-Klapac, executive vice
president, chief scientific officer, and head
of R&D at Sarepta, defends the drug: “The
totality of the evidence is what gives us great
confidence in the therapy.” She has an explanation for why the placebo-controlled trial
didn’t show a benefit overall. The groups of
six- to seven-year-olds receiving the drug and
the placebo were poorly matched “at baseline,”
she says. She also says that the researchers
saw a statistically significant result when
they focused only on the four- and five-year-olds studied.
But the difference is not statistically significant for
the results the trial was designed to collect. And there
are some safety concerns. While most of the boys developed only “mild” side effects, like vomiting, nausea, and
fever, a few experienced more serious, although temporary, problems. There were a total of nine serious complications among the 85 volunteers. One boy had heart
inflammation. Another developed an immune disease
that damages muscle fibers.
On top of all that, as things currently stand, receiving
one gene therapy limits future gene therapy options. That’s
because the virus used to deliver the therapy causes the
body to mount an immune response. Many gene therapies rely on a type called adeno-associated virus, or AAV.
If a more effective gene therapy that uses the same virus
comes along in the coming years, those who have taken
this drug won’t be able to take the newer treatment.
Despite all this, the committee voted 8–6 in favor of
granting the drug an accelerated approval. Many committee members highlighted the impact of the stories and
videos shared by parents like Brent Furbee.
“Now, I don’t know whether those boys got placebo
or whether they got the drug, but I suspect that they got
the drug,” a neurologist named Anthony Amato told the
audience.
“Those videos, anecdotal as they are … are substantial evidence of effectiveness,” said committee member
Donald B. Kohn, a stem-cell biologist.
The drugs don’t work?

Powerful as they are, individual experiences are just that. “If you look at the evidentiary hierarchy, anecdote is considered the lowest level of evidence,” says Bateman-House. “It’s certainly nowhere near clinical-trial-level evidence.”

This is not the way we should be approving drugs, says Zuckerman. And it’s not the first time Sarepta has had a drug approved on the basis of weak evidence, either.

The company has already received FDA approval to sell three other drugs for Duchenne, all of them designed to skip over faulty exons—bits of DNA that code for a protein. Such drugs should allow cells to make a longer form of a protein that more closely resembles dystrophin.

The first of these “exon-skipping” drugs, Exondys 51, was granted accelerated approval in 2016—despite the fact that the clinical trial was not placebo controlled and included only 12 boys. “I’ve never seen anything like it,” says Zuckerman. She points out that the study was far too small to be able to prove the drug worked. In her view, 2016 was “a turning point” for FDA approvals based on low-quality evidence—“It was so extreme,” she says.

Since then, three other exon-skipping drugs have received accelerated approval for Duchenne—two of them from Sarepta. A Sarepta spokesperson said a company-funded analysis showed that people with Duchenne who received Exondys 51 remained ambulatory longer and lived longer by 5.4 years—“data we would not have without that initial approval.”

But for many in the scientific community, that data still needs to be confirmed. “The clinical benefit still has not been confirmed for any of the four,” Mike Singer, a clinical reviewer in the FDA’s Office of Therapeutic Products, told the advisory committee in May. “All of them are wanted by the families, but none of them have ever been proven to work,” says Zuckerman.

Will Roberts is one of the boys taking an exon-skipping drug—specifically, Sarepta’s Amondys 45. Now 10, he was diagnosed with Duchenne when he was just one year old. His treatment involves having a nurse come to his home and inject him every five to 10 days. And it’s not cheap. While his parents have a specialist insurance policy that shields them from the cost, the price of a year’s worth of treatment is around $750,000.

Will’s mother, Keyan Roberts, a teacher in Michigan, says she can’t tell if the drug is helping him. Last year he was running around in their backyard, but this year he needs a power chair to get around at school. “We definitely didn’t see any gains in ability, and it’s hard to tell if it made his decline … a little less steep,” Roberts says.

The treatment comes with risks, too. The Amondys 45 website warns that 20% of people who get the drug experience adverse reactions, and that “potentially fatal” kidney damage has been seen in people treated with a similar drug.

Roberts says she is aware of the risks that come with taking drugs like Amondys. But she and her husband, Ryan, an IT manager, were still hoping that SRP-9001 would be approved by the FDA. For the Robertses and parents like them, part of the desire is based on the hope, no matter how slim, that their child might benefit.

“We really feel strongly that we’re in a position now where we’re seeing [Will’s] mobility decline, and we’re nervous that … he might not qualify to take it by the time it’s made available,” she said in a video call, a couple of weeks after the advisory committee meeting.

Selling hope

On June 22, just over a month after the committee meeting, the FDA approved SRP-9001, now called Elevidys. It will cost $3.2 million for the one-off treatment, before any potential discounts. For the time being, the approval is restricted to four- and five-year-olds. It was granted with a reminder to the company to complete the ongoing trials and report back on the results.

Sarepta maintains that there is sufficient evidence to support the drug’s approval. But this drug and others have been made available—at eye-wateringly high prices—without the strong evidence we’d normally expect for new medicines. Is it ever ethical to sell a drug when we don’t fully know whether it will work?

I put this question to Debra Miller, mother of Hawken Miller and founder of CureDuchenne. Hawken was diagnosed when he was five years old. “The doctor that diagnosed him basically told us that he was going to stop walking around 10 years old, and he would not live past 18,” she says. “‘There’s no treatment. There’s no cure. There’s nothing you can do. Go home and love your child.’”

She set up CureDuchenne in response. The organization is dedicated to funding research into potential treatments and cures, and to supporting people affected by the disease. It provided early financial support to Sarepta but does not have a current financial interest in the company. Hawken, now a content strategist for CureDuchenne, has never been eligible for a clinical trial.

Debra Miller says she’s glad that the exon-skipping drugs were approved. From her point of view, it’s about more than making a new drug accessible.
“[The approvals] drove innovation and attracted a lot of
attention to Duchenne,” she says. Since then, CureDuchenne
has funded other companies exploring next-generation
exon-skipping drugs that, in early experiments, seem to
work better than the first-generation drugs. “You have to
get to step one before you can get to step two,” she says.
Hawken Miller is waiting for the data from an ongoing phase 3 clinical trial of Elevidys. For the time being,
“from a data perspective, it doesn’t look great,” he says.
“But at the same time, I hear a lot of anecdotes from parents and patients who say it’s really helping a lot, and I
don’t want to discount what they’re seeing.”
Results were due in September—just three months
after the accelerated approval was granted. It might not
seem like much of a wait, but every minute is precious to
children with Duchenne. “Time is muscle” was the refrain
repeated throughout the advisory committee meeting.
“I wish that we had the time and the muscle to wait
for things that were more effective,” says Keyan Roberts,
Will’s mom. “But one of the problems with
this disease is that we might not have the
opportunity to wait to take one of those
other drugs that might be made available
years down the line.”
Doctors may end up agreeing that a
drug—even one that is unlikely to work—
is better than nothing. “In the American
psyche, that is the approach that [doctors
and] patients are pushed toward,” says
Holly Fernandez Lynch, a bioethicist at
the University of Pennsylvania. “We have
all this language that you’re ‘fighting against the disease,’
and that you should try everything.”
“I can’t tell you how many FDA advisory committee
meetings I’ve been to where the public-comment patients
are saying something like ‘This is giving me hope,’” says
Zuckerman. “Sometimes hope helps people do better. It
certainly helps them feel better. And we all want hope.
But in medicine, isn’t it better to have hope based on
evidence rather than hope based on hype?”
A desperate decision

A drug approved on weak data might offer nothing more than false hope at a high price, Zuckerman says: “It is not fair for patients and their families to [potentially] have to go into bankruptcy for a drug that isn’t even proven to work.”

The best way for people to access experimental treatments is still through clinical trials, says Bateman-House. Robertson, the health law expert, agrees, and adds that trials should be “bigger, faster, and more inclusive.” If a drug looks as if it’s working, perhaps companies could allow more volunteers to join the trial, for example.

Their reasoning is that people affected by devastating diseases should be protected from ineffective and possibly harmful treatments—even if they want them. Review boards assess how ethical clinical trials are before signing off on them. Participants can’t be charged for drugs they take in clinical trials. And they are carefully monitored by medical professionals during their participation.

That doesn’t mean people who are desperate for treatments are incapable of making good decisions. “They are stuck with bad choices,” says Fernandez Lynch.

This is also the case for ultra-novel treatments, says Robertson. At the start of trials, the best candidates for all-new experimental therapies may be those who are closer to death, he says: “It is quite appropriate to select patients who have less to lose, while nonetheless being sure not to exploit people who don’t have any good options.”

There’s another advantage to clinical trials. It’s hard to assess the effectiveness of a one-off treatment in any single individual. But clinical trials contribute valuable data that stands to benefit a patient community. Such data is especially valuable for treatments so new that there are few standards for comparison.
Hawken Miller says he would consider taking part
in an Elevidys clinical trial. “I’m willing to take on some
of that risk for the potential of helping other people,” he
says. “I think you’ll find that in [most of the Duchenne]
community, everyone’s very willing to participate in
clinical trials if it means helping kids get cured faster.”
When it comes to assessing the likelihood that Elevidys
will work, Will’s dad, Ryan Roberts, says he’s a realist.
“We’re really close to approaching the last chance—the
last years he’ll be ambulatory,” he says. For him as a dad,
he says, the efficacy concerns aren’t relevant. “We will
take the treatment because it’s going to be the only chance
we have … We are aware that we’re not being denied a
treatment that is a cure, or a huge game-changer. But
we are willing to take anything we can get in the short
window we have closing now.”
Jessica Hamzelou is a senior reporter at MIT
Technology Review.
ONLY HUMAN

The rise of the tech ethics congregation.

By Greg M. Epstein | Portrait by Matchull Summers
JUST BEFORE CHRISTMAS LAST YEAR, A PASTOR
PREACHED A GOSPEL OF MORALS OVER MONEY
TO SEVERAL HUNDRED MEMBERS OF HIS FLOCK.
Wearing a sport coat, angular glasses, and wired earbuds, he spoke animatedly into his laptop from his tiny
glass office inside a co-working space, surrounded by six
whiteboards filled with his feverish brainstorming.
Sharing a scriptural parable familiar to
many in his online audience—a group
assembled from across 48 countries,
many in the Global South—he explained
why his congregation was undergoing
dramatic growth in an age when the life
of the spirit often struggles to compete
with cold, hard capitalism.
“People have different sources of
motivation [for getting involved in a community],” he sermonized. “It’s not only
money. People actually have a deeper
purpose in life.”
Many of the thousands of people who’d
been joining his community were taking
the time and energy to do so “because
they care about the human condition, and
they care about the future of our democracy,” he argued. “That is not academic,”
he continued. “That is not theoretical.
That is talking about future generations,
that’s talking about your happiness, that’s
talking about how you see the world. This
is big … a paradigm shift.”
The leader in question was not an
ordained minister, nor even a religious
man. His increasingly popular community is not—technically—a church, synagogue, or temple. And the scripture
he referenced wasn’t from the Bible. It
was Microsoft Encarta vs. Wikipedia—
the story of how a movement of self-motivated volunteers defeated an army
of corporate-funded professionals in a
crusade to provide information, back
in the bygone days of 2009. “If you’re
young,” said the preacher, named David
Ryan Polgar, “you’ll need to google it.”
Polgar, 44, is the founder of All Tech Is Human, a nonprofit
organization devoted to promoting ethics and responsibility in
tech. Founded in 2018, ATIH is based in Manhattan but hosts a
growing range of in-person programming—social mixers, mentoring opportunities, career fairs, and job-seeking resources—in
several other cities across the US and beyond, reaching thousands. Such numbers would delight most churches.
Like other kinds of congregations, ATIH focuses on relationship-building: the staff invests much of its time, for example, in activities like curating its “Responsible Tech Organization” list, which
names over 500 companies in which community members can
get involved, and growing its responsible-tech talent pool, a list
of nearly 1,400 individuals interested in careers in the field. Such
programs, ATIH says, bring together many excellent but often
disconnected initiatives, all in line with the ATIH mission “to
tackle wicked tech & society issues and co-create a tech future
aligned with the public interest.”
[Photo: David Polgar, the founder of All Tech Is Human, on stage at a recent Responsible Tech Mixer event in New York City.]

The organization itself doesn’t often get explicitly political with op-eds or policy advocacy. Rather, All Tech Is Human’s underlying strategy is to quickly expand the “responsible-tech ecosystem.” In
other words, its leaders believe there are large numbers of individuals in and around the technology world, often from marginalized
backgrounds, who wish tech focused less on profits and more on
being a force for ethics and justice. These people will be a powerful force, Polgar believes, if—as the counterculture icon Timothy
Leary famously exhorted—they can “find the others.” If that sounds
like reluctance to take sides on hot-button issues in tech policy, or
to push for change directly, Polgar calls it an “agnostic” business
model. And such a model has real strengths, including the ability
to bring tech culture’s opposing tribes together under one big tent.
But as we’ll see, attempts to stay above the fray can cause
more problems than they solve.
Meanwhile, All Tech Is Human is growing so fast, with over
5,000 members on its Slack channel as of this writing, that if
it were a church, it would soon deserve the prefix “mega.” The
group has also consistently impressed me with its inclusiveness:
the volunteer and professional leadership of women and people
of color is a point of major emphasis, and speaker lineups are
among the most heterogeneous I’ve seen in any tech-related
endeavor. Crowds, too, are full of young professionals from
diverse backgrounds who participate in programs out of passion
and curiosity, not hope of financial gain. Well, at least attendees
don’t go to ATIH for direct financial gain; as is true with many
successful religious congregations, the organization serves as
an intentional incubator for professional networking.
Still, having interviewed several dozen attendees, I’m convinced
that many are hungry for communal support as they navigate a world
in which tech has become a transcendent force, for better or worse.
Growth has brought things to a turning point. ATIH now stands
to receive millions of dollars—including funds from large foundations and tech philanthropist demigods who once ignored it.
And Polgar now finds himself in a networking stratosphere with
people like Canadian prime minister Justin Trudeau, among other
prominent politicos. Will the once-humble community remain
dedicated to centering people on the margins of tech culture?
SO23-feature_Epstein.religion.indd 34
Or will monied interests make it harder to fight for the people
Christian theologians might call “the least of these”?
I first started looking into ATIH in late 2021, while researching
my forthcoming book Tech Agnostic: How Technology Became the
World’s Most Powerful Religion, and Why It Desperately Needs a
Reformation (MIT Press, 2024). The book project began because
I’d been coming across a striking number of similarities between
modern technological culture and religion, and the parallels felt
important, given my background. I am a longtime (nonreligious)
chaplain at both Harvard and MIT. After two decades immersed
in the world of faith, back in 2018 I gave up on what had been
my dream: to build a nonprofit “godless congregation” for the
growing population of atheists, agnostics, and the religiously
unaffiliated. Having started that work just before social media
mavens like Mark Zuckerberg began to speak of “connecting
the world,” I ultimately lost faith in the notion of building community around either religion or secularism when I realized that
technology had overtaken both.
Indeed, tech seems to be the dominant force in our economy,
politics, and culture, not to mention a daily obsession that can
increasingly look like an addiction from which some might plausibly seek the help of a higher power to recover. Tech culture
has long been known for its prophets (Jobs, Gates, Musk, et al.),
and tech as a whole is even increasingly oriented around moral
and ethical messages, such as Google’s infamous “Don’t be evil.”
The tech-as-religion comparison I’ve found myself drawing
is often unflattering to tech leaders and institutions. Techno-solutionism and related ideas can function as a kind of theology,
justifying harm in the here and now with the promise of a sweet
technological hereafter; powerful CEOs and investors can form
the center of a kind of priestly hierarchy, if not an outright caste
system; high-tech weapons and surveillance systems seem to
threaten an apocalypse of biblical proportions.
When I discovered ATIH, I was pleasantly surprised to find a
potentially positive example of the sort of dynamic I was describing.
I am the sort of atheist who admits that certain features of religion
can offer people real benefits. And ATIH seemed to be succeeding precisely because it genuinely operated like a secular, tech-ethics-focused version of a religious congregation. “It does work
that way,” Polgar acknowledged in February 2022, in the first of
our several conversations on the topic. Since then, I’ve continued
to admire ATIH’s communal and ethical spirit, while wondering
whether communities devoted explicitly to tech ethics might just
help bring about a reformation that saves tech from itself.
Along with admiration, I’ve also sought to determine whether
ATIH is worthy of our faith.
Why a congregation?

I discovered ATIH’s events in late 2021, first through the online Responsible Tech University Summit, a day-long program dedicated to exploring the intersections of tech ethics and campus life. (One of ATIH’s signature programs is its Responsible Tech University Network, which involves, among other things, a growing group of over 80 student “university ambassadors” who represent the organization on their campuses.) All the organization’s programs are organized around typical tech ethics themes, like “the business case for AI ethics,” but participants attend as much for the community as for the topic at hand.

Sarah Husain, who’d worked on Twitter’s Trust and Safety team until it was eliminated by Elon Musk, told me at a May 2022 event that several colleagues in her field had spoken highly of ATIH, recommending she attend. Chana Deitsch, an undergraduate business student who participates in ATIH’s mentorship program, says it not only helps with job leads and reference letters but provides a sense of confidence and belonging. Alex Sarkissian, formerly a Deloitte consultant and now a Buddhist chaplaincy student, feels that the organization has potential “to be a kind of spiritual community for me in addition to my sangha [Buddhist congregation].”

I’ve encountered mainly earnest and insightful members like these, people who come together for serious mutual support and ethical reflection and—non-trivially—fun around a cause I’ve come to hold dear. Granted, few ATIH participants, in my observation, hold C-level tech positions, which could undermine the organization’s claims that it has the ability to unite stakeholders toward effectual action … or perhaps it simply signifies a populism that could eventually put sympathizers in high places?

Despite my skepticism toward both theology and technology, ATIH has often given me the feeling that I’ve found my own tech tribe.

Growing pains

Polgar is a nerdily charismatic former lawyer who has been developing the ideas and networks from which the organization sprouted for over a decade. As a young professor of business law at a couple of small, under-resourced colleges in Connecticut in the early 2010s, he began pondering the ethics of technologies that had recently emerged as dominant and ubiquitous forces across society and culture. Adopting the title “tech ethicist,” he began to write a series of missives on digital health and the idea of “co-creating a better tech future.” His 2017 Medium post “All Tech Is Human,” about how technology design should be informed by more than robotic rationality or utility, generated enthusiastic response and led to the formal founding of the organization a year later.

The ATIH concept took a while to catch on, Polgar told me. He worked unpaid for three years and came “close to quitting.” But his background inspired perseverance. Born in 1979 in Cooperstown, New York, Polgar was a philosophical kid who admired Nikola Tesla and wanted to be an inventor. “Why can’t I start something big,” he remembers thinking back then, “even from a little place like this?”

Despite their growing influence, Polgar and the organization continue to emphasize their outsider status. ATIH, he argues, is building its following in significant part with people who, for their interest in ethical approaches to technology, feel as unjustly ignored as he and many of his upstate peers felt in the shadow of New York City.

ATIH’s model, says the organization’s head of partnerships, Sandra Khalil, is to offer not a “sage on the stage” but, rather, a “guide on the side.” Khalil, a veteran of the US Departments of State and Homeland Security, also came to the organization with an outsider’s pugnacity, feeling “severely underutilized” in previous roles as a non-lawyer intent on “challenging the status quo.”

Polgar, however, hardly shrinks from opportunities to influence tech discourse, whether through media interviews with outlets like the BBC World News or by joining advisory boards like TikTok’s content advisory council. ATIH admits, in its “Ten Principles,” that it draws both from grassroots models, which it says “have ideas but often lack power,” and from “top-down” ones, which can “lack a diversity of ideas” but “have power.” The organization does not ask for or accept membership fees from participants, relying instead on major donations solicited by Polgar and his team, who control decision-making. There hasn’t seemed to be a significant call for more democracy—yet.

The founder as a god?

Part of why I’m insisting ATIH is a congregation is that the group assembled around Polgar demonstrates a religious zeal for organizing and relationship-building as tools for advancing positive moral values. Case in point: Rebekah Tweed, ATIH’s associate director, once worked in an actual church, as a youth pastor; now she applies a skill set my field calls “pastoral care” to creating mutually supportive space for ethically minded techies.

In 2020, Tweed volunteered on ATIH’s first major public project, the Responsible Tech Guide, a crowdsourced document that highlighted the hundreds of people and institutions working in the field. After she formally joined the organization, it landed its first big-time donation: $300,000 over two years from the Ford Foundation, to pay her salary as well as Polgar’s. They were its first full-time employees.
Polgar was repeatedly rebuffed in early attempts to recruit large gifts, but of late, the growing ATIH team has received significant support from sources including Melinda French Gates’s Pivotal Ventures and about half a million dollars each from Schmidt Futures (the philanthropic fund of former Google CEO Eric Schmidt) and the Patrick J. McGovern Foundation (yet another tech billionaire’s fortune).

The question is: Can an organization that serves a truly inclusive audience, emphasizing humanity and ethics in its own name, afford to get in bed with Fortune 500 companies like Google and Microsoft and/or multibillionaires who will inevitably be motivated by a desire to seem ethical and responsible, even when they decidedly are not? Or rather, can it afford not to do so, when growth means the organization’s staff can grow (and earn a living wage)? And could such tensions someday cause a full-blown schism in the ATIH community?

[Photo: Not a “sage on the stage” but a “guide on the side”: ATIH head of partnerships Sandra Khalil moderates an event in London. Courtesy of All Tech Is Human]

The potential challenges first came to light for me at a May 2022 summit in New York. For the first time in several large ATIH events I had personally observed, the meeting featured an invited speaker employed by one of the world’s largest tech companies: Harsha Bhatlapenumarthy, a governance manager at Meta and also a volunteer leader in a professional association called Trust and Safety.
Bhatlapenumarthy—whose panel was called “Tech Policy &
Social Media: Where are we headed?”—avoided addressing any of
her employer’s recent controversies. Instead of offering any meaningful comment in response to Meta’s troubles over its handling
of things from pro-anorexia content to election misinformation,
she spoke only vaguely about its ethical responsibilities. The company, she said, was focused on “setting the content moderator up
for success.” Which is an interesting way to describe a situation
in which Meta had, for example, recently been sued for union
busting and human trafficking by content moderators in Kenya.
Several attendees were taken aback that Bhatlapenumarthy’s
advocacy for her powerful employer went essentially unchallenged
during the panel. Among them was Yael Eisenstat, Facebook’s
former global head of election integrity operations for political
advertising and the summit’s closing speaker. In a fireside chat
immediately following the panel in which Bhatlapenumarthy
participated, Eisenstat, who’d been a whistleblower against her
former employer, eloquently dismissed Bhatlapenumarthy’s
non-remarks. “I believe [Meta] doesn’t want this on their platform,” she said, referring to violent and deceptive content, “but
they will not touch their business model.” Eisenstat added that
she would feel “more encouraged” if companies would stop
“holding up the founder as a god.”
Eisenstat added to me later, by private message, that “sending
a more junior-level employee to speak one-directionally about
Meta’s vision of responsible tech is somewhat disingenuous.” In
inviting such a speaker, couldn’t ATIH reasonably be understood
to be implicated in the offense?
If Bhatlapenumarthy’s presence as a seeming mouthpiece for
Big Tech talking points had been an isolated incident, I might
have ignored it. But a few months later, I found myself wondering if a concerning pattern was emerging.
Digital Sunday school

In September 2022, I attended Building a Better Tech Future for Children, an ATIH event cohosted with the Joan Ganz Cooney Center at Sesame Workshop, a nonprofit research and innovation lab associated with the legendary children’s TV show Sesame Street. This struck me as a shrewd partnership for ATIH: every congregation needs a Sunday school. A community organization aspiring to the advancement of humanity and the betterment of the world will inevitably turn its thoughts to educating the next generation according to its values.

After a keynote from Elizabeth Milovidov, senior manager for digital child safety at the Lego Group, on designing digital experiences with children’s well-being in mind came a panel featuring speakers from influential players such as the Omidyar Network and TikTok, as well as young activists. The group discussed the risks and harms facing young people online, and the general tone was optimistic that various efforts to protect them would be successful, particularly if built upon one another. “Digital spaces can be a positive source in the lives of young people,” said the moderator, Mina Aslan.

Also on the panel was Harvard Medical School professor Michael Rich, a self-proclaimed “mediatrician”—a portmanteau of “media” and “pediatrician.” Rich made good points—for example, stressing the importance of asking kids what they’re hoping for from tech, not just talking about the risks they confront. But one comment triggered my spider-sense: when he said that today’s tech is like his generation’s cigarettes, in that you can’t just tell kids “Don’t do it.”

The analogy between tobacco and social media is at best a bizarre one to draw. Millions of young people became smokers not just through peer pressure, but because for decades, Big Tobacco’s whole business model was built on undue corporate influence and even outright lying, including paying influential doctors and scientists to downplay the death they dealt. Surely ATIH’s leadership would want to avoid any hint that such practices would be acceptable in tech?
Tobacco eventually became among the most heavily regulated industries in history, with results including, famously, the
US surgeon general’s warnings on tobacco ads and packages.
Now the current surgeon general, Vivek Murthy, has warned
there is “growing evidence” that social media is “associated with
harm to young people’s mental health.” But on the panel (and
in his commentary elsewhere), Rich only briefly acknowledged
such potential harms, forgoing talk of regulating social media
for the idea of cultivating “resilience” in the industry’s millions
of young customers.
To be clear, I agree with Rich that it is a losing strategy to
expect young people to completely abstain from social media.
But I fear that tech and our broader society alike are not taking
nearly enough ethical responsibility for protecting children from
what can be powerful engines of harm. And I was disappointed
to see Rich’s relatively sanguine views not only expressed but
centered at an ATIH meeting.
“I see your concern,” Polgar later told me when I asked him
about my apprehensions. Raising his brow with a look of surprise
when I wondered aloud whether Rich’s funding sources might
have affected the commentary he offered for ATIH’s audience,
Polgar made clear he did not agree with all the doctor’s views.
He also admitted it is his “worst fear” that his organization might
be co-opted by funding opportunities that make it harder “to be
a speaker of truth.”
“Don’t become a parody of yourself,” he said, seeming to turn
the focus of his homily inward.
Team human
S
everal months after the Sesame Workshop event, I attended
a crowded mixer at ATIH’s now-regular monthly venue, the
Midtown Manhattan offices of the VC firm Betaworks, with a
very different kind of speaker: the tech critic Douglas Rushkoff,
a freethinker who has often spoken of the need for a kind of
secular faith in our common humanity in the face of tech capitalism’s quasi-religious extremism. Polgar is a longtime admirer
of his work.
How much responsibility?
“All tech bros are human,” Rushkoff cracked, launching into an enthusiastically received talk. Fresh off a publicity tour for a book about tech billionaires buying luxury bunkers to escape a potential doomsday of their own making, Rushkoff provided a starkly antiauthoritarian contrast to the speakers I’d taken issue with at the earlier events.
Rich’s response to questions I’d asked after his panel was, essentially, that parents ought to channel their energies into making “better choices” around tech, which—conveniently for some of the doctor’s corporate sponsors—lays the responsibility for children’s safety on the parents instead of the tech industry. His lab, I later learned, raised nearly $6 million in 2022, at least partly through grants from Meta, TikTok, and Amazon. When TikTok CEO Shou Chew testified before the US Congress in March 2023, he cited Rich’s lab—and only Rich’s lab—as an example of how TikTok used science and medicine to protect minors. Does this represent a conflict of interest—and therefore a serious ethical failing on the part of both Rich and ATIH for platforming him? I don’t know. I do worry, though, that there’s something inhumane in Rich’s emphasis on building kids’ “resilience” rather than interrogating why they should have to be so resilient against tech in the first place.
What kind of institution does ATIH want to be? One that pushes back against the powerful, or one that upholds a corporate-friendly version of diversity, allowing its wealthy sponsors to remain comfortable at (almost) all times? As the Gospel of Matthew says, no man (or organization of “humans”) can serve two masters.
Asking around ATIH’s network about my concerns, I found ambivalence. “I do believe it is possible to do research sponsored by companies ethically,” said Justin Hendrix, an occasional ATIH participant and editor of Tech Policy Press, a wonky journal in which academics and others tend to critique established tech narratives. “But it is right to scrutinize it for signs of impropriety.”
Can an organization that serves a truly inclusive audience afford to get in bed with Fortune 500 companies and/or multibillionaires who will inevitably be motivated by a desire to seem ethical?
How much responsibility should a “responsible tech” organization like ATIH take—or not—for inviting speakers with corporate ties, especially when it is not fully open with its audience about such ties? How obligated is ATIH to publicly interrogate the conclusions of such speakers?
Ultimately, I don’t know whether ATIH will succeed in its attempts to serve what Rushkoff would call “team human” rather than becoming an accessory to the overwhelming wealth tech can generate by seeming to make humanity commodifiable and, ultimately, redundant. I do, however, continue to believe that building a more humane tech future will require communal support, because none of us can do it alone.
I chose the theme of tech agnosticism for my book in part because I am often reminded that I truly don’t know—and neither do you—when or where tech’s enormous powers might actually do the good they purport to do. But I suspect we’re going to need a lot more of what Neil Postman’s 1992 book Technopoly, an early exploration of the theme of tech-as-religion and a precursor to the techlash, called “loving resistance fighters.” While I lack prophetic abilities to know whether Polgar and co. will help spark such a resistance, the potential is genuinely there. In a participatory congregation, one can always worry about co-option, as even Polgar himself admits he does; but isn’t it also the responsibility of each of us to actively help keep our communities accountable to their own ethical values?
Let’s maintain our skepticism, while hoping the ethical tech congregation gives us continued reason to keep the faith.
Greg M. Epstein serves as the humanist chaplain at Harvard University and MIT and as the convener for ethical life at MIT’s Office of Religious, Spiritual, and Ethical Life.
WHAT HAPPENED TO KIVA
Hundreds of lenders are protesting the changes at the microfinance funder. Is their strike really about Kiva, or about how much control Americans should expect over their international aid?
By Mara Kardas-Nelson
Illustration by Andrea D’Aquino
One morning in August 2021, as she had nearly every
morning for about a decade, Janice Smith opened her
computer and went to Kiva.org, the website of the San
Francisco–based nonprofit that helps everyday people
make microloans to borrowers around the world. Smith,
who lives in Elk River, Minnesota, scrolled through profiles of bakers in
Mexico, tailors in Uganda, farmers in Albania. She loved the idea that,
one $25 loan at a time, she could fund entrepreneurial ventures and
help poor people help themselves.
But on this particular morning, Smith
noticed something different about Kiva’s
website. It was suddenly harder to find key
information, such as the estimated interest
rate a borrower might be charged—information that had been easily accessible just
the day before and felt essential in deciding
who to lend to. She showed the page to
her husband, Bill, who had also become a
devoted Kiva lender. Puzzled, they reached
out to other longtime lenders they knew.
Together, the Kiva users combed through
blog posts, press releases, and tax filings,
but they couldn’t find a clear explanation
of why the site looked so different. Instead,
they learned about even bigger shifts—
shifts that shocked them.
Kiva connects people in wealthier
communities with people in poorer ones
through small, crowdfunded loans made
to individuals through partner companies
and organizations around the world. The
individual Kiva lenders earn no interest;
money is given to microfinance partners
for free, and only the original amount is
returned. Once lenders get their money
back, they can choose to lend again and
again. It’s a model that Kiva hopes will
foster a perennial cycle of microfinance
lending while requiring only a small outlay
from each person.
This had been the nonprofit’s bread
and butter since its founding in 2005. But
now, the Smiths wondered if things were
starting to change.
The Smiths and their fellow lenders
learned that in 2019 the organization had
begun charging fees to its lending partners.
Kiva had long said it offered zero-interest
funding to microfinance partners, but the
Smiths learned that the recently instituted
fees could reach 8%. They also learned
about Kiva Capital, a new entity that allows
large-scale investors—Google is one—to
make big investments in microfinance
companies and receive a financial return.
The Smiths found this strange: thousands
of everyday lenders like them had been
offering loans return-free for more than
a decade. Why should Google now profit
off a microfinance investment?
The Kiva users noticed that the changes
happened as compensation to Kiva’s top
employees increased dramatically. In
2020, the CEO took home over $800,000.
Combined, Kiva’s top 10 executives made
nearly $3.5 million in 2020. In 2021, nearly
half of Kiva’s revenue went to staff salaries.
Considering all the changes, and the
eye-popping executive compensation, “the
word that kept coming up was ‘shady,’” Bill
Smith told me. “Maybe what they did was
legal,” he said, “but it doesn’t seem fully
transparent.” He and Janice felt that the
organization, which relied mostly on grants
and donations to stay afloat, now seemed
more focused on how to make money than
how to create change.
Kiva, on the other hand, says the changes
are essential to reaching more borrowers. In
an interview about these concerns, Kathy
Guis, Kiva’s vice president of investments,
told me, “All the decisions that Kiva has
made and is now making are in support
of our mission to expand financial access.”
In 2021, the Smiths and nearly 200 other
lenders launched a “lenders’ strike.” More
than a dozen concerned lenders (as well
as half a dozen Kiva staff members) spoke
to me for this article. They have refused to
lend another cent through Kiva, or donate
to the organization’s operations, until the
changes are clarified—and ideally reversed.
When Kiva was founded in 2005,
by Matt Flannery and Jessica
Jackley, a worldwide craze for
microfinance—sometimes called microcredit—was at its height. The UN had
dubbed 2005 the “International Year
of Microcredit”; a year later, in 2006,
Muhammad Yunus and the Grameen Bank
he had founded in the 1980s won the Nobel
Peace Prize for creating, in the words of the
Nobel Committee, “economic and social
development from below.” On a trip to East
Africa, Flannery and Jackley had a lightbulb
moment: Why not expand microfinance
by helping relatively wealthy individuals
in places like the US and Europe lend to
relatively poor businesspeople in places like Tanzania and Kenya? They didn’t think the loans Kiva facilitated should come from grants or donations: the money, they reasoned, would then be limited, and eventually run out. Instead, small loans—as little as $25—would be fully repayable to lenders.
Some lenders were disappointed to learn that loans don’t go directly to the borrowers featured on Kiva’s website. Instead, they are pooled together with others’ contributions and sent to partner institutions to distribute.
Connecting wealthier individuals to
poorer ones was the “peer-to-peer” part of
Kiva’s model. The second part—the idea
that funding would be sourced through the
internet via the Kiva.org website—took
inspiration from Silicon Valley. Flannery
and another Kiva cofounder, Premal Shah,
both worked in tech—Flannery for TiVo,
Shah for PayPal. Kiva was one of the first
crowdfunding platforms, launched ahead
of popular sites like GoFundMe.
But Kiva is less direct than other crowdfunding sites. Although lenders “choose”
borrowers through the website, flipping
through profiles of dairy farmers and fruit
sellers, money doesn’t go straight to them.
Instead, the loans that pass through Kiva
are bundled together and sent to one of
the partnering microfinance institutions.
After someone in the US selects, say, a
female borrower in Mongolia, Kiva funds
a microfinance organization there, which
then lends to a woman who wants to set
up a business.
Even though the money takes a circuitous route, the premise of lending to
an individual proved immensely effective. Stories about Armenian bakers and
Moroccan bricklayers helped lenders like
the Smiths feel connected to something
larger, something with purpose and meaning. And because they got their money
back, while the feel-good rewards were
high, the stakes were low. “It’s not charity,”
the website still emphasizes today. “It’s a
loan.” The organization covered its operating expenses with funding from the US
government and private foundations and
companies, as well as donations from individual lenders, who could add a tip on top
of their loan to support Kiva’s costs.
This sense of individual connection and
the focus on facilitating loans rather than
donations was what initially drew Janice
Smith. She first heard of microfinance
through Bill Clinton’s book Giving, and then
again through Oprah Winfrey—Kiva.org
was included as one of “Oprah’s Favorite
Things” in 2010. Smith was particularly
enticed by the idea that she could re-lend
the same $25 again and again: “I loved looking through borrower profiles and feeling
like I was able to help specific people. Even
when I realized that the money was going
to a [microfinance lender]”—not directly to
a borrower—“it still gave me a feeling of a
one-on-one relationship with this person.”
Kiva’s easy-to-use website and focus on
repayments helped further popularize the
idea of small loans to the poor. For many
Americans, if they’ve heard of microfinance
at all, it’s because they or a friend or family
member have lent through the platform. As
of 2023, according to a Kiva spokesperson,
2.4 million people from more than 190
countries have done so, ultimately reaching more than 5 million borrowers in 95
countries. The spokesperson also pointed
to a 2022 study of 18,000 microfinance
customers, 88% of whom said their quality of life had improved since accessing a
loan or another financial service. A quarter said the loans and other services had
increased their ability to invest and grow
their business.
But Kiva has also long faced criticism, especially when it comes to
transparency. There was the obvious issue that the organization suggests a
direct connection between Kiva.org users
and individual borrowers featured on the
site, a connection that does not actually
exist. But there were also complaints that
the interest rates borrowers pay were not
disclosed. Although Kiva initially did not
charge fees to the microfinance institutions
it funneled money through, the loans to the
individual borrowers do include interest.
The institutions Kiva partners with use that
to cover operational costs and, sometimes,
make a profit.
Critics were concerned about this lack
of disclosure given that interest rates on
microfinance loans can reach far into the
double digits—for more than a decade,
some have even soared above 100%.
(Microlenders and their funders have long
argued that interest rates are needed to
make funding sustainable.) A Kiva spokesperson stressed that the website now mentions “average cost to borrower,” which
is not the interest rate a borrower will
pay but a rough approximation. Over the
years, Kiva has focused on partnering with
“impact-first” microfinance lenders—those
that charge low interest rates or focus on
loans for specific purposes, such as solar
lights or farming.
Critics also point to studies showing
that microfinance has a limited impact
on poverty, despite claims that the loans
can be transformative for poor people. For
those who remain concerned about microfinance overall, the clean, easy narrative
Kiva promotes is a problem. By suggesting
that someone like Janice Smith can “make
a loan, change a life,” skeptics charge, the
organization is effectively whitewashing a
troubled industry accused of high-priced
loans and harsh collection tactics that have
reportedly led to suicides, land grabs, and
a connection to child labor and indebted
servitude.
Over her years of lending through
Kiva.org, Smith followed some of
this criticism, but she says she was
“sucked in” from her first loan. She was so
won over by the mission and the method
that she soon became, in her words, a
“Kivaholic.” Lenders can choose to join
“teams” to lend together, and in 2015 she
launched one, called Together for Women.
Eventually, the team would include nearly
2,500 Kiva lenders—including one who,
she says, put his “whole retirement” into
Kiva, totaling “millions of dollars.”
Smith soon developed a steady routine.
She would open her computer first thing in
the morning, scroll through borrowers, and
post the profiles of those she considered
particularly needy to her growing team,
encouraging support from other lenders. In
2020, several years into her “Kivaholicism,”
Kiva invited team captains like her to join
regular calls with its staff, a way to disseminate information to some of the most active
members. At first, these calls were cordial.
But in 2021, as lenders like Smith noticed
changes that concerned them, the tone
of some conversations changed. Lenders
wanted to know why the information on
Kiva’s website seemed less accessible. And
then, when they didn’t get a clear answer,
they pushed on everything else, too: the fees
to microfinance partners, the CEO salaries.
In 2021 Smith’s husband, Bill, became
captain of a new team calling itself Lenders
on Strike, which soon had nearly 200 concerned members. The name sent a clear
message: “We’re gonna stop lending until
you guys get your act together and address
the stuff.” Even though they represented
a small fraction of those who had lent
through Kiva, the striking members had
been involved for years, collectively lending
millions of dollars—enough, they thought,
to get Kiva’s attention.
On the captains’ calls and in letters, the
strikers were clear about a top concern: the
fees now charged to microfinance institutions Kiva works with. Wouldn’t the fees
make the loans more expensive to the
borrowers? Individual Kiva.org lenders
still expected only their original money
back, with no return on top. If the money
wasn’t going to them, where exactly would
it be going?
On one call, the Smiths recall, staffers
explained that the fees were a way for Kiva
to expand. Revenue from the fees—potentially millions of dollars—would go into
Kiva’s overall operating budget, covering
everything from new programs to site visits
to staff salaries.
But on a different call, Kiva’s Kathy Guis
acknowledged that the fees could be bad
for poor borrowers. The higher cost might
be passed down to them; borrowers might
see their own interest rates, sometimes
already steep, rise even more. When I
spoke to Guis in June 2023, she told me
those at Kiva “haven’t observed” a rise in
borrowers’ rates as a direct result of the
fees. Because the organization essentially
acts as a middleman, it would be hard to
trace this. “Kiva is one among a number of
funding sources,” Guis explained—often,
in fact, a very small slice of a microlender’s
overall funding. “And cost of funds is one
among a number of factors that influence
borrower pricing.” A Kiva spokesperson
said the average fee is 2.53%, with fees of
8% charged on only a handful of “longer-term, high-risk loans.”
The strikers weren’t satisfied: it felt
deeply unfair to have microfinance lenders, and maybe ultimately borrowers, pay
for Kiva’s operations. More broadly, they
took issue with new programs the revenue was being spent on. Kiva Capital, the
new return-seeking investment arm that
Google has participated in, was particularly concerning. Several strikers told me
that it seemed strange, if not unethical, for
an investor like Google to be able to make
money off microfinance loans when everyday Kiva lenders had expected no return
for more than a decade—a premise that
Kiva had touted as key to its model.
A Kiva spokesperson told me investors
“are receiving a range of returns well below
a commercial investor’s expectations for
emerging-market debt investments,” but
did not give details. Guis said that thanks
in part to Kiva Capital, Kiva “reached 33%
more borrowers and deployed 33% more
capital in 2021.” Still, the Smiths and other
striking lenders saw the program less as
an expansion and more as a departure
from the Kiva they had been supporting
for years.
Another key concern, strikers told me,
is Kiva US, a separate program that offers
zero-interest loans to small businesses
domestically. Janice Smith had no fundamental problem with the affordable rates,
but she found it odd that an American
would be offered 0% interest while borrowers in poorer parts of the world were
being charged up to 70%, according to
the estimates posted on Kiva’s website. “I
don’t see why poor people in Guatemala
should basically be subsidizing relatively
rich people here in Minnesota,” she told
me. Guis disagreed, telling me, “I take
issue with the idea that systematically marginalized communities in the US are less
deserving.” She said that in 2022, nearly
80% of the businesses that received US
loans were “owned by Black, Indigenous,
and people of color.”
After months of discussions, the strikers
and Kiva staff found themselves at loggerheads. “They feel committed to fees as a
revenue source, and we feel committed to
the fact that it’s inappropriate,” Bill Smith
told me. Guis stressed that Kiva had gone
through many changes throughout its 18
years—the fees, Kiva Capital, and Kiva
US being just a few. “You have to evolve,”
she said.
The fees and the returns-oriented
Kiva Capital felt strange enough.
But what really irked the Lenders on
Strike was how much Kiva executives were
being paid for overseeing those changes.
Lenders wanted to know why, according
to Kiva’s tax return, roughly $3.5 million
had been spent on executive compensation
in 2020—nearly double the amount a few
years previously. Bill Smith and others I
spoke to saw a strong correlation: at the
same time Kiva was finding new ways to
make money, Kiva’s leadership was bringing home more cash.
The concerned lenders weren’t the only
ones to see a connection. Several employees
I spoke to pointed to questionable decisions
made under the four-year tenure of Neville
Crawley, who was named CEO in 2017 and
left in 2021. Crawley made approximately
$800,000 in 2020, his last full year at the
organization, and took home just under
$750,000 in 2021, even though he left the
position in the middle of the year. When
I asked Kathy Guis why Crawley made so
much for about six months of work, she
said she couldn’t answer but would pass
that question along to the board.
Afterward, I received a written response
that did not specifically address CEO compensation, instead noting in part, “As part
of Kiva’s commitment to compensation best
practices, we conduct regular org-wide
compensation fairness research, administer salary surveys, and consult market data
from reputable providers.” Chris Tsakalakis,
who took over from Crawley, earned more
than $350,000 in 2021, for about half a year
of work. (His full salary and that of Vishal
Ghotge, his successor and Kiva’s newest
CEO, are not yet publicly available in Kiva’s
tax filings, nor would Kiva release these
numbers to us when we requested them.)
In 2021, nearly $20 million of Kiva’s $42
million in revenue went to salaries, benefits, and other compensation.
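(The arithmetic behind the lenders’ “nearly half” claim is simple: $20 million of $42 million works out to roughly 48% of revenue.)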
According to the striking lenders, Kiva’s
board explained that as a San Francisco–
based organization, it needed to attract top
talent in a field, and a city, dominated by
tech, finance, and nonprofits. The last three
CEOs have had a background in business
and/or tech; Kiva’s board is stacked with
those working at the intersection of tech,
business, and finance and headed by Julie
Hanna, an early investor in Lyft and other
Silicon Valley companies. This was especially necessary, the board argued, as Kiva
began to launch new programs like Kiva
Capital, as well as Protocol, a blockchain-enabled credit bureau launched in Sierra
Leone in 2018 and then closed in 2022.
The Smiths and other striking lenders
didn’t buy the rationale. The leaders of other
microlenders—including Kiva partners—
make far less. For example, the president
and CEO of BRAC USA, a Kiva partner and
one of the largest nonprofits in the world,
made just over $300,000 in 2020—not only
less than what Kiva’s CEO earns, but also
below what Kiva’s general counsel, chief
investment officer, chief strategy officer,
executive vice president of engineering,
and chief officer for strategic partnerships
were paid in 2021, according to public filings. Julie Hanna, the executive chair of
Kiva’s board, made $140,000 for working
10 hours a week in 2021. Premal Shah,
one of the founders, took home roughly
$320,000 as “senior consultant” in 2020.
Even among other nonprofits headquartered in expensive American cities, Kiva’s
CEO salary is high. For example, the head
of the Sierra Club, based in Oakland, made
$500,000 in 2021. Meanwhile, the executive director of Doctors Without Borders
USA, based in New York City, had a salary of $237,000 in 2020, the same year
that the Kiva top executive made roughly
$800,000—despite 2020 revenue of $558
million, compared with Kiva’s $38 million.
The striking lenders kept pushing—on
calls, in letters, on message boards—and
the board kept pushing back. They had
given their rationale, about the salaries
and all the other changes, and as one Kiva
lender told me, it was clear “there would
be no more conversation.” Several strikers
I spoke to said it was the last straw. This
was, they realized, no longer their Kiva.
Someone taking home nearly a million
dollars a year was steering the ship, not
them and their $25 loans.
The Kiva lenders’ strike is concentrated in Europe and North America.
But I wanted to understand how the
changes, particularly the new fees charged
to microfinance lenders, were viewed
by the microfinance organizations Kiva
works with.
So I spoke to Nurhayrah Sadava, CEO
of VisionFund Mongolia, who told me she
preferred the fees to the old Kiva model.
Before the lending fees were introduced,
money was lent from Kiva to microfinance
organizations in US dollars. The partner
organizations then paid the loan back in
dollars too. Given high levels of inflation,
instability, and currency fluctuations in
poorer countries, that meant partners might
effectively pay back more than they had
taken out.
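A hypothetical illustration, with invented numbers: a partner that borrowed $100,000 when its currency traded at 1,000 to the dollar owed the equivalent of 100 million in local terms. If the currency slid to 1,200 per dollar by repayment, settling the same $100,000 debt would cost 120 million locally, an effective 20% surcharge before any interest.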
But with the fees, Sadava told me, Kiva
now took on the currency risk, with partners paying a little more up front. Sadava
saw this as a great deal, even if it looked
“shady” to the striking lenders. What’s
more, the fees—around 7% to 8% in the case
of VisionFund Mongolia—were cheaper
than the organization’s other options: their
only alternatives were borrowing from
microfinance investment funds primarily
based in Europe, which charged roughly
20%, or another VisionFund Mongolia
lender, which charges the organization
14.5%.
Sadava told me that big international
donors aren’t interested in funding their
microfinance work. Given the context,
VisionFund Mongolia was happy with the
new arrangement. Sadava says the relatively low cost of capital allowed them to
launch “resourcefulness loans” for poor
businesswomen, who she says pay 3.4%
a month.
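(A rough sketch of what that rate means over a year, assuming it compounds monthly: (1 + 0.034)^12 - 1 ≈ 0.494, or about 49%; applied as simple interest instead, 12 × 3.4% = 40.8%. Either way, it lands well into the double digits annually.)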
VisionFund Mongolia’s experience isn’t
necessarily representative—it became a
Kiva partner after the fees were instituted,
and it works in a country where it is particularly difficult to find funding. Still, I was
surprised by how resoundingly positive
Sadava was about the new model, given
the complaints I’d heard from dozens of
aggrieved Kiva staffers and lenders. That
got me thinking about something Hugh
Sinclair, a longtime microfinance staffer
and critic, told me a few years back: “The
client of Kiva is the American who gets to
feel good, not the poor person.”
In a way, by designing the Kiva.org website primarily for the Western funder, not
the faraway borrower, Kiva created the
conditions for the lenders’ strike.
For years, Kiva has encouraged the feeling of a personal connection between lenders and borrowers, a sense that through
the organization an American can alter
the trajectory of a life thousands of miles
away. It’s not surprising, then, that the
changes at Kiva felt like an affront. (One
striker cried when he described how much
faith he had put into Kiva, only for Kiva
to make changes he saw as morally compromising.) They see Kiva as their baby.
So they revolted.
Kiva now seems somewhat in limbo.
It’s still advertising its old-school, anyone-can-be-a-lender model on Kiva.org, while
also making significant operational changes
(a private investing arm, the promise of
blockchain-enabled technology) that
are explicitly inaccessible to everyday
Americans—and employing high-flying
CEOs with CVs and pedigrees that might
feel distant, if not outright off-putting, to
them. If Kiva’s core premise has been its
accessibility to people like the Smiths, it
is now actively undermining that premise,
taking a chance that expansion through
more complicated means will be better for
microfinance than honing the simplistic
image it’s been built on.
Several of the striking lenders I spoke
to were primarily concerned that the Kiva
model had been altered into something
they no longer recognized. But Janice
Smith, and several others, had broader
concerns: not just about Kiva, but about
the direction the whole microfinance sector was taking. In confronting her own
frustrations with Kiva, Smith reflected on
criticisms she had previously dismissed. “I
think it’s an industry where, depending on
who’s running the microfinance institution
and the interaction with the borrowers, it
can turn into what people call a ‘payday
loan’ sort of situation,” she told me. “You
don’t want people paying 75% interest and
having debt collectors coming after them
for the rest of their lives.” Previously, she
trusted that she could filter out the most
predatory situations through the Kiva
website, relying on information like the
estimated interest rate to guide her decisions. As information has become harder
to come by, she’s had a harder time feeling
confident in the terms the borrowers face.
In January 2022, Smith closed the
2,500-strong Together for Women group
and stopped lending through Kiva. Dozens
of other lenders, her husband included,
have done the same.
While these defectors represent a tiny
fraction of the 2 million people who have
used the website, they were some of its
most dedicated lenders: of the dozen I
spoke to, nearly all had been involved for
nearly a decade, some ultimately lending
tens of thousands of dollars. For them, the
dream of “make a loan, change a life” now
feels heartbreakingly unattainable.
Smith calls the day she closed her team
“one of the saddest days of my life.” Still,
the decision felt essential: “I don’t want
to be one of those people that’s more like
an impact investor who is trying to make
money off the backs of the poorer.”
“I understand that I’m in the minority
here,” she continued. “This is the way
[microfinance is] moving. So clearly people feel it’s something that’s acceptable to
them, or a good way to invest their money.
I just don’t feel like it’s acceptable to me.”
Mara Kardas-Nelson is the author of
a forthcoming book on the history of
microfinance, We Are Not Able to Live
in the Sky (Holt, 2024).
AI-ASSISTED WARFARE
If a machine tells you when to pull the trigger, who is ultimately responsible?
By Arthur Holland Michel
Illustrations by Yoshi Sodeoka
In a near-future war—one that might begin
tomorrow, for all we know—a soldier takes
up a shooting position on an empty rooftop. His unit has been fighting through the
city block by block. It feels as if enemies
could be lying in silent wait behind every
corner, ready to rain fire upon their marks
the moment they have a shot.
Through his gunsight, the soldier scans
the windows of a nearby building. He
notices fresh laundry hanging from the balconies. Word comes in over the radio that
his team is about to move across an open
patch of ground below. As they head out,
a red bounding box appears in the top left
corner of the gunsight. The device’s computer vision system has flagged a potential
target—a silhouetted figure in a window
is drawing up, it seems, to take a shot.
The soldier doesn’t have a clear view, but
in his experience the system has a superhuman capacity to pick up the faintest tell
of an enemy. So he sets his crosshair upon
the box and prepares to squeeze the trigger.
In a different war, also possibly just over
the horizon, a commander stands before a
bank of monitors. An alert appears from a
chatbot. It brings news that satellites have
picked up a truck entering a certain city
block that has been designated as a possible staging area for enemy rocket launches.
The chatbot has already advised an artillery unit, which it calculates as having the
highest estimated “kill probability,” to take
aim at the truck and stand by.
According to the chatbot, none of the
nearby buildings is a civilian structure,
though it notes that the determination
has yet to be corroborated manually. A
drone, which had been dispatched by
the system for a closer look, arrives on
scene. Its video shows the truck backing
into a narrow passage between two compounds. The opportunity to take the shot
is rapidly coming to a close.
For the commander, everything now
falls silent. The chaos, the uncertainty, the
cacophony—all reduced to the sound of
a ticking clock and the sight of a single
glowing button:
“APPROVE FIRE ORDER.”
To pull the trigger—or, as the
case may be, not to pull it. To hit the
button, or to hold off. Legally—and
ethically—the role of the soldier’s
decision in matters of life and death
is preeminent and indispensable.
Fundamentally, it is these decisions
that define the human act of war.
It should be of little surprise,
then, that states and civil society have taken up the question of
intelligent autonomous weapons—
weapons that can select and fire
upon targets without any human
input—as a matter of serious
concern. In May, after close to a
decade of discussions, parties to
the UN’s Convention on Certain
Conventional Weapons agreed,
among other recommendations,
that militaries using them probably need to “limit the duration,
geographical scope, and scale of
the operation” to comply with the
laws of war. The line was nonbinding, but it was at least an acknowledgment that a human has to play
a part—somewhere, sometime—in
the immediate process leading up
to a killing.
But intelligent autonomous
weapons that fully displace human
decision-making have (likely) yet
to see real-world use. Even the
“autonomous” drones and ships
fielded by the US and other powers are used under close human
supervision. Meanwhile, intelligent
systems that merely guide the hand
that pulls the trigger have been
gaining purchase in the warmaker’s tool kit. And they’ve quietly
become sophisticated enough to
raise novel questions—ones that
are trickier to answer than the well-covered wrangles over killer robots
and, with each passing day, more
urgent: What does it mean when
a decision is only part human and
part machine? And when, if ever,
is it ethical for that decision to be
a decision to kill?
For a long time, the idea of
supporting a human decision by computerized means
wasn’t such a controversial prospect.
Retired Air Force lieutenant general
Jack Shanahan says the radar on
the F-4 Phantom fighter jet he flew
in the 1980s was a decision aid of
sorts. It alerted him to the presence
of other aircraft, he told me, so that
he could figure out what to do about
them. But to say that the crew and
the radar were coequal accomplices
would be a stretch.
That has all begun to change.
“What we’re seeing now, at least in
the way that I see this, is a transition to a world [in] which you need
to have humans and machines …
operating in some sort of team,”
says Shanahan.
The rise of machine learning, in
particular, has set off a paradigm
shift in how militaries use computers to help shape the crucial
decisions of warfare—up to, and
including, the ultimate decision.
Shanahan was the first director of
Project Maven, a Pentagon program that developed target recognition algorithms for video footage
from drones. The project, which
kicked off a new era of American
military AI, was launched in 2017
after a study concluded that “deep
learning algorithms can perform at
near-human levels.” (It also sparked
controversy—in 2018, more than
3,000 Google employees signed a
letter of protest against the company’s involvement in the project.)
With machine-learning-based
decision tools, “you have more
apparent competency, more
breadth” than earlier tools afforded,
says Matt Turek, deputy director of
the Information Innovation Office
at the Defense Advanced Research
Projects Agency. “And perhaps a
tendency, as a result, to turn over
more decision-making to them.”
A soldier on the lookout for
enemy snipers might, for example, do so through the Assault Rifle
Combat Application System, a gunsight sold by the Israeli defense firm
Elbit Systems. According to a company spec sheet, the “AI-powered”
device is capable of “human target
detection” at a range of more than
600 yards, and human target “identification” (presumably, discerning
whether a person is someone who
could be shot) at about the length
of a football field. Anna Ahronheim-Cohen, a spokesperson for the company, told MIT Technology Review,
“The system has already been tested
in real-time scenarios by fighting
infantry soldiers.”
Another gunsight, built by the
company Smartshooter, is advertised as having similar capabilities.
According to the company’s website, it can also be packaged into a
remote-controlled machine gun like
the one that Israeli agents used to
assassinate the Iranian nuclear scientist Mohsen Fakhrizadeh in 2021.
Decision support tools that sit
at a greater remove from the battlefield can be just as decisive. The
Pentagon appears to have used AI in
the sequence of intelligence analyses
and decisions leading up to a potential strike, a process known as a kill
chain—though it has been cagey on
the details. In response to questions
from MIT Technology Review, Laura
McAndrews, an Air Force spokesperson, wrote that the service “is
utilizing a human-machine teaming
approach.”
Other countries are more openly
experimenting with such automation. Shortly after the Israel-Palestine conflict in 2021, the Israel
Defense Forces said it had used
what it described as AI tools to alert
troops of imminent attacks and to
propose targets for operations.
The Ukrainian army uses a program, GIS Arta, that pairs each
known Russian target on the battlefield with the artillery unit that
is, according to the algorithm, best
placed to shoot at it. A report by
The Times, a British newspaper,
likened it to Uber’s algorithm for
pairing drivers and riders, noting
that it significantly reduces the time
between the detection of a target
and the moment that target finds
itself under a barrage of firepower.
Before the Ukrainians had GIS Arta,
that process took 20 minutes. Now
it reportedly takes one.
Russia claims to have its own
command-and-control system with
what it calls artificial intelligence,
but it has shared few technical
details. Gregory Allen, the director
of the Wadhwani Center for AI and
Advanced Technologies and one of
the architects of the Pentagon’s current AI policies, told me it’s important to take some of these claims
with a pinch of salt. He says some
of Russia’s supposed military AI is
“stuff that everyone has been doing
for decades,” and he calls GIS Arta
“just traditional software.”
The range of judgment calls that
go into military decision-making,
however, is vast. And it doesn’t
always take artificial superintelligence to dispense with them
by automated means. There are
tools for predicting enemy troop
movements, tools for figuring out
how to take out a given target, and
tools to estimate how much collateral harm is likely to befall any
nearby civilians.
None of these contrivances could
be called a killer robot. But the technology is not without its perils. Like
any complex computer, an AI-based
tool might glitch in unusual and
unpredictable ways; it’s not clear
that the human involved will always
be able to know when the answers
on the screen are right or wrong. In
their relentless efficiency, these tools
may also not leave enough time and
space for humans to determine if
what they’re doing is legal. In some
areas, they could perform at such
superhuman levels that something
ineffable about the act of war could
be lost entirely.
Eventually militaries plan to use
machine intelligence to stitch many
of these individual instruments into
a single automated network that
links every weapon, commander,
and soldier to every other. Not a
kill chain, but—as the Pentagon has
begun to call it—a kill web.
In these webs, it’s not clear
whether the human’s decision is,
in fact, very much of a decision at
all. Rafael, an Israeli defense giant,
has already sold one such product,
Fire Weaver, to the IDF (it has also
demonstrated it to the DoD and
the German military). According
to company materials, Fire Weaver
finds enemy positions, notifies the
unit that it calculates as being best
placed to fire on them, and even
sets a crosshair on the target directly
in that unit’s weapon sights. The
human’s role, according to one video
of the software, is to choose between
two buttons: “Approve” and “Abort.”
Let’s say that the silhouette in
the window was not a soldier,
but a child. Imagine that the
truck was not delivering warheads to
the enemy, but water pails to a home.
Of the Department of Defense’s
five “ethical principles for artificial
intelligence,” which are phrased
as qualities, the one that’s always
listed first is “Responsible.” In practice, this means that when things go
wrong, someone—a human, not a
machine—has got to hold the bag.
Of course, the principle of
responsibility long predates the
onset of artificially intelligent
machines. All the laws and mores
of war would be meaningless without the fundamental common understanding that every deliberate act in
the fight is always on someone. But
with the prospect of computers taking on all manner of sophisticated
new roles, the age-old precept has
newfound resonance.
“Now for me, and for most
people I ever knew in uniform,
this was core to who we were as
commanders, that somebody ultimately will be held responsible,”
says Shanahan, who after Maven
became the inaugural director
of the Pentagon’s Joint Artificial
Intelligence Center and oversaw
the development of the AI ethical
principles.
This is why a human hand must
squeeze the trigger, why a human
hand must click “Approve.” If a computer sets its sights upon the wrong
target, and the soldier squeezes the
trigger anyway, that’s on the soldier. “If a human does something
that leads to an accident with the
machine—say, dropping a weapon
where it shouldn’t have—that’s still
a human’s decision that was made,”
Shanahan says.
But accidents happen. And this
is where things get tricky. Modern
militaries have spent hundreds of
years figuring out how to differentiate the unavoidable, blameless tragedies of warfare from acts of malign
intent, misdirected fury, or gross
negligence. Even now, this remains
a difficult task. Outsourcing a part
of human agency and judgment
to algorithms built, in many cases,
around the mathematical principle
of optimization will challenge all this
law and doctrine in a fundamentally
new way, says Courtney Bowman,
global director of privacy and civil
liberties engineering at Palantir, a
US-headquartered firm that builds
data management software for
militaries, governments, and large
companies.
“It’s a rupture. It’s disruptive,”
Bowman says. “It requires a new
ethical construct to be able to make
sound decisions.”
This year, in a move that was
inevitable in the age of ChatGPT,
Palantir announced that it is developing software called the Artificial
Intelligence Platform, which allows
for the integration of large language
models into the company’s military
products. In a demo of AIP posted
to YouTube this spring, the platform
alerts the user to a potentially threatening enemy movement. It then suggests that a drone be sent for a closer
look, proposes three possible plans
to intercept the offending force, and
maps out an optimal route for the
selected attack team to reach them.
And yet even with a machine
capable of such apparent cleverness,
militaries won’t want the user to
blindly trust its every suggestion. If
the human presses only one button
in a kill chain, it probably should
not be the “I believe” button, as a
concerned but anonymous Army
operative once put it in a DoD war
game in 2019.
In a program called Urban
Reconnaissance through Supervised
Autonomy (URSA), DARPA built
a system that enabled robots and
drones to act as forward observers for platoons in urban operations. After input from the project’s
advisory group on ethical and legal
issues, it was decided that the software would only ever designate people as “persons of interest.” Even
though the purpose of the technology was to help root out ambushes,
it would never go so far as to label
anyone as a “threat.”
This, it was hoped, would stop a
soldier from jumping to the wrong
conclusion. It also had a legal rationale, according to Brian Williams, an
adjunct research staff member at the
Institute for Defense Analyses who
led the advisory group. No court had
positively asserted that a machine
could legally designate a person a
threat, he says. (Then again, he adds,
no court had specifically found that
it would be illegal, either, and he
acknowledges that not all military
operators would necessarily share
his group’s cautious reading of the
law.) According to Williams, DARPA
initially wanted URSA to be able to
autonomously discern a person’s
intent; this feature too was scrapped
at the group’s urging.
Bowman says Palantir’s approach
is to work “engineered inefficiencies” into “points in the decision-making process where you actually
do want to slow things down.” For
example, a computer’s output that
points to an enemy troop movement,
he says, might require a user to seek
out a second corroborating source
of intelligence before proceeding
with an action (in the video, the
Artificial Intelligence Platform does
not appear to do this).
In the case of AIP, Bowman
says that the idea is to present the
information in such a way “that the
viewer understands, the analyst
understands, this is only a suggestion.” In practice, protecting
human judgment from the sway
of a beguilingly smart machine
could come down to small details
of graphic design. “If people of interest are identified on a screen as red
dots, that’s going to have a different subconscious implication than
if people of interest are identified
on a screen as little happy faces,”
says Rebecca Crootof, a law professor at the University of Richmond,
who has written extensively about
the challenges of accountability in
human-in-the-loop autonomous
weapons.
In some settings, however, soldiers might only want an “I believe”
button. Originally, DARPA envisioned URSA as a wrist-worn device
for soldiers on the front lines. “In the
very first working group meeting, we
said that’s not advisable,” Williams
told me. The kind of engineered inefficiency necessary for responsible
use just wouldn’t be practicable for
users who have bullets whizzing by
their ears. Instead, they built a computer system that sits with a dedicated operator, far behind the action.
But some decision support systems are definitely designed for
the kind of split-second decision-making that happens right in the
thick of it. The US Army has said
that it has managed, in live tests, to
shorten its own 20-minute targeting
cycle to 20 seconds. Nor does the
market seem to have embraced
this same spirit of restraint. In
demo videos posted online, the
bounding boxes for the computerized gunsights of both Elbit and
Smartshooter are blood red.
Other times, the computer will
be right and the human will
be wrong.
If the soldier on the rooftop had
second-guessed the gunsight, and it
turned out that the silhouette was in
fact an enemy sniper, his teammates
could have paid a heavy price for his
split second of hesitation.
This is a different source of trouble, much less discussed but no less
likely in real-world combat. And it
puts the human in something of a
pickle. Soldiers will be told to treat
their digital assistants with enough
mistrust to safeguard the sanctity of
their judgment. But with machines
that are often right, this same reluctance to defer to the computer can
itself become a point of avertable
failure.
Aviation history has no shortage
of cases where a human pilot’s refusal
to heed the machine led to catastrophe. These (usually perished) souls
have not been looked upon kindly
by investigators seeking to explain
the tragedy. Carol J. Smith, a senior
research scientist at Carnegie Mellon
University’s Software Engineering
Institute who helped craft responsible AI guidelines for the DoD’s
Defense Innovation Unit, doesn’t
see an issue: “If the person in that
moment feels that the decision is
wrong, they’re making it their call,
and they’re going to have to face the
consequences.”
For others, this is a wicked ethical conundrum. The scholar M.C.
Elish has suggested that a human
who is placed in this kind of impossible loop could end up serving as
what she calls a “moral crumple
zone.” In the event of an accident—regardless of whether the
human was wrong, the computer
was wrong, or they were wrong
together—the person who made
the “decision” will absorb the blame
and protect everyone else along the
chain of command from the full
impact of accountability.
In an essay, Smith wrote that
the “lowest-paid person” should
not be “saddled with this responsibility,” and neither should “the
highest-paid person.” Instead, she
told me, the responsibility should be
spread among everyone involved,
and that the introduction of AI
should not change anything about
that responsibility.
In practice, this is harder than
it sounds. Crootof points out that
even today, “there’s not a whole lot
of responsibility for accidents in
war.” As AI tools become larger and
more complex, and as kill chains
become shorter and more web-like,
finding the right people to blame
is going to become an even more
labyrinthine task.
Those who write these tools,
and the companies they work for,
aren’t likely to take the fall. Building
AI software is a lengthy, iterative
process, often drawing from open-source code, which stands at a distant remove from the actual material
facts of metal piercing flesh. And
barring any significant changes to
US law, defense contractors are
generally protected from liability
anyway, says Crootof.
Any bid for accountability at the
upper rungs of command, meanwhile, would likely find itself stymied
by the heavy veil of government classification that tends to cloak most
AI decision support tools and the
manner in which they are used. The
US Air Force has not been forthcoming about whether its AI has even
seen real-world use. Shanahan says
Maven’s AI models were deployed
for intelligence analysis soon after
the project launched, and in 2021
the secretary of the Air Force said
that “AI algorithms” had recently
been applied “for the first time to a
live operational kill chain,” with an
Air Force spokesperson at the time
adding that these tools were available in intelligence centers across
the globe “whenever needed.” But
Laura McAndrews, the Air Force
spokesperson, said that in fact these
algorithms “were not applied in
a live, operational kill chain” and
declined to detail any other algorithms that may, or may not, have
been used since.
The real story might remain
shrouded for years. In 2018, the
Pentagon issued a determination that exempts Project Maven
from Freedom of Information
requests. Last year, it handed the
entire program to the National
Geospatial-Intelligence Agency,
which is responsible for processing America’s vast intake of secret
aerial surveillance. Responding
to questions about whether the
algorithms are used in kill chains,
Robbin Brooks, an NGA spokesperson, told MIT Technology Review,
“We can’t speak to specifics of how
and where Maven is used.”
In one sense, what’s new here is
also old. We routinely place our
safety—indeed, our entire existence as a species—in the hands of
other people. Those decision-makers
defer, in turn, to machines that they
do not entirely comprehend.
In an exquisite essay on automation published in 2018, at a time
when operational AI-enabled decision support was still a rarity, former Navy secretary Richard Danzig
pointed out that if a president
“decides” to order a nuclear strike,
it will not be because anyone has
looked out the window of the Oval
Office and seen enemy missiles raining down on DC but, rather, because
those missiles have been detected,
tracked, and identified—one hopes
correctly—by algorithms in the air
defense network.
As in the case of a commander
who calls in an artillery strike on the
advice of a chatbot, or a rifleman who
pulls the trigger at the mere sight of
a red bounding box, “the most that
can be said is that ‘a human being
is involved,’” Danzig wrote.
“This is a common situation in
the modern age,” he wrote. “Human
decisionmakers are riders traveling
across obscured terrain with little
or no ability to assess the powerful
beasts that carry and guide them.”
There can be an alarming streak
of defeatism among the people
responsible for making sure that
these beasts don’t end up eating
us. During a number of conversations I had while reporting this
story, my interlocutor would land
on a sobering note of acquiescence
to the perpetual inevitability of
death and destruction that, while
tragic, cannot be pinned on any
single human. War is messy, technologies fail in unpredictable ways,
and that’s just that.
“In warfighting,” says Bowman
of Palantir, “[in] the application of
any technology, let alone AI, there
is some degree of harm that you’re
trying to—that you have to accept,
and the game is risk reduction.”
It is possible, though not yet
demonstrated, that bringing artificial intelligence to battle may mean
fewer civilian casualties, as advocates often claim. But there could
be a hidden cost to irrevocably
conjoining human judgment and
mathematical reasoning in those
ultimate moments of war—a cost
that extends beyond a simple, utilitarian bottom line. Maybe something
just cannot be right, should not be
right, about choosing the time and
manner in which a person dies the
way you hail a ride from Uber.
To a machine, this might be suboptimal logic. But for certain humans,
that’s the point. “One of the aspects
of judgment, as a human capacity, is
that it’s done in an open world,” says
Lucy Suchman, a professor emerita of
anthropology at Lancaster University,
who has been writing about the quandaries of human-machine interaction
for four decades.
The parameters of life-and-death
decisions—knowing the meaning
of the fresh laundry hanging from
a window while also wanting your
teammates not to die—are “irreducibly qualitative,” she says. The
chaos and the noise and the uncertainty, the weight of what is right
and what is wrong in the midst of
all that fury—not a whit of this can
be defined in algorithmic terms.
In matters of life and death, there
is no computationally perfect outcome. “And that’s where the moral
responsibility comes from,” she says.
“You’re making a judgment.”
The gunsight never pulls the
trigger. The chatbot never pushes
the button. But each time a machine
takes on a new role that reduces the
irreducible, we may be stepping a
little closer to the moment when
the act of killing is altogether more
machine than human, when ethics
becomes a formula and responsibility becomes little more than an
abstraction. If we agree that we don’t
want to let the machines take us all
the way there, sooner or later we
will have to ask ourselves: Where
is the line?
Arthur Holland Michel writes
about technology. He is based
in Barcelona and can be found,
occasionally, in New York.
The greatest slideshow on Earth
From supersize slideshows to Steve Jobs’s Apple keynote, corporate presentations have always pushed technology forward.
By Claire L. Evans
Above: To celebrate the launch of the 1987 Saab 9000 CD sedan, an audience of 2,500 was treated to an hourlong operetta involving 26-foot-tall projection screens, a massive chorus, the entire Stockholm Philharmonic, and some 50 performers.
It’s 1948, and it isn’t a great year for alcohol. Prohibition has come and gone, and
booze is a buyer’s market again. That much
is obvious from Seagram’s annual sales
meeting, an 11-city traveling extravaganza
designed to drum up nationwide sales. No
expense has been spared: there’s the two-hour, professionally acted stage play about
the life of a whiskey salesman. The beautiful anteroom displays. The free drinks. But
the real highlight is a slideshow.
To call the Seagram-Vitarama a slideshow is an understatement. It’s an experience: hundreds of images of the distilling
process, set to music, projected across five
40-by-15-foot screens. “It is composed of
pictures, yet it is not static,” comments one
awed witness. “The overall effect is one of
magnificence.” Inspired by an Eastman
Kodak exhibit at the 1939 World’s Fair, the
Seagram-Vitarama is the first A/V presentation ever given at a sales meeting. It will
not be the last.
In the late ’40s, multimedia was a novelty. But by the early 1960s, nearly all companies with national advertising budgets
were using multimedia gear—16-millimeter
projectors, slide projectors, filmstrip projectors, and overheads—in their sales training
and promotions, for public relations, and
as part of their internal communications.
Many employed in-house A/V directors,
who were as much showmen as technicians. Because although presentations have
a reputation for being tedious, when they’re
done right, they’re theater. The business
world knows it. Ever since the days of the
Vitarama, companies have leveraged the
dramatic power of images to sell their ideas
to the world.
Next slide, please
The sound of slides clacking is deafening.
But it doesn’t matter, because the champagne is flowing and the sound system is
loud. The 2,500 dignitaries and VIPs in
the audience are being treated to an hourlong operetta about luxury travel. Onstage,
a massive chorus, the entire Stockholm
Philharmonic, and some 50 dancers and
performers are fluttering around a pair of
Saab 9000CD sedans. Stunning images of
chrome details, leather seats, and open roads
dance across a 26-foot-tall screen behind
them. The images here are all analog: nearly
7,000 film slides, carefully arranged in a grid
of 80 Kodak projectors. It’s 1987, and slideshows will never get any bigger than this.
Before PowerPoint, and long before digital projectors, 35-millimeter film slides were
king. Bigger, clearer, and less expensive to
produce than 16-millimeter film, and more
colorful and higher-resolution than video,
slides were the only medium for the kinds
of high-impact presentations given by CEOs
and top brass at annual meetings for stockholders, employees, and salespeople. Known
in the business as “multi-image” shows, these
presentations required a small army of producers, photographers, and live production
staff to pull off. First the entire show had to
be written, storyboarded, and scored. Images
were selected from a library, photo shoots
arranged, animations and special effects
produced. A white-gloved technician developed, mounted, and dusted each slide before
dropping it into the carousel. Thousands of
cues were programmed into the show control
computers—then tested, and tested again.
Because computers crash. Projector bulbs
burn out. Slide carousels get jammed.
“When you think of all the machines,
all the connections, all the different bits
and pieces, it’s a miracle these things
even played at all,” says Douglas Mesney,
a commercial photographer turned slide
producer whose company Incredible
Slidemakers produced the 80-projector
Saab launch. Now 77 years old, he’s made
a retirement project of archiving the
now-forgotten slide business. Mesney
pivoted to producing multi-image shows
in the early 1970s after an encounter with
an impressive six-screen setup at the 1972
New York Boat Show. He’d been shooting
spreads for Penthouse and car magazines,
occasionally lugging a Kodak projector or
two to pitch meetings for advertising clients. “All of a sudden you look at six projectors and what they can do, and you go,
Holy mackerel,” he remembers.
Six was just the beginning. At the height
of Mesney’s career, his shows called for up
to 100 projectors braced together in vertiginous rigs. With multiple projectors pointing
toward the same screen, he could create
seamless panoramas and complex animations, all synchronized to tape. Although the
risk of disaster was always high, when he
pulled it off, his shows dazzled audiences
and made corporate suits look like giants.
Mesney’s clients included IKEA, Saab,
Kodak, and Shell; he commanded production budgets in the hundreds of thousands
of dollars. And in the multi-image business,
that was cheap. Larger A/V staging companies, like Carabiner International, charged
up to $1 million to orchestrate corporate
meetings, jazzing up their generic multi-image “modules” with laser light shows,
dance numbers, and top-shelf talent like
Hall & Oates, the Allman Brothers, and
even the Muppets. “I liken it to being a
rock-and-roll roadie, but I never went on
the tour bus,” explains Susan Buckland, a
slide programmer who spent most of her
career behind the screen at Carabiner.
From its incorporation in 1976 to the
mid-1980s, the Association for Multi-Image,
a trade association for slide producers, grew
from zero to 5,000 members. At its peak,
the multi-image business employed some
20,000 people and supported several festivals and four different trade magazines. One
of these ran a glowing profile of Douglas
Mesney in 1980; when asked for his prognosis about the future of slides, he replied:
“We could make a fortune or be out of business in a year.” He wasn’t wrong.
At the time, some 30 manufacturers
of electronic slide programming devices
vied for the multi-image dollar. To meet the
demand for high-impact shows, the tech had
quickly evolved from manual dissolve units
and basic control systems—programmed
with punched paper tape, and then audiocassette—to dedicated slide control computers like the AVL Eagle I, which could
drive 30 projectors at once. The Eagle,
which came with word processing and
accounting software, was a true business
computer—so much so that when Eagle
spun off from its parent company, Audio
Visual Labs, in the early ’80s, it became one
of Silicon Valley’s most promising computer
startups. Eagle went public in the summer
of 1983, making its president, Dennis R.
Barnhart, an instant multimillionaire. Only
hours after the IPO, Barnhart plowed his
brand-new cherry-red Ferrari through a
guardrail near the company’s headquarters
in Los Gatos, California, flipped through
the air, crashed into a ravine, and died. The
slide business would soon follow.
Douglas Mesney likes to say that if you
never saw a slide show, you never will. The
machines to show them have been landfilled. The slides themselves were rarely
archived. Occasionally a few boxes containing an old multi-image “module” will
turn up in a storage unit, and occasionally
those will even be undamaged. But with the
exception of a few hobbyists and retired
programmers, the know-how to restore and
stage multi-image slideshows is scarce. This
leaves former slide professionals at a loss.
“All of us are devastated that none of the
modules survived,” says Susan Buckland.
“Basically, I don’t have a past, because I
can’t explain it.” The entire industry, which
existed at an unexpected intersection of
analog and high-tech artistry, came and
went in a little over 20 years.
Presentations, like porn, have always
pushed technology forward; in the multi-image days, producers like Mesney took
the slide as far as it could go, using every
tool available to create bigger and bolder
shows. Mesney claims to have set the land
speed record for a slide presentation with
a three-minute-long, 2,400-slide show, but
even at top speed, slides are static. The computers that controlled them, however, were
not—and it wasn’t long before they evolved
beyond the medium. “Back then, computers
were fast enough to tell slides what to do,
but they weren’t fast enough to actually create the images themselves,” explains Steven
Michelsen, a former slide programmer who
restores and runs old multi-image shows in
his Delaware garage. “It took another 10 or
15 years until you could run a show straight
from your computer and have the images
look worth looking at,” he adds.
The last slide projector ever made rolled
off the assembly line in 2004. The inside
of its casing was signed by factory workers and Kodak brass before the unit was
handed over to the Smithsonian. Toasts and
speeches were made, but by then they were
eulogies, because PowerPoint had already
eaten the world.
Inventing PowerPoint
The Hotel Regina is an Art Nouveau marvel overlooking the Tuileries Garden and
the Louvre. But on this day in 1992, its Old
World meeting rooms have been retrofitted with advanced video technology. The
color projector in the back of the room, the
size of a small refrigerator, cost upwards of
$100,000 and takes an hour to warm up.
A team of technicians has spent the better
part of the last 48 hours troubleshooting
to ensure that nothing goes wrong when
Robert Gaskins, the fastidious architect of
a new piece of software called PowerPoint
3.0, walks into the room. He’ll be carrying a
laptop under his arm, and when he reaches
the lectern, he’ll pick up a video cable, plug
it in, and demonstrate for the first time
something that has been reproduced billions of times since: a video presentation,
running straight off a laptop, in full color.
The audience, full of Microsoft associates
from across Europe, will go bananas. They
“grasped immediately what the future would
bring for their own presentations,” Gaskins
later wrote. “There was deafening applause.”
It’s hard now to imagine deafening
applause for a PowerPoint—almost as hard
as it is to imagine anyone but Bob Gaskins
standing at this particular lectern, ushering in the PowerPoint age. Presentations
are in his blood. His father ran an A/V
company, and family vacations usually
included a trip to the Eastman Kodak factory. During his graduate studies at Berkeley,
he tinkered with machine translation and
coded computer-generated haiku. He ran
away to Silicon Valley to find his fortune
before he could finalize his triple PhDs in
English, linguistics, and computer science,
but he brought with him a deep appreciation for the humanities, staffing his team
with like-minded polyglots, including a disproportionately large number of women in
technical roles. Because Gaskins ensured
that his offices—the only Microsoft division, at the time, in Silicon Valley—housed a
museum-worthy art collection, PowerPoint’s
architects spent their days among works
by Frank Stella, Richard Diebenkorn, and
Robert Motherwell.
Gaskins’s 1984 proposal for PowerPoint,
written when he was VP of product development at the Sunnyvale startup Forethought,
is a manifesto in bullet points. It outlines the
slumbering, largely-hidden-from-view $3.5
billion business presentation industry and
its enormous need for clear, effective slides.
It lists technology trends—laser printers,
color graphics, “WYSIWYG” software—that
point to an emerging desktop presentation
market. It’s a stunningly prescient document
throughout. But Gaskins italicized only one
bullet point in the whole thing.
User benefits:
Allows the content-originator to control the presentation.
This is Gaskins’s key insight: a presentation’s message is inevitably diluted when
its production is outsourced. In the early
’80s, he meant that literally. The first two
versions of PowerPoint were created to
help executives produce their own overhead transparencies and 35-millimeter
slides, rather than passing the job off to
their secretaries or a slide bureau.
“In the ’50s, ’60s, and early ’70s, information flow was narrow,” explains Sandy
Beetner, former CEO of Genigraphics, a
business graphics company that was, for several decades, the industry leader in professional presentation graphics. Their clients
were primarily Fortune 500 companies and
government agencies with the resources
to produce full-color charts, 3D renderings, and other high-tech imagery on those
slides. Everyone else was limited to acetate
overheads and—gasp—words. “Prior to PowerPoint,” she says, “people communicated in black and white. There was just so much missed in that environment.”
Beetner oversaw Genigraphics’ national network of service bureaus, which were located in every major American city and staffed 24 hours a day, 365 days a year, by graphic artists prepared to produce, polish, and print slides. The company was so vital to presentational culture that Gaskins negotiated a deal to make Genigraphics the official 35-millimeter slide production service for PowerPoint 2.0; a “Send to Genigraphics” menu command was baked into PowerPoint until 2003. This, incidentally, was around the same time that Kodak stopped making Carousel projectors.
Gaskins retired from Microsoft in 1993 and moved to London. He returned to the States 10 years later, an expert in antique concertinas. By then, PowerPoint had become shorthand for the stupefying indignities of office life. A 2001 New Yorker profile summed it up as “software you impose on other people”; the statistician Edward Tufte, known for his elegant monographs about data visualization, famously blamed the 2003 Columbia shuttle disaster on a bum PowerPoint slide. Gaskins’s software, Tufte argued, produces relentlessly sequential, hierarchical, sloganeering, over-managed presentations, rife with “chartjunk” and devoid of real meaning. No wonder software corporations loved it.
Robert Gaskins is remarkably sympathetic to these views, not least because Tufte’s mother, the Renaissance scholar Virginia Tufte, mentored him as an undergraduate in the English department at the University of Southern California. In a reflection written on the 20th anniversary of PowerPoint’s introduction, Gaskins acknowledged that “more business and academic talks look like poor attempts at sales presentations,” a phenomenon he blamed as much on a “mass failure of taste”
as on PowerPoint itself, a tool so powerful
it collapsed all preexisting contexts. Not
everything’s a sales presentation; nor
should it be. But PowerPoint made it easy
to add multimedia effects to informal talks,
empowering lay users to make stylistic
decisions once reserved for professionals.
To paraphrase an early PowerPoint print
ad: now the person making the presentation made the presentation. That those
people weren’t always particularly good
at it didn’t seem to matter.
What did matter was that presentations were no longer reserved for year-end meetings and big ideas worthy of the
effort and expense required to prepare
color slides. “The scalability of information
and audience that PowerPoint brought
to the party was pretty incredible,” says
Beetner, whose company has survived
as a ghost in the machine, in the form
of PowerPoint templates and clip art. “It
opened up the channels dramatically, and
pretty quickly. There isn’t a student alive,
at any level, that hasn’t seen a PowerPoint
presentation.” Indeed, PowerPoint is used
in religious sermons; by schoolchildren
preparing book reports; at funerals and
weddings. In 2010, Microsoft announced
that PowerPoint was installed on more
than a billion computers worldwide.
At this scale, PowerPoint’s impact on
how the world communicates has been
immeasurable. But here’s something that
can be measured: Microsoft grew tenfold
in the years that Robert Gaskins ran its
Graphics Business Unit, and it has grown
15-fold since. Technology corporations,
like PowerPoint itself, have exploded.
And so have their big presentations,
which are no longer held behind closed
doors. They’re now semi-public affairs,
watched—willingly and enthusiastically—
by consumers around the world. Nobody
has to worry about slide carousels getting
jammed anymore, but things still go haywire all the time, from buggy tech demos
to poorly-thought-out theatrics.
When everything works, a good presentation can drive markets and forge
reputations. Of course, this particular
evolution wasn’t exclusively Microsoft’s
doing. Because perhaps the most memorable corporate presentation of all time—
Steve Jobs’s announcement of the iPhone
at Macworld 2007—wasn’t a PowerPoint
at all. It was a Keynote.
Claire L. Evans is a writer
and musician exploring ecology,
technology, and culture.
Open source at 40
Free and open-source software are now foundational to modern code, but much about them is still in flux.
By Rebecca Ackermann
Illustration by Saiman Chow
When Xerox donated a new laser printer
to the MIT Artificial Intelligence Lab in
1980, the company couldn’t have known
that the machine would ignite a revolution.
The printer jammed. And according to the
2002 book Free as in Freedom, Richard M.
Stallman, then a 27-year-old programmer
at MIT, tried to dig into the code to fix it.
He expected to be able to: he’d done it with
previous printers.
The early decades of software development generally ran on a culture of open
access and free exchange, where engineers
could dive into each other’s code across time
zones and institutions to make it their own
or squash a few bugs. But this new printer
ran on inaccessible proprietary software.
Stallman was locked out—and enraged that
Xerox had violated the open code-sharing
system he’d come to rely on.
A few years later, in September 1983,
Stallman released GNU, an operating system designed to be a free alternative to
one of the dominant operating systems at
the time: Unix. Stallman envisioned GNU
as a means to fight back against the proprietary mechanisms, like copyright, that
were beginning to flood the tech industry.
The free-software movement was born
from one frustrated engineer’s simple, rigid
philosophy: for the good of the world, all
code should be open, without restriction
or commercial intervention.
Forty years later, tech companies are
making billions on proprietary software,
and much of the technology around us—
from ChatGPT to smart thermostats—is
inscrutable to everyday consumers. In this
environment, Stallman’s movement may
look like a failed values experiment crushed
under the weight of commercial reality. But
in 2023, the free and open-source software
movement is not only alive and well; it has
become a keystone of the tech industry.
Today, 96% of all code bases incorporate
open-source software. GitHub, the biggest
platform for the open-source community,
is used by more than 100 million developers worldwide. The Biden administration’s
Securing Open Source Software Act of 2022
publicly recognized open-source software
as critical economic and security infrastructure. Even AWS, Amazon’s money-making
cloud arm, supports the development and
maintenance of open-source software; it
committed its portfolio of patents to an
open use community in December of last
year. Over the last two years, while public
trust in private technology companies has
plummeted, organizations including Google,
Spotify, the Ford Foundation, Bloomberg,
and NASA have established new funding for
open-source projects and their counterparts
in open science efforts—an extension of the
same values applied to scientific research.
The fact that open-source software is
now so essential means that long-standing
leadership and diversity issues in the movement have become everyone’s problems.
Many open-source projects began with
“benevolent dictator for life” (BDFL) models of governance, where original founders
hang on to leadership for years—and not
always responsibly. Stallman and some
other BDFLs have been criticized by their
own communities for misogynistic or even
abusive behavior. Stallman stepped down as
president of the Free Software Foundation
in 2019 (although he returned to the board
two years later). Overall, open-source participants are still overwhelmingly white, male,
and located in the Global North. Projects
can be overly influenced by corporate interests. Meanwhile, the people doing the hard
work of keeping critical code healthy are not
consistently funded. In fact, many major
open-source projects still operate almost
completely on volunteer steam.
Challenges notwithstanding, there’s
plenty to celebrate in 2023, the year of
GNU’s 40th birthday. The modern open-source movement persists as a collaborative haven for transparent ways of working
within a highly fragmented and competitive
industry. Selena Deckelmann, chief product
and technology officer at the Wikimedia
Foundation, says the power of open source
lies in its “idea that people anywhere can
collaborate together on software, but also
on many [more] things.” She points out that
tools to put this philosophy into action, like
mailing lists, online chat, and open version
control systems, were pioneered in open-source communities and have been adopted
as standard practice by the wider tech industry. “We found a way for people from all
over the world, regardless of background,
to find a common cause to collaborate with
each other,” says Kelsey Hightower, an early
contributor to Kubernetes, an open-source
system for automating app deployment and
management, who recently retired from his
role as a distinguished engineer at Google
Cloud. “I think that is pretty unique to the
world of open source.”
The 2010s backlash against tech’s unfettered growth, and the recent AI boom, have
focused a spotlight on the open-source
movement’s ideas about who has the right
to use other people’s information online
and who benefits from technology. Clement
Delangue, CEO of the open-source AI company Hugging Face, which was recently valued at $4 billion, testified before Congress
in June of 2023 that “ethical openness” in AI development could help make organizations more compliant and transparent, while allowing researchers beyond a few large tech companies access to technology and progress. “We’re in a unique cultural moment,” says Danielle Robinson, executive director of Code for Science and Society, a nonprofit that provides funding and support for public-interest technology. “People are more aware than ever of how capitalism has been influencing what technologies get built, and whether you have a choice to interact with it.” Once again, free and open-source software have become a natural home for the debate about how technology should be.
Free as in freedom
The early days of the free-software movement were fraught with arguments about the meaning of “free.” Stallman and the Free Software Foundation (FSF), founded in 1985, held firm to the idea of four freedoms: people should be allowed to run a program for any purpose, study how it works from the source code and change it to meet their needs, redistribute copies, and distribute modified versions too. Stallman saw free software as an essential right: “Free as in free speech, not free beer,” as his apocryphal slogan goes. He created the GNU General Public License, what’s known as a “copyleft” license, to ensure that the four freedoms were protected in code built with GNU.
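In practice, copyleft travels with the code itself: the FSF recommends that each source file carry a short notice granting the four freedoms and pointing to the license’s full text. A minimal sketch of what that looks like in a Python file, using the FSF’s standard wording (the module name and copyright holder here are hypothetical):

```python
# frobnicator.py -- a hypothetical GPL-covered module.
# Copyright (C) 2023 Ada Coder
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.


def greet() -> str:
    """Return the movement's unofficial motto."""
    return "Free as in free speech, not free beer."


if __name__ == "__main__":
    print(greet())
```

Because the license is copyleft, anyone who redistributes this file, or a program built on it, must extend the same four freedoms to their own users.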
Linus Torvalds, the Finnish engineer who
in 1991 created the now ubiquitous Unix
alternative Linux, didn’t buy into this dogma.
Torvalds and others, including Microsoft’s
Bill Gates, believed that the culture of open
exchange among engineers could coexist
with commerce, and that more-restrictive
licenses could forge a path toward both
financial sustainability and protections for
software creators and users. It was during
a 1998 strategic meeting of free-software
advocates—which notably did not include
Stallman—that this pragmatic approach
became known as “open source.” (The term
was coined and introduced to the group
not by an engineer, but by the futurist and
nanotechnology scholar Christine Peterson.)
Karen Sandler, executive director of the
Software Freedom Conservancy, a nonprofit
that advocates for free and open-source software, saw firsthand how the culture shifted
from orthodoxy to a big-tent approach with
room for for-profit entities when she worked
as general counsel at the Software Freedom
Law Center in the early 2000s. “The people who were ideological—some of them
stayed quite ideological. But many of them
realized, oh, wait a minute, we can get jobs
doing this. We can do well by doing good,”
Sandler remembers. By leveraging the jobs
and support that early tech companies were
offering, open-source contributors could
sustain their efforts and even make a living
doing what they believed in. In that manner, companies using and contributing to
free and open software could expand the
community beyond volunteer enthusiasts
and improve the work itself. “How could we
ever make it better if it’s just a few radical
people?” Sandler says.
As the tech industry grew around private
companies like Sun Microsystems, IBM,
Microsoft, and Apple in the late ’90s and
early ’00s, new open-source projects sprang
up, and established ones grew roots. Apache
emerged as an open-source web server in
1995. Red Hat, a company offering enterprise companies support for open-source
software like Linux, went public in 1999.
GitHub, a platform originally created to
support version control for open-source
projects, launched in 2008, the same year
that Google released Android, the first open-source phone operating system. The more
pragmatic definition of the concept came to
dominate the field. Meanwhile, Stallman’s
original philosophy persisted among dedicated groups of believers—where it still
lives today through nonprofits like FSF,
which only uses and advocates for software
that protects the four freedoms.
As open-source software spread, a bifurcation of the tech stack became standard
practice, with open-source code as the support structure for proprietary work. Free and
open-source software often served in the
underlying foundation or back-end architecture of a product, while companies vigorously pursued and defended copyrights
on the user-facing layers. Some estimate
that Amazon’s 1999 patent on its one-click
buying process was worth $2.4 billion per
year to the company until it expired. It relied
on Java, an open-source programming language, and other open-source software and
tooling to build and maintain it.
Today, corporations not only depend
on open-source software but play an enormous role in funding and developing
open-source projects: Kubernetes (initially
launched and maintained at Google) and
Meta’s React are both robust sets of software that began as internal solutions freely
shared with the larger technology community. But some people, like the Software
Freedom Conservancy’s Karen Sandler,
identify an ongoing conflict between profit-driven corporations and the public interest.
“Companies have become so savvy and
educated with respect to open-source software that they use a ton of it. That’s good,”
says Sandler. At the same time, they profit
from their proprietary work—which they
sometimes attempt to pass off as open too, a
practice the scholar and organizer Michelle
Thorne dubbed “openwashing” in 2009.
For Sandler, if companies don’t also make
efforts to support user and creator rights,
they’re not pushing forward the free and
open-source ethos. And she says for the
most part, that’s indeed not happening:
“They’re not interested in giving the public
any appreciable rights to their software.”
Others, including Kelsey Hightower, are
more sanguine about corporate involvement.
“If a company only ends up just sharing,
and nothing more, I think that should be
celebrated,” he says. “Then if for the next
two years you allow your paid employees to
work on it, maintaining the bugs and issues,
but then down the road it’s no longer a priority and you choose to step back, I think
we should thank [the company] for those
years of contributions.”
In stark contrast, FSF, now in its 38th
year, holds firm to its original ideals and
opposes any product or company that does
not support the ability for users to view,
modify, and redistribute code. The group
today runs public action campaigns like
“End Software Patents,” publishing articles
and submitting amicus briefs advocating the
end of patents on software. The foundation’s
executive director, Zoë Kooyman, hopes to
continue pushing the conversation toward
freedom rather than commercial concerns.
“Every belief system or form of advocacy
needs a far end,” she says. “That’s the only
way to be able to drive the needle. [At FSF],
we are that far end of the spectrum, and we
take that role very seriously.”
Free as in puppy
Forty years on from the release of GNU,
there is no singular open-source community, “any more than there is an ‘urban community,’” as researcher and engineer Nadia
Asparouhova (formerly Eghbal) writes in her
2020 book Working in Public: The Making
and Maintenance of Open Source Software.
There’s no singular definition, either. The
Open Source Initiative (OSI) was founded in
1998 to steward the meaning of the phrase,
but not all modern open-source projects
adhere to the 10 specific criteria OSI laid out,
and other definitions appear across communities. Scale, technology, social norms, and
funding also range widely from project to
project and community to community. For
example, Kubernetes has a robust, organized community of tens of thousands of
contributors and years of Google investment. Salmon is a niche open-source bioinformatics research tool with fewer than 50
contributors, supported by grants. OpenSSL,
which encrypts an estimated 66% of the
web, is currently maintained by 18 engineers compensated through donations and
elective corporate contracts.
The major discussions now are more
about people than technology: What does
healthy and diverse collaboration look like?
How can those who support the code get
what they need to continue the work? “How
do you include a voice for all the people
affected by the technology you build?” asks
James Vasile, an open-source consultant
and strategist who sits on the board of the
Electronic Frontier Foundation. “These
are big questions. We’ve never grappled
with them before. No one was working on
this 20 years ago, because that just wasn’t
part of the scene. Now it is, and we [in the
open-source community] have the chance
to consider these questions.”
“Free as in puppy,” a phrase that can
be traced back to 2006, has emerged as
a valuable definition of “free” for modern
open-source projects—one that speaks to
the responsibilities of creators and users to
each other and the software, in addition to
their rights. Puppies need food and care to
survive; open-source code needs funding
and “maintainers,” individuals who consistently respond to requests and feedback
from a community, fix bugs, and manage the
growth and scope of a project. Many open-source projects have become too big, complicated, or important to be governed by one
person or even a small group of like-minded
individuals. And open-source contributors
have their own needs and concerns, too. A
person who’s good at building may not be
good at maintaining; someone who creates
a project may not want to or be able to run
it indefinitely. In 2018, for instance, Guido
van Rossum, the creator of the open-source
programming language Python, stepped
down from leadership after almost 30 years,
exhausted from the demands of the mostly
uncompensated role. “I’m tired,” he wrote in
his resignation message to the community,
“and need a very long break.”
Supporting the people who create, maintain, and use free and open-source software requires new roles and perspectives.
Whereas the movement in its early days was
populated almost exclusively by engineers
communicating across message boards and
through code, today’s open-source projects
invite participation from new disciplines
to handle logistical work like growth and
advocacy, as well as efforts toward greater
inclusion and belonging. “We’ve shifted
from open source being about just the technical stuff to the broader set of expertise and perspectives that are required to make effective open-source projects,” says Michael Brennan, senior program officer with the Technology and Society program at the Ford Foundation, which funds research into open internet issues. “We need designers, ethnographers, social and cultural experts. We need everyone to be playing a role in open source if it’s going to be effective and meet the needs of the people around the world.”
One powerful source of support arrived in 2008 with the launch of GitHub. While it began as a version control tool, it has grown into a suite of services, standards, and systems that is now the “highway system” for most open-source development, as Asparouhova puts it in Working in Public. GitHub helped lower the barrier to entry, drawing wider contribution and spreading best practices such as community codes of conduct. But its success has also given a single platform vast influence over communities dedicated to decentralized collaboration.
Demetris Cheatham, until recently GitHub’s senior director for diversity and inclusion strategy, took that responsibility very seriously. To find out where things stood, the company partnered with the Linux Foundation in 2021 on a survey and resulting report on diversity and inclusion within open source. The data showed that despite a pervasive ethos of collaboration and openness (more than 80% of the respondents reported feeling welcome), communities are dominated by contributors who are straight, white, male, and from the Global North. In response, Cheatham, who is now the company’s chief of staff, focused on ways to broaden access and promote a sense of belonging. GitHub launched All In for Students, a mentorship and education program with 30 students drawn primarily from historically Black colleges and universities.
In its second year, the program expanded
to more than 400 students.
Representation has not been the only
stumbling block to a more equitable open-source ecosystem. The Linux Foundation report showed that only 14% of open-source contributors surveyed were getting
paid for their work. While this volunteer
spirit aligns with the original vision of free
software as a commerce-free exchange of
ideas, free labor presents a major access
issue. Additionally, 30% of respondents
in the survey did not trust that codes of
conduct would be enforced—suggesting
they did not feel they could count on a
respectful working environment. “We’re
at another inflection point now where
codes of conduct are great, but they’re
only a tool,” says Code for Science and
Society’s Danielle Robinson. “I’m starting to see larger cultural shifts toward
rethinking extractive processes that have
been a part of open source for a long time.”
Getting maintainers paid and connecting
contributors with support are now key to
opening up open source to a more diverse
group of participants.
With that in mind, this year GitHub
established resources specifically for maintainers, including workshops and a hub
of DEI tools. And in May, the platform
launched a new project to connect large,
well-resourced open-source communities
with smaller ones that need help. Cheatham
says it’s crucial to the success of any of these
programs that they be shared for free with
the broader community. “We’re not inventing anything new at all. We’re just applying
open-source principles to diversity, equity,
and inclusion,” she says.
GitHub’s influence over open source may
be large, but it is not the only group working
to get maintainers paid and expand open-source participation. The Software Freedom
Conservancy’s Outreachy diversity initiative
offers paid internships; as of 2019, 92% of
past Outreachy interns have identified as
women and 64% as people of color. Open-source fundraising platforms like Open
Collective and Tidelift have also emerged
to help maintainers tap into resources.
The philanthropic world is stepping
up too. The Ford Foundation, the Sloan
Foundation, Omidyar Network, and the Chan
Zuckerberg Initiative, as well as smaller organizations like Code for Science and Society,
have all recently begun or expanded their
efforts to support open-source research, contributors, and projects—including specific
efforts promoting inclusion and diversity.
Govind Shivkumar from Omidyar Network
told MIT Technology Review that philanthropy is well positioned to establish funding architecture that could help prove out
open-source projects, making them less
risky prospects for future governmental
funding. In fact, research supported by the
Ford Foundation’s Digital Infrastructure
Fund contributed to Germany’s recent creation of a national fund for open digital
infrastructure. Momentum has also been
building in the US. In 2016 the White House
began requiring at least 20% of government-developed software to be open source. Last year’s Securing Open Source Software Act passed with bipartisan support, establishing a framework for attention and investment at the federal level toward making open-source software stronger and more secure.
The fast-approaching future
Open source contributes valuable practices and tools, but it may also offer a
competitive advantage over proprietary
efforts. A document leaked in May from
Google argued that open-source communities had pushed, tested, integrated,
and expanded the capabilities of large
language models more thoroughly than
private efforts could’ve accomplished
on their own: “Many of the new ideas [in
AI development] are from ordinary people. The barrier to entry for training and
experimentation has dropped from the
total output of a major research organization to one person, an evening, and a
beefy laptop.” The recently articulated concept of Time till Open Source Alternative
(TTOSA)—the time between the release of
a proprietary product and an open-source
equivalent—also speaks to this advantage.
One researcher estimated the average
TTOSA to be seven years but noted that
the process has been speeding up thanks
to easy-to-use services like GitHub.
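The metric itself is simple date arithmetic: subtract the proprietary product’s release date from its open-source equivalent’s. A toy sketch (the date pairs below are made-up placeholders, not figures from the research mentioned above):

```python
# A toy illustration of Time till Open Source Alternative (TTOSA).
# The release-date pairs are hypothetical placeholders, not real data.
from datetime import date

pairs = [
    # (proprietary release, open-source alternative release)
    (date(2005, 3, 1), date(2011, 6, 15)),
    (date(2009, 7, 20), date(2017, 1, 5)),
]

# Gap in years for each pair, then the average across all pairs.
gaps_years = [(oss - prop).days / 365.25 for prop, oss in pairs]
print(f"Average TTOSA: {sum(gaps_years) / len(gaps_years):.1f} years")
```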
At the same time, much of our modern world now relies on underfunded and
rapidly expanding digital infrastructure.
There has long been an assumption within
open source that bugs can be identified
and solved quickly by the “many eyes” of
a wide community—and indeed this can
be true. But when open-source software
affects millions of users and its maintenance is handled by handfuls of underpaid
individuals, the weight can be too much
for the system to bear. In 2021, a security
vulnerability in Log4j, a popular open-source Apache library, exposed hundreds of millions of devices to hacking
attacks. Major players across the industry
were affected, and large parts of the internet went down. The vulnerability’s lasting
impact is hard to quantify even now.
Other risks emerge from open-source
development without the support of ethical
guardrails. Proprietary efforts like Google’s
Bard and OpenAI’s ChatGPT have demonstrated that AI can perpetuate existing
biases and may even cause harm—while
also not providing the transparency that
could help a larger community audit the
technology, improve it, and learn from
its mistakes. But allowing anyone to use,
modify, and distribute AI models and technology could accelerate their misuse. One
week after Meta began granting access to
its AI model LLaMA, the package leaked
onto 4chan, a platform known for spreading
misinformation. LLaMA 2, a new model
released in July, is fully open to the public, but the company has not disclosed its
training data as is typical in open-source
projects—putting it somewhere in between
open and closed by some definitions, but
decidedly not open by OSI’s. (OpenAI is
reportedly working on an open-source
model as well but has not made a formal
announcement.)
“There are always trade-offs in the
decisions you make in technology,” says
Margaret Mitchell, chief ethics scientist at
Hugging Face. “I can’t just be wholeheartedly supportive of open source in all cases
without any nuances or caveats.” Mitchell
and her team have been working on open-source tools to help communities safeguard
their work, such as gating mechanisms
to allow collaboration only at the project
owner’s discretion, and “model cards”
that detail a model’s potential biases and
social impacts—information researchers
and the public can take into consideration
when choosing which models to work with.
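On Hugging Face’s hub, a model card is the model repository’s README with a structured metadata header, so it can be inspected programmatically before a model is adopted. A minimal sketch, assuming the huggingface_hub library’s ModelCard API (the repo id is only an illustration):

```python
# A sketch of vetting a model via its model card, assuming the
# huggingface_hub library's ModelCard API; the repo id is illustrative.
from huggingface_hub import ModelCard

card = ModelCard.load("bert-base-uncased")  # fetches the repo's README.md

# The YAML header carries structured metadata: license, tags, and so on.
print(card.data.license)
print(card.data.tags)

# The body is free-form prose where authors document training data,
# intended uses, and known biases and limitations.
print(card.text[:500])
```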
Open-source software has come a
long way since its rebellious roots. But
carrying it forward and making it into a
movement that fully reflects the values
of openness, reciprocity, and access will
require careful consideration, financial
and community investment, and the movement’s characteristic process of self-improvement through collaboration. As the
modern world becomes more dispersed
and diverse, the skill sets required to work
asynchronously with different groups of
people and technologies toward a common
goal are only growing more essential. At
this rate, 40 years from now technology
might look more open than ever—and the
world may be better for it.
Rebecca Ackermann is a writer,
designer, and artist based in San
Francisco.
Researchers can now coax cells like those in this photomicrograph of endometrial tissue into microcosms of the human uterus.
Tiny faux organs could finally crack the mystery of menstruation
Organoids are helping researchers explore one of the last frontiers of
human physiology. By Saima Sidik
In the center of the laboratory dish, there
was a subtle white film that could only be
seen when the light hit the right way. Ayse
Nihan Kilinc, a reproductive biologist,
popped the dish under the microscope,
and an image appeared on the attached
screen. As she focused the microscope, the
film resolved into clusters of droplet-like
spheres with translucent interiors and
thin black boundaries. In this magnified
view, the structures ranged in size from
as small as a quarter to as large as a golf
ball. In reality, each was only as big as a
few grains of sand.
“They’re growing,” Kilinc said, observing that their plump shapes were a promising sign. “These are good organoids.”
Kilinc, who works in the lab of biological engineer Linda Griffith at MIT, is
among a small group of scientists using
new tools akin to miniature organs to
study a poorly understood—and frequently
problematic—part of human physiology:
menstruation. Heavy, sometimes debilitating periods strike at least a third of
people who menstruate at some point in
their lives, causing some to miss weeks of
work or school every year and jeopardizing their professional standing. Anemia
threatens about two-thirds of people with
heavy periods. And when menstrual blood
flows through the fallopian tubes and into
the body cavity, it’s thought to sometimes
create painful lesions—characteristics
of a disease called endometriosis, which
can require multiple surgeries to control.
No one is entirely sure how—or why—
the human body choreographs this monthly
dance of cellular birth, maturation, and
death. Many people desperately need
treatments to make their period more manageable, but it’s difficult for scientists to
design medications without understanding
how menstruation really works.
That understanding could be in the works, thanks to endometrial organoids—biomedical tools made from bits of the tissue that lines the uterus, called the endometrium. To make endometrial organoids, scientists collect cells from a human volunteer and let those cells self-organize in laboratory dishes, where they develop into miniature versions of the tissue they came from. The research is still very much in its infancy. But organoids have already provided insights into how endometrial cells communicate and coordinate, and why menstruation is routine for some people and fraught for others. Some researchers are hopeful that these early results mark the dawn of a new era. “I think it’s going to revolutionize the way we think about reproductive health,” says Juan Gnecco, a reproductive engineer at Tufts University.
An uncommon problem
Periods are rare in the animal kingdom. The
human body goes through the menstrual
cycle to prepare the uterus to welcome a
fetus, whether one is likely to show up or
not. In contrast, most animals prepare the
uterus only once a fetus is already present.
That cycle is a constant pattern of
wounding and repair. The process starts
when levels of a hormone called progesterone plummet, indicating that no baby
will be growing in the uterus that month.
Removing progesterone triggers a response
similar to what happens when the body
fights off an infection. Inflammation injures
the endometrium. Over the next five or so
days, the damaged tissue sloughs off and
flows out of the body.
As soon as the bleeding starts, the endometrium begins to heal. Over the course
of about 10 days, this tissue quadruples in
thickness. No other human tissue is known
to grow so extensively and so quickly—“not
even aggressive cancer cells,” says Jan
Brosens, an obstetrician and gynecologist
at the University of Warwick in the UK.
As the tissue heals—in a rare example of
scarless repair—it becomes an environment that can shield an embryo, which is a
foreign entity in the body, from an immune
system trained to reject interlopers.
Scientists have filled in the rough
outline of this process after decades of
research, but many details remain opaque.
How exactly the endometrium repairs itself
so extensively is unknown. Why some
people have much heavier periods than
others remains an open question. And why
humans menstruate, rather than reabsorbing unused endometrial tissue like many
other mammals, is a matter of hot debate
among biologists.
This lack of understanding hampers
scientists, who would like to find treatments for periods that are too painful to
be tamed by over-the-counter painkillers
or too heavy to be absorbed by pads and
tampons. As a result, many people suffer. A
study performed in the Netherlands found
that on average women lost about a week
of productivity per year because of abdominal pain and other symptoms related to
their periods. “It would not be unusual for
a patient to see me in the clinic and say
that every month, they had to have two or
three days off work,” says Hilary Critchley,
a gynecologist and reproductive biologist
at the University of Edinburgh.
Heavy periods can make even daily
tasks difficult. Getting up from a chair,
for example, can be an ordeal for someone
worried about the possibility of having
stained the seat. Mothers with low iron
levels tend to have babies with low birth
weights and other health problems, so
the effects of heavy menstruation trickle
down through generations. And yet the
uterus often goes unacknowledged, even by
researchers who are exploring topics like
tissue regeneration, to which the organ is
clearly relevant, Brosens says. “It is almost
unforgivable, in my view,” he adds.
Ask researchers why menstruation
remains so enigmatic and you’ll get a variety of answers. Most everyone agrees
there’s not enough funding to attract the
number of researchers the field deserves—
as is often the case for health problems
that primarily affect women. The fact that
menstruation is shrouded in taboos doesn’t
help. But some researchers say it has been
hard to find the right tools to study the
phenomenon.
Scientists tend to start studies of the
human body in other organisms, such as
mice, fruit flies, and yeast, before translating the knowledge back to humans.
These so-called “model systems” reproduce quickly and can be altered genetically, and scientists can work with them
without running into as many ethical or
logistical concerns as they would if they
experimented on people. But because
menstruation is so rare in the animal
kingdom, it’s been tough to find ways
to study the process outside the human
body. “I think that the main limitations are
model systems, honestly,” says Julie Kim,
a reproductive biologist at Northwestern
University.
Early adventures
In the 1940s, the Dutch zoologist Cornelius
Jan van der Horst was among the first
scientists to work on an animal model for
studying menstruation. Van der Horst
was fascinated by unusual, poorly studied critters, and this fascination led him
to South Africa, where he trapped and
studied the elephant shrew. With a long
snout reminiscent of an elephant’s trunk
and a body similar to an opossum’s, the
elephant shrew was already an oddball
when van der Horst learned that it’s one
of the few animals that get a period—a fact
he probably discovered “more or less by accident,” says Anthony Carter, a developmental biologist at the University of Southern Denmark who wrote a review of van der Horst’s work.
Elephant shrews are not cooperative study subjects, however. They only menstruate at certain times of year, and they don’t do well in captivity. There’s also the challenge of catching them, which van der Horst and his colleagues attempted with hand-held nets. The shrews were agile, so it was “sometimes a fascinating but mostly a disappointing sport,” he wrote.
Researchers can track how organoids respond to various stimuli. Here endometrial tissue thickens when exposed to a synthetic version of the hormone progesterone, mirroring the lead-up to menstruation.
Around the same time, George W.D. Hamlett, a Harvard-based biologist, discovered an alternative. Hamlett was examining preserved samples of a nectar-loving bat called Glossophaga soricina when he noticed evidence of menstruation. The bats, which live primarily in Central and South America, were not easily accessible, so for several decades his discovery remained simply a point of interest in the scientific literature.
Then, in the 1960s, an eager graduate student named John J. Rasweiler IV enrolled at Cornell University. Rasweiler wanted to study a type of animal reproduction that mirrors what happens in humans, so his mentor pointed out Hamlett’s discovery. Perhaps Rasweiler would like to go find some bats and see what he could do with them?
“It was a very challenging undertaking,” Rasweiler says. “Essentially I had to invent everything from start to finish.” First there were the trips to Trinidad and
Colombia to collect the bats. Then there
was the issue of how to transport them back
to the United States without their getting
crushed or overheating. (Shipping them in
takeout food containers, bundled together
into a larger package, turned out to work
well.) Once the bats were in the lab, he
had to figure out how to work with them
without letting them escape. He ended up
constructing a walk-in cage on wheels that
he could roll up to the bats’ enclosures.
“I loved working with them—delightful
animals,” says Rasweiler, who has since
retired from a career as a reproductive
physiologist at SUNY Downstate. But
other researchers were put off by the idea
of working with a flying animal.
In 2016, the spiny mouse—a rodent
that thrives in the dry conditions of the
Middle East, South Asia, and parts of
Africa—joined the exclusive club of animals known to menstruate. Spiny mice
can be raised in the lab, so they may
become valuable subjects for menstruation research. But millions of years of
evolution lie between humans and mice,
leading Brosens to think the genetics
underlying their uteruses are likely to
differ substantially.
Much of the foundational work on menstruation has been performed in macaque
monkeys. But primates are expensive to
care for, and the Animal Welfare Act places
restrictions on primate research that do
not apply to other common lab animals.
Through a series of manipulations, scientists also found that they could force
a common lab mouse to have something
similar to a period. This model has been
useful, but it’s still only an artificial representation of true human menstruation.
What researchers really needed was a
way to use humans as study subjects for
menstruation research. But even setting
aside the obvious ethical concerns, such
a thing would be very challenging logistically. The endometrium evolves exceedingly quickly—“at an hourly rate, we see
different responses from the cells, different
functions,” says Aleksandra Tsolova, a cell
biologist at the University of Calgary. “It’s
very dynamic tissue.” Researchers would
need to perform invasive biopsies almost
constantly to study it inside the human
body, and even then, altering it genetically
or through chemical treatments would be
largely impossible.
But by the early 1900s, a solution to this
problem had already started to emerge.
And it was not a creature from the jungle
or the African grasslands that paved the
road, but an organism from the bottom
of the sea.
Organoids come on the scene
The groundwork for what would become
modern-day organoids was laid in 1910,
when a zoologist named Henry Van Peters
Wilson realized that cells from marine
sponges have a sort of “memory” of how
they’re arranged in the animal, even after
they’re separated. When he dissociated a
sponge by squeezing it through a mesh
and then mixed the cells together again,
the original sponge re-formed. Midcentury
work showed that certain cells from chick
embryos have a similar ability.
In 2009, a study published in the journal
Nature described a possible way of extending these observations to human organs.
The researchers took a single adult stem
cell from a mouse intestine—which had
the ability to become any type of intestinal
cell—and embedded it in a gelatinous substance. The cell divided and, together with
its progeny, formed a miniature, simplified
version of the intestinal lining. It was the
first time scientists had laid out a method
of creating an organoid from human tissue that was accessible to many labs and
straightforward to adapt to other organs.
Since then, scientists have extended
this general approach to mimic aspects of
around a dozen human tissue types, including those from the gut, the kidneys, and the
brain—and, by the late 2010s, the uterus.
It was a happy accident that brought
endometrial organoids into the mix. In
the years leading up to their development, scientists had been trying to study
the endometrium by growing its cells in
smooth layers on the bottoms of laboratory
dishes. Stromal cells, which provide structural support for the tissue and play a key
role in pregnancy, proved easy to grow this
way—these cells secrete a substance that
sticks them to each other, and also makes
them adhere to petri dishes. But epithelial
cells, another critical component of the
endometrium, posed a problem. In a dish,
they stopped responding to hormones,
and their shapes were unlike what’s seen
in the human body.
Then, while working with a mix of
human placental and endometrial tissue
in an effort to get the placenta to form
organoids, a reproductive biologist named
Margherita Turco noticed something serendipitous. If they were suspended in a gel
instead of being grown in liquid, and given
just the right mix of molecules from the
human body, endometrial epithelial cells
assembled into tiny three-dimensional
simulacra of the organ they came from.
“It’s mind-blowing that we are
very, very close to the patient,
but we’re not working within the
patient. There’s huge potential.”
“They grew really, really well,” Turco says.
In fact, endometrial organoids were “kind
of overtaking the cultures.” Another group
independently published similar findings
around the same time.
Today, placental and endometrial organoids are both valuable tools in the lab Turco
runs at the Friedrich Miescher Institute for
Biomedical Research in Basel, Switzerland.
Her original 2017 publication calls for using
tissue from a biopsy, rather than stem cells,
to make organoids from the endometrium.
Some labs instead use tissue removed
from people who have had hysterectomies.
But Turco’s lab recently showed that bits
of the endometrium found in menstrual
blood also work, which would mean the
new endometrial organoids can be grown
without requiring biopsies or surgery.
From all these starting points, researchers can now create microcosms of the
human uterus. Each organoid reminds
Tsolova of a tiny bubble suspended in a
gelatinous dessert. And each presents a
unique opportunity to understand processes that science has long ignored.
Period in a dish
Endometrial organoids became integral
to the work of the small community of
researchers focused on the uterus. Since
2017, many labs have put their own spins
on these new tools.
Kim’s lab has added stromal cells to
the epithelial cells that make up classic
endometrial organoids. She and her colleagues mix the two together and simply
let the combination “do its thing,” she
says. The result is like a malt ball with
stromal cells on the inside and epithelial
cells on the outside.
In 2021, Brosens and his colleagues
created similar structures, which they
call “assembloids.” Instead of mixing
the two cell types together, they created
an organoid out of epithelial cells and
then added a layer of stromal cells on
top. Using assembloids, they’ve learned
that deteriorating cells play a key role
in helping the embryo implant in the
uterus. Because the endometrium is constantly dying and regrowing, the tissue
is highly flexible and able to adjust its
shape, Brosens explains. This helps the
tissue kick-start pregnancy: “Maternal
cells will grab the embryo,” he says, “and
literally pull that embryo into the tissue.”
A video from one of Brosens’s recent
publications shows an assembloid remodeling around a five-day-old embryo. Before
he and his colleagues did this work, conventional wisdom said the endometrium
was passive tissue that was invaded by the
embryo, but that’s “just completely wrong,”
he says. This new understanding of how
embryos implant could improve in vitro
fertilization and help explain why some
people are prone to miscarriages.
Eventually, Critchley hopes, scientists can design treatments that let people
choose when to have a period—or if they
even want to have one at all. Hormonal
birth control can accomplish these goals
for some, but these drugs can also cause
unscheduled bleeding that makes periods harder to manage, and some people
find the side effects of the medication
intolerable.
To create better options, scientists
still need to understand how a normal
period works. Making an organoid menstruate in a dish would be a huge boon for
achieving this goal, so that’s what some
researchers are trying to do.
Margherita Turco's laboratory at the Friedrich Miescher Institute for Biomedical Research in Switzerland has found that organoids derived directly from the endometrium (left) and from menstrual blood (right) of the same person have indistinguishable shapes and structures. (Scale bar: 200 µm. Image: Tereza Cindrova-Davies et al., Communications Biology.)

By manually adding hormones to organoids, Gnecco and his collaborators can replicate some of what the endometrium experiences over the course of a month. As the cycle progresses, they see the cells adjusting the complement of genes they use, just as they would in the human body. The shape of the organoid also follows a familiar pattern. Glands—infoldings of cells from which mucus and other substances are secreted—change from smooth tubes to sawtooth-like structures as this faux menstrual cycle progresses.

With this system working, the next step is to figure out what happens when the endometrium malfunctions. "That's what really got me excited," Gnecco says. As a first step, he treated organoids with an inflammatory molecule called IL-1β, which is a hallmark of the lesions that characterize endometriosis. IL-1β caused organoids to grow rapidly, but only when stromal cells were mixed in along with the epithelial cells. This suggests that signals from stromal cells might be part of what causes endometriosis to develop into a painful condition.

Meanwhile, Kilinc is trying to understand why some people's periods are so heavy. Endometrial tissue growing into the muscle that lines the uterus seems to cause lesions, which can be one source of excessive bleeding. To see how such lesions could form, Kilinc watches how endometrial organoids react when they hit a dense gel, which mimics the texture of muscle.
In a soft gel, endometrial organoids
maintain a nice, round structure. But when
the organoid is in a stiff gel, it’s a different
story. A video from one of Kilinc’s recent
experiments shows an organoid pulsating
and squirming, almost like a pot of water
that’s about to boil over. Finally, a group
of cells shoots off, creating an appendage-like structure that punctures the stiff gel.
Videos like this make Kilinc think that
contact with muscle might be among the
triggers that cause the endometrium to
start wounding this tissue and causing
heavy bleeding. “But,” she adds, “this is
not clear yet—we are still investigating.”
Speedier science
Today’s endometrial organoids can’t do
everything animal models can do. For one
thing, they don’t yet include key components of menstruation, like blood vessels
and immune cells. For another, they can’t
reveal how distant parts of the body, like
the brain, influence what happens in the
uterus. But because they’re derived from
human tissue, they’re intimately related
to the bizarre, idiosyncratic process that
is a human period, and that’s worth a
lot. “It’s mind-blowing that we are very,
very close to the patient, but we’re not
working within the patient,” Tsolova says.
“There’s huge potential.”
In parallel to the work on organoids,
scientists have created an “organ on a
chip” that mimics the endometrium.
Tiny tubes affixed to a flat surface carry
liquids to endometrial tissue, mimicking
the flow of blood or hormones transmitted from other parts of the body. An ideal
model system could combine endometrial
cells in their natural arrangement—as
in an organoid—with flowing liquids,
as on a chip.
Already, organoids have helped
researchers solve old puzzles. Researchers
in Vienna, for example, used this technology to figure out which genes cause some
endometrial cells to grow cilia—hair-like
structures that beat in coordination to
move liquid, mucus, and embryos within
the uterus. Other researchers have used
organoids to learn how endometrial cells
mature throughout the menstrual cycle.
Meanwhile, Kim and her colleagues used
organoids to study how the endometrium
responds to abnormal hormone levels,
which may be a factor in endometrial
cancer.
People who menstruate have waited a
long time for researchers to tackle such
questions. Burdensome periods are often
seen as just a “women’s problem”—a
mindset Tsolova disagrees with because
it ignores the fact that people struggling
with menstruation often can’t contribute
their full range of talents to their communities. “It’s a societal problem,” she says.
“It affects every person, in every way.”
Saima Sidik is a freelance science
journalist based in Somerville,
Massachusetts.
35 Innovators Under 35
Tips for aspiring innovators on
trying, failing, and the future of AI.
By Andrew Ng
How to be an innovator
Innovation is a powerful engine for
uplifting society and fueling economic growth. Antibiotics, electric
lights, refrigerators, airplanes, smartphones—we have these things
because innovators created something that didn’t exist before. MIT
Technology Review’s Innovators Under 35
list celebrates individuals who have accomplished a lot early in their careers and are
likely to accomplish much more still.
Having spent many years working on
AI research and building AI products, I’m
fortunate to have participated in a few
innovations that made an impact, like using
reinforcement learning to fly helicopter
drones at Stanford, starting and leading
Google Brain to drive large-scale deep
learning, and creating online courses that
led to the founding of Coursera. I’d like
to share some thoughts about how to do
it well, sidestep some of the pitfalls, and
avoid building things that lead to serious
harm along the way.
AI is a dominant driver of innovation today
As I have said before, I believe AI is the
new electricity. Electricity revolutionized
all industries and changed our way of life,
and AI is doing the same. It’s reaching into
every industry and discipline, and it’s yielding advances that help multitudes of people.
AI—like electricity—is a general-purpose technology. Many innovations,
such as a medical treatment, space rocket,
or battery design, are fit for one purpose.
In contrast, AI is useful for generating art,
serving web pages that are relevant to a
search query, optimizing shipping routes
to save fuel, helping cars avoid collisions,
and much more.
The advance of AI creates opportunities
for everyone in all corners of the economy
to explore whether or how it applies to
their area. Thus, learning about AI creates
disproportionately many opportunities to
do something that no one else has ever
done before.
For more than 20 years, this
publication has highlighted
the work of young innovators
through our 35 Innovators Under
35 competition—partly to call
attention to what’s going on now,
but even more to reveal where
technology is headed in the near
future. This year we’re excited
to include an introductory essay
by Andrew Ng (a 35 Innovators
honoree himself in 2008) and
a profile of our Innovator of
the Year, Sharon Li (page 76).
To see the full list, along with
descriptions of the work of all
this year’s winners, please visit
technologyreview.com/supertopic/2023-innovators
starting September 12.
For instance, at AI Fund, a venture
studio that I lead, I’ve been privileged
to participate in projects that apply AI to
maritime shipping, relationship coaching, talent management, education, and
other areas. Because many AI technologies are new, their application to most
domains has not yet been explored. In
this way, knowing how to take advantage
of AI gives you numerous opportunities
to collaborate with others.
Looking ahead, a few developments
are especially exciting.
Prompting: While ChatGPT has popularized the ability to prompt an AI
model to write, say, an email or a
poem, software developers are just
beginning to understand that prompting enables them to build in minutes
the types of powerful AI applications
that used to take months. A massive wave of AI applications will be built this way. (A minimal sketch of the idea appears below.)
Vision transformers: Text transformers—language models based on the transformer neural network architecture, which was invented in 2017 by Google Brain and collaborators—have revolutionized writing. Vision transformers, which adapt transformers to computer vision tasks such as recognizing objects in images, were introduced in 2020 and quickly gained widespread attention. The buzz around vision transformers in the technical community today reminds me of the buzz around text transformers a couple of years before ChatGPT. A similar revolution is coming to image processing. Visual prompting, in which the prompt is an image rather than a string of text, will be part of this change.

AI applications: The press has given a lot of attention to AI's hardware and software infrastructure and developer tools. But this emerging AI infrastructure won't succeed unless even more valuable AI businesses are built on top of it. So even though a lot of media attention is on the AI infrastructure layer, there will be even more growth in the AI application layer.

These areas offer rich opportunities for innovators. Moreover, many of them are within reach of broadly tech-savvy people, not just people already in AI. Online courses, open-source software, software as a service, and online research papers give everyone tools to learn and start innovating. But even if these technologies aren't yet within your grasp, many other paths to innovation are wide open.
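To ground the prompting point above: here is a minimal sketch of a sentiment classifier built by prompting a large language model, the kind of application that once meant months of data labeling and model training. The client library (the 2023-era OpenAI Python package) and model name are illustrative assumptions, not tools the essay specifies.

```python
# A minimal sketch of building an application by prompting, rather than
# by collecting data and training a bespoke model.
# Assumes the 2023-era OpenAI Python client (pip install openai==0.28)
# and an OPENAI_API_KEY environment variable; the model name is illustrative.
import openai

def classify_sentiment(text: str) -> str:
    """Return 'positive', 'negative', or 'neutral' for a piece of text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text. "
                        "Answer with exactly one word: "
                        "positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # keep the classifier's output stable
    )
    return response["choices"][0]["message"]["content"].strip().lower()

print(classify_sentiment("The keynote was inspiring and the demos worked."))
```

Changing the system prompt turns the same scaffold into a translator, a summarizer, or an email drafter, which is what makes prompting such a fast path from idea to working application.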
Be optimistic, but dare to fail

That said, a lot of ideas that initially seem promising turn out to be duds. Duds are unavoidable if you take innovation seriously. Here are some projects of mine that you probably haven't heard of, because they were duds:

I spent a long time trying to get aircraft to fly autonomously in formation to save fuel (similar to birds that fly in a V formation). In hindsight, I executed poorly and should have worked with much larger aircraft.

I tried to get a robot arm to unload dishwashers that held dishes of all different shapes and sizes. In hindsight, I was much too early. Deep-learning algorithms for perception and control weren't good enough at the time.

About 15 years ago, I thought that unsupervised learning (that is, enabling machine-learning models to learn from unlabeled data) was a promising approach. I mistimed this idea as well. It's finally working, though, as the availability of data and computational power has grown.
It was painful when these projects
didn’t succeed, but the lessons I learned
turned out to be instrumental for other
projects that fared better. Through my
failed attempt at V-shape flying, I learned
to plan projects much better and frontload risks. The effort to unload dishwashers failed, but it led my team to build the
Robot Operating System (ROS), which
became a popular open-source framework
that’s now in robots from self-driving cars
to mechanical dogs. Even though my initial focus on unsupervised learning was a
poor choice, the steps we took turned out
to be critical in scaling up deep learning
at Google Brain.
Innovation has never been easy. When
you do something new, there will be skeptics. In my younger days, I faced a lot
of skepticism when starting most of the
projects that ultimately proved to be successful. But this is not to say the skeptics
are always wrong. I faced skepticism for
most of the unsuccessful projects as well.
As I became more experienced, I found
that more and more people would agree
with whatever I said, and that was even
more worrying. I had to actively seek out
people who would challenge me and tell
me the truth. Luckily, these days I am surrounded by people who will tell me when
they think I’m doing something dumb!
While skepticism is healthy and even
necessary, society has a deep interest in
the fruits of innovation. And that is a good
reason to approach innovation with optimism. I’d rather side with the optimist who
wants to give it a shot and might fail than
the pessimist who doubts what’s possible.
Take responsibility for your work
As we focus on AI as a driver of valuable innovation throughout society, social
responsibility is more important than ever.
People both inside and outside the field
see a wide range of possible harms AI
may cause. These include both short-term
issues, such as bias and harmful applications of the technology, and long-term
risks, such as concentration of power and
potentially catastrophic applications. It’s
important to have open and intellectually
rigorous conversations about them. In that
way, we can come to an agreement on what
the real risks are and how to reduce them.
Over the past millennium, successive
waves of innovation have reduced infant
mortality, improved nutrition, boosted
literacy, raised standards of living worldwide, and fostered civil rights including
protections for women, minorities, and
other marginalized groups. Yet innovations
have also contributed to climate change,
spurred rising inequality, polarized society, and increased loneliness.
Clearly, the benefits of innovation come
with risks, and we have not always managed them wisely. AI is the next wave, and
we have an obligation to learn lessons from
the past to maximize future benefits for
everyone and minimize harm. This will
require commitment from both individuals and society at large.
At the social level, governments are
moving to regulate AI. To some innovators,
regulation may look like an unnecessary
restraint on progress. I see it differently.
Regulation helps us avoid mistakes and
enables new benefits as we move into an
uncertain future. I welcome regulation
that calls for more transparency into the
opaque workings of large tech companies; this will help us understand their
impact and steer them toward achieving
broader societal benefits. Moreover, new
regulations are needed because many
existing ones were written for a pre-AI
world. The new regulations should specify the outcomes we want in important
areas like health care and finance—and
those we do not want.
But avoiding harm shouldn’t be just a
priority for society. It also needs to be a priority for each innovator. As technologists,
we have a responsibility to understand the
implications of our research and innovate
in ways that are beneficial. Traditionally,
many technologists adopted the attitude
that the shape technology takes is inevitable and there’s nothing we can do about
it, so we might as well innovate freely. But
we know that’s not true.
When innovators choose to work on differential privacy (which allows AI to learn
from data without exposing personally
identifying information), they make a powerful statement that privacy matters. That
statement helps shape the social norms
adopted by public and private institutions.
Conversely, when innovators create Web3
cryptographic protocols to launder money,
that too creates a powerful statement—in
my view, a harmful one—that governments
should not be able to trace how funds are
transferred and spent.
If you see something unethical being
done, I hope you’ll raise it with your colleagues and supervisors and engage them
in constructive conversations. And if you
are asked to work on something that you
don’t think helps humanity, I hope you’ll
actively work to put a stop to it. If you are
unable to do so, then consider walking
away. At AI Fund, I have killed projects
that I assessed to be financially sound
but ethically unsound. I urge you to do
the same.
Now, go forth and innovate! If you’re
already in the innovation game, keep at it.
There’s no telling what great accomplishment lies in your future. If your ideas are
in the daydream stage, share them with
others and get help to shape them into
something practical and successful. Start
executing, and find ways to use the power
of innovation for good.
Andrew Ng is a renowned global
AI innovator. He leads AI Fund,
DeepLearning.AI, and Landing AI.
This year we’re introducing
a new feature to the
35 Innovators Under 35
competition. We’re naming
an Innovator of the Year—
someone whose work not
only is exemplary but also
manages to somehow
capture the zeitgeist.
For 2023 we’re happy to
announce Sharon Li as our
Innovator of the Year. Li
received the highest overall
numerical score from our
judges, and her research
on developing safer AI
models is directly aimed
at one of the most crucial
and perplexing problems
of our time.
As AI models are released into the wild, this innovator wants to make sure they're safe
Sharon Li’s research could
prevent AI models from
failing catastrophically when
they encounter unfamiliar
scenarios.
By Melissa Heikkilä
As we launch AI systems from the lab into the real world, we need to be prepared for these systems to break in
surprising and catastrophic ways. It’s already happening. Last
year, for example, a chess-playing robot
arm in Moscow fractured the finger of a
seven-year-old boy. The robot grabbed
the boy’s finger as he was moving a chess
piece and let go only after nearby adults
managed to pry open its claws.
This did not happen because the robot
was programmed to do harm. It was
because the robot was overly confident
that the boy’s finger was a chess piece.
The incident is a classic example of
something Sharon Li, 32, wants to prevent. Li, an assistant professor at the
University of Wisconsin, Madison, is
a pioneer in an AI safety feature called
out-of-distribution (OOD) detection.
This feature, she says, helps AI models determine when they should abstain from action if faced with something they weren't trained on.

Li developed one of the first algorithms on out-of-distribution detection for deep neural networks. Google has since set up a dedicated team to integrate OOD detection into its products. Last year, Li's theoretical analysis of OOD detection was chosen from over 10,000 submissions as an outstanding paper by NeurIPS, one of the most prestigious AI conferences.

We're currently in an AI gold rush, and tech companies are racing to release their AI models. But most of today's models are trained to identify specific things and often fail when they encounter the unfamiliar scenarios typical of the messy, unpredictable real world. Their inability to reliably understand what they "know" and what they don't "know" is the weakness behind many AI disasters.

Li's work calls on the AI community to rethink its approach to training. "A lot of the classic approaches that have been in place over the last 50 years are actually safety unaware," she says.

Her approach embraces uncertainty by using machine learning to detect unknown data out in the world and design AI models to adjust to it on the fly. Out-of-distribution detection could help prevent accidents when autonomous cars run into unfamiliar objects on the road, or make medical AI systems more useful in finding a new disease.

"In all those situations, what we really need [is a] safety-aware machine-learning model that's able to identify what it doesn't know," says Li.

This approach could also aid today's buzziest AI technology, large language models such as GPT-4 and chatbots such as ChatGPT. These models are often confident liars, presenting falsehoods as facts. This is where OOD detection could help. Say a person asks a chatbot a question it doesn't have an answer to in its training data. Instead of making something up, an AI model using OOD detection would decline to answer.

Li's research tackles one of the most fundamental questions in machine learning, says John Hopcroft, a professor at Cornell University, who was her PhD advisor. Her work has also seen a surge of interest from other researchers. "What she is doing is getting other researchers to work," says Hopcroft, who adds that she's "basically created one of the subfields" of AI safety research.

Now, Li is seeking a deeper understanding of the safety risks related to large AI models, which are powering all kinds of new online applications and products. She hopes that by making the models underlying these products safer, we'll be better able to mitigate AI's risks.

"The ultimate goal is to ensure trustworthy, safe machine learning," she says.
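Li's published methods go further, but the abstention mechanic described above can be sketched in a few lines. The sketch below uses the energy score from "Energy-based Out-of-distribution Detection" (Liu et al., NeurIPS 2020), a paper Li co-authored; the toy model, threshold, and data are placeholders for illustration, not her implementation.

```python
# A sketch of abstention via out-of-distribution detection, using the
# energy score: inputs whose energy is high look unlike the training
# distribution, so the model declines to act on them.
import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Lower energy means the input looks more like the training data.
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

def predict_or_abstain(model: torch.nn.Module,
                       x: torch.Tensor,
                       threshold: float) -> list:
    """Return a class index per input, or None to abstain on likely-OOD inputs."""
    with torch.no_grad():
        logits = model(x)
    energies = energy_score(logits)
    preds = logits.argmax(dim=-1)
    # Abstain whenever the energy exceeds a threshold tuned on validation data.
    return [None if e > threshold else int(p)
            for e, p in zip(energies, preds)]

# Toy usage: a placeholder linear "classifier" over 8 features, 3 classes.
model = torch.nn.Linear(8, 3)
batch = torch.randn(4, 8)
print(predict_or_abstain(model, batch, threshold=-1.0))  # threshold is a stand-in
```

In practice the threshold is tuned on held-out in-distribution data, for example so that 95% of familiar inputs fall below it; the model then abstains mainly on inputs that resemble nothing it was trained on.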
How culture drives foul play
on the internet, and how we might
protect ourselves.
By Rebecca Ackermann
Illustration by George Wylesol
Fancy Bear Goes
Phishing: The Dark
History of the
Information Age, in Five
Extraordinary Hacks
by Scott J. Shapiro
FARRAR, STRAUS AND GIROUX,
2023
Online fraud, hacks, and scams, oh my
The world of online misdeeds is an eerie biome,
crawling with Bored
Apes, Fancy Bears, Shiba
Inu coins, self-replicating
viruses, and whales. But
the behavior driving
fraud, hacks, and scams on the internet
has always been familiar and very human.
New technologies change little about the
fact that illegal operations exist because
some people are willing to act illegally and
others fall for the stories they tell.
To wit: Crypto speculation looks a lot
like online sports betting, which looks
like offline sports betting; cyber hacking
resembles classic espionage; spear phishers recall flesh-and-blood con artists. The
perpetrators of these crimes lure victims
with well-worn appeals to faith and promises of financial reward. In Fancy Bear Goes
Phishing, Yale law professor Scott Shapiro
argues that technological solutions can’t
solve the problem because they can’t force
people to play nice online. The best ways
to protect ourselves from online tricks are
social—public policies, legal and business
incentives, and cultural shifts.
Shapiro’s book arrives just in time
for the last gasp of the latest crypto
wave, as major players find themselves
trapped in the nets of human institutions. In early June, the US Securities and
Exchange Commission went after Binance
and Coinbase, the two largest cryptocurrency exchanges in the world, a few
months after charging the infamous Sam
Bankman-Fried, founder of the massive
crypto exchange FTX, with fraud. While
Shapiro mentions crypto only as the main
means of payment in online crime, the
industry’s wild ride through finance and
culture deserves its own hefty chapter in
the narrative of internet fraud.
It may be too early for deep analysis,
but we do have first-person perspectives
on crypto from actor Ben McKenzie (former star of the teen drama The O.C.) and
streetwear designer and influencer Bobby
Hundreds, the authors of—respectively—
Easy Money and NFTs Are a Scam/NFTs
Are the Future. (More heavily reported
Easy Money:
Cryptocurrency, Casino
Capitalism, and the
Golden Age of Fraud
by Ben McKenzie
ABRAMS, 2023
NFTs Are a Scam/NFTs
Are the Future: The
Early Years: 2020–2023
by Bobby Hundreds
MCD, 2023
books on the crypto era from tech reporter
Zeke Faux and Big Short author Michael
Lewis are in the works.)
McKenzie testified at the Senate
Banking Committee’s hearing on FTX
that he believes the cryptocurrency industry “represents the largest Ponzi scheme
in history,” and Easy Money traces his
own journey from bored pandemic dabbler to committed crypto critic alongside the industry’s rise and fall. Hundreds
also writes a chronological account of his
time in crypto—specifically in nonfungible tokens, or NFTs, digital representational objects that he has bought, sold, and
“dropped” on his own and through The
Hundreds, a “community-based streetwear
brand and media company.” For Hundreds,
NFTs have value as cultural artifacts, and
he’s not convinced that their time should
be over (although he acknowledges that
between 2019 and the writing of his book,
more than $100 million worth of NFTs
have been stolen, mostly through phishing scams). “Whether or not NFTs are a
scam poses a philosophical question that
wanders into moral judgments and cultural
practices around free enterprise, mercantilism, and materialism,” he writes.
For all their differences (a lawyer, an
actor, and a designer walk into a bar …),
Shapiro, McKenzie, and Hundreds all
explore characters, motivations, and
social dynamics much more than they
do technical innovations. Online crime
is a human story, these books collectively
argue, and explanations of why it happens,
why it works, and how we can stay safe
are human too.
To articulate how internet crime comes
to be, Shapiro offers a new paradigm for
the relationship between humanity and
technology. He relabels technical computer code “downcode” and calls everything human surrounding and driving
it “upcode.” From “the inner operations
of the human brain” to “the outer social,
political, and institutional forces that define
the world,” upcode is the teeming ecosystem of humans and human systems
behind the curtain of technology. Shapiro
argues that upcode is responsible for all of
technology’s impacts—positive and negative—and downcode is only its product.
Technical tools like the blockchain, firewalls, or two-factor authentication may be
implemented as efforts to ensure safety
online, but they cannot address the root
causes upstream. For any technologist or
crypto enthusiast who believes computer
code to be law and sees human error as
an annoying hiccup, this idea may be disconcerting. But crime begins and ends
with humans, Shapiro argues, so upcode
is where we must focus both our blame
for the problem and our efforts to improve
online safety.
McKenzie and Hundreds deal with
crypto and NFTs almost entirely at the
upcode level: neither has training in computer science, and both examine the industry through personal lenses. For McKenzie,
it’s the financial realm, where friends
encouraged him to invest in tokens to compensate for being out of work during the
pandemic. For Hundreds, it’s the art world,
which has historically been inaccessible
to most and inhospitable for many—and
is what led him to gravitate toward streetwear as a creative outlet in the first place.
Hundreds saw NFTs as a signal of a larger
positive shift toward Web3, a nebulous
vision of a more democratized form of
the internet where creative individuals
could get paid for their work and build
communities of fans and artists without
relying on tech companies. The appeal of
Web3 and NFTs is based in cultural and
economic realities; likewise, online scams
happen because buggy upcode—like social
injustice, runaway capitalism, and corporate monopolies—creates the conditions.
Constructing downcode guardrails to allow in only "good" intentions won't solve online crime because bad acts are not so easily dismissed as the work of bad actors. The people who perpetrate scams, fraud, and hacks—or even participate in the systems around it, like speculative markets—often subscribe to a moral rubric as they act illegally. In Fancy Bear, Shapiro cites the seminal research of Sarah Gordon, the first to investigate the psychology of people who wrote computer viruses when this malware first popped up in the 1990s. Of the 64 respondents to her global survey, all but one had developmentally appropriate moral reasoning based on ethics, according to a framework created by the psychologist Lawrence Kohlberg: that is, these virus writers made decisions based on a sense of right and wrong. More recent research from Alice Hutchings, the director of the University of Cambridge's Cybercrime Centre, also found hackers as a group to be "moral agents, possessing a sense of justice, purpose, and identity." Many hackers find community in their work; others, like Edward Snowden, who leaked classified information from the US National Security Agency in 2013, cross legal boundaries for what they believe to be expressly moral reasons. Bitcoin, meanwhile, may be a frequent agent of crime but was in fact created to offer a "trustless" way to avoid relying on banks after the housing crisis and government bailouts of the 2000s left many wondering if traditional financial institutions could be trusted with consumer interests. The definition of crime is also upcode, shaped by social contracts as well as legal ones.
In NFTs Are a Scam/NFTs Are the
Future, Hundreds interviews the renowned
tech investor and public speaker Gary
Vaynerchuk, or “Gary Vee,” a figure he calls
the “face of NFTs.” It was Vee’s “zeal and
belief” that convinced Hundreds to create his own NFT collection, Adam Bomb
Squad. Vee tells Hundreds that critics “may
be right” when they call NFTs a scam. But
while some projects may be opportunistic rackets, he hopes the work he makes
is the variety that endures. Vee might be
lying here, but at face value, he professes a
belief in a greater good that he and everyone he recruits (including the thousands of
attendees at his NFT convention) can help
build—even if there’s harm along the way.
McKenzie spends much of two chapters in Easy Money describing his personal
encounters with FTX’s Bankman-Fried,
who was widely called the “King of Crypto”
before his fall. Bankman-Fried professes
to believe in crypto’s positive potential;
indeed, he has claimed on the record many
times that he wanted to do good with his
work, despite knowing at points that it was
potentially fraudulent. McKenzie struggles
to understand this point of view. “If we are
committing serious crimes like fraud,” he
speculates, “it is crucially important that
we find ways to justify our behavior to others and, crucially, to ourselves." While this
rationalization certainly doesn’t excuse any
crimes, it explains how people can perpetrate eye-boggling fraud again and again,
even inventing new ways to scam. The
human upcode that makes each of us see
ourselves as the protagonist of our story is
powerful, even and maybe especially when
billions of dollars are at stake.
Despite his research, McKenzie did
gamble on crypto—he shorted tokens on a
specific, and incorrect, timeline. He doesn’t
disclose how much he lost, but it was an
amount that “provokes an uncomfortable
conversation with your spouse.” He’s hardly
the only savvy individual in history to fall
for a risky pitch; our brains make it painfully
easy to get scammed, another reason why
solutions that rely entirely on computer
code don’t work. “The human mind is riddled with upcode that causes us to make
biased predictions and irrational choices,”
Shapiro writes. Take the “representativeness
heuristic,” which leads us to judge something by how much it resembles an existing
mental image—even if that may lead us to
overlook crucial information. If an animal
looks like a duck and quacks like a duck,
the representativeness heuristic tells us it
can swim. Phishing scams rely on this rush
to pattern matching. For example, Fancy
Bear, the titular Russian hacking group of
Shapiro’s book, used a visually and tonally
convincing message to attempt to hack into
Hillary Clinton campaign staffers’ email
accounts in 2016. It worked.
Also coming into play for scams, fraud,
and hacks are the “availability heuristic,”
which leads us to remember sensational
events regardless of their frequency, and
the “affect heuristic,” which leads us to
emphasize our feelings about a decision
over the facts, inflating “our expectations
about outcomes we like”—such as winning a huge payout on a gamble. When
Hundreds was concerned about whether
NFTs were a good investment, he reached
out to a friend whose belief was steadfast
and found himself calmed. “It was that
sense of conviction that separated the losers
from the winners,” he writes, even when the
facts might have supported stepping back.
The marketing pitch of communal
faith and reward, the enticement to join
a winning team, feeds a human social
instinct—especially as more offline modes
of connection are faltering. It’s telling that
after the SEC brought charges against
Coinbase, the company responded by issuing a pro-crypto NFT, imploring its community to offer support for the struggling
industry by minting it. (Coinbase and the minting platform Zora promise to donate the mint fees they'll receive from consumers to pro-crypto advocacy.) The crypto industry rose to power on this kind of faith-based relationship, and it continues to appeal to some: more than 135,000 of the Coinbase tokens have been minted since the SEC suit was announced. Beyond money, "we're just as motivated by identity and community (or its upside-down cousin, tribalism)," writes Hundreds, "and the most fervent contemporary movements and trends masterfully meld them all together. The only thing that feels as good as getting rich is doing so by rallying around an impassioned cause with a band of like-minded friends."

Technological innovation does not change our fundamental behavior as humans, but technology has brought speed and spread to the gambling table. A single perpetrator can reach more victims faster now that the global world is connected. The risks are higher now, as clearly demonstrated by the headline-exploding results of the 2016 Clinton email hack, the billions lost by investors in the volatile crypto industry, and billions more lost through crypto hacks and scams. Shapiro argues that the efforts of the antivirus and antihacking industry to code guardrails into our online systems have failed. Fraud goes on. Instead, we must reexamine the upcode that has fostered and supported online crimes: "our settled moral and political convictions on what we owe one another and how we should respect security and privacy." For Shapiro, effectively addressing online fraud, hacks, and scams requires
political, economic, and social shifts such
as creating incentives for businesses to
protect customers and penalties for data
breaches, supporting potential hackers in
finding community outside of crime, and
developing government and legal policies
to prevent illicit payment through mechanisms like cryptocurrencies.
Shapiro admits that shifting upcode this
way will likely take generations, but the
work has already started. The SEC’s recent
moves against crypto exchanges are promising steps, as are the FTC’s public warnings
against scammy AI claims and generative AI
fraud. Growing public awareness about the
importance of data privacy and security will
help too. But while some humans are working on evolving our social systems, others
will continue to hunt online for other people’s money. In our lifetimes, fraud, hacks,
and scams will likely always find a home on
the internet. But being aware of the upcode
all around us may help us find safer paths
through the online jungle.
Rebecca Ackermann is a writer and
artist in San Francisco.
Field notes
Right: View of Godalming, Surrey, UK,
population 21,000.
Servers that work from home

Wasted heat from computers is transformed into free hot water for homes. By Luigi Avantaggiato
Using heat generated by computers to provide free hot water was an idea born not
in a high-tech laboratory, but in a battered
country workshop deep in the woods of
Godalming, England.
“The idea of using the wasted heat of
computing to do something else has been
hovering in the air for some time,” explains
Chris Jordan, a 48-year-old physicist, “but
only now does technology allow us to do it
adequately.
“This is where I prototyped the thermal
conductor that carries heat from computer
processors to the cylinder filled with water,”
he says, opening his workshop door to reveal
a 90-liter electric boiler. “We ran the first
tests, and we understood that it could work.”
Jordan is cofounder and chief technology
officer of Heata, an English startup that has
created an innovative cloud network where
computers are attached to the boilers in
people’s homes.
Next to the boiler is a computer tagged
with a sticker that reads: “This powerful
computer server is transferring the heat
from its processing into the water in your
cylinder.” A green LED light indicates that
the boiler is running, Jordan explains. “The
machine receives the data and processes it.
Thus we are able to transfer the equivalent
of 4.8 kilowatt-hours of hot water, about the
daily amount used by an average family.”
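Jordan's 4.8-kilowatt-hour figure can be sanity-checked with the heat equation Q = mcΔT. The inlet and tank temperatures below are assumptions chosen for illustration, not numbers from Heata.

```python
# Rough check: how much water does 4.8 kWh of server heat warm up?
ENERGY_KWH = 4.8               # daily heat transferred by one Heata server
JOULES_PER_KWH = 3.6e6
C_WATER = 4186                 # specific heat of water, J per kg per kelvin
T_INLET, T_TANK = 15.0, 55.0   # assumed cold-inlet and cylinder temps, Celsius

joules = ENERGY_KWH * JOULES_PER_KWH
litres = joules / (C_WATER * (T_TANK - T_INLET))    # 1 kg of water is ~1 litre
print(f"{litres:.0f} litres of hot water per day")  # ~103 litres
print(f"{ENERGY_KWH * 365:.0f} kWh of heat per year")  # ~1,752 kWh
```

At those assumed temperatures, one server's daily output heats roughly 100 liters of water, consistent with Jordan's "daily amount used by an average family."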
When you sign up with Heata, it places
a server in your home, where it connects
via your Wi-Fi network to similar servers
in other homes—all of which process data
from companies that pay it for cloud computing services. Each server prevents one
ton of carbon dioxide equivalent per year
from being emitted and saves homeowners
an average of £250 on hot water annually,
a considerable discount in a region where
13% of the inhabitants struggle to afford
heat. The Heata trial, funded by a grant
from Innovate UK, a national government
agency, has been active in Surrey County
for more than a year. To date, 80 units have
been installed, and another 30 homes are slated to have a boiler heated this way by the end of October.
Heata’s solution is “particularly elegant,”
says Mike Pitts, deputy challenge director of
Innovate UK, calling it a way to “use electricity twice—providing services to a rapidly
growing industry (cloud computing) and
providing domestic hot water.” The startup is
now part of Innovate UK’s Net Zero Cohort,
having been identified as a key part of the
push to achieve an economy where carbon
emissions are either eliminated or balanced
out by other technologies.
Heata’s process is simple yet introduces
a radical shift toward sustainable management of data centers: instead of being cooled
with fans, which is expensive and energy
intensive, computers are cooled by a patented thermal bridge that transports the
heat from the processors toward the shell
of the boiler. And rather than operating with
a data center located in an energy-intensive
location, Heata works as an intermediary for
computing: it receives workloads and distributes them to local homes for processing.
Businesses that need to process data are
using the Heata network as a sustainable
alternative to traditional computing.
The company has created what Heata’s
designer and cofounder Mike Paisley
describes as a diffuse data center. Rather
than cooling a building that holds many servers, he explains, “our model of sustainability
moves data processing [to] where there is
need for heat, exploiting thermal energy
waste to provide free hot water to those who
need it, transforming a calculation problem
into a social and climatic advantage.”
The people involved in the Heata experiment are diverse in age and household
composition, and their reasons for participating are varied: a need to save on bills,
a love for the environment, an interest in
helping combat climate change, and fascination with seeing a computer heat the water.
Among the satisfied customers is Helen
Whitcroft, mayor of Surrey Heath. “We
started reducing our carbon footprint many
years ago by installing photovoltaic panels,”
she says. “We recently bought batteries to
store the energy we produce. Curiosity also
moved us: it didn’t seem possible that a
computer could heat water, but it works.”
Luigi Avantaggiato is an Italian
documentary photographer.
Left: Flats in Godalming, Surrey, UK.
Over 4 million people in the UK struggle
to afford heat.
Below: The Heata team among the trees
at Wood Farm, Godalming, where the idea
originated.
Above: A laser cutter produces
insulation for the Heata unit,
which harnesses excess heat
from cloud computing.
Left: Andrew, a mechanical
engineer, installs the Heata
unit in an apartment in Surrey.
At 75% utilization, the Heata
unit will provide around 80%
of an average UK household’s
hot water.
Below: Parts of the Heata unit
before assembly.
Homeowner James Heather on his Heata: “We no
longer need the energy for cooling the compute units,
and we don’t need the energy for heating our hot water
either, because we’re using the waste heat from the
unit to do it.”
A batch of heat pipes
at Heata Labs.
Heata’s CTO, Chris Jordan, in his workshop.
Dave, a radio engineer, tests the operation
of the server at Heata Labs.
A cell that does it all
For 25 years, embryonic stem
cells have been promising and
controversial in equal measure.
How far have they really come?
From “The Troubled Hunt for the Ultimate
Cell” (1998), by Antonio Regalado: “If awards
were given for the most intriguing, controversial, underfunded and hush-hush of scientific pursuits, the search for the human
embryonic stem (ES) cell would likely sweep
the categories. It’s a hunt for the tabula rasa
of human cells—a cell that has the potential
to give rise to any of the myriad of cell types
found in the body. If this mysterious creature
could be captured and grown in the lab, it
might change the face of medicine, promising, among other remarkable options, the
ability to grow replacement human tissue
at will … [but] these cells are found only
in embryos or very immature fetuses, and
pro-life forces have targeted the researchers
who are hunting for ES cells, hoping to stop
their science cold. In addition, the federal
government has barred federal dollars for
human embryo research, pushing it out of
the mainstream of developmental biology.
To make matters worse, human ES cells
could conceivably provide a vehicle for the
genetic engineering of people, and the ethical dilemmas surrounding human cloning
threaten to spill over onto this field.”
Update from the author (2023): The debate
lasted years, but science prevailed over religion in the stem-cell wars of the early 2000s.
Now research on ES cells is paid for by the
US government. Yet biology keeps offering
surprises. The latest? Research shows stem
cells in the lab can self-assemble back into
“synthetic” embryos, shockingly similar to
the real thing. And that’s the next debate.