Views
The columnist: Penny Sarchet muses on why orchids are so diverse p28
Aperture: Peruse stunning examples of world photography p30
Letters: Maybe our big bang was just the best of the bunch p32
Culture: Will neural tech lead to the end of privacy? p34
Culture columnist: Bethan Ackerley sees double in new Dead Ringers TV show p36
Comment | AI special report
Facing AI extinction
Why do many of today’s artificial intelligence researchers dismiss the potential risks to humanity, asks David Krueger
Illustration: Simone Rotella
In a recent White House press conference, press secretary Karine Jean-Pierre couldn’t suppress her laughter at the question: Is it “crazy” to worry that “literally everyone on Earth will die” due to artificial intelligence? Unfortunately, the answer is no.
While AI pioneers such as Alan Turing cautioned that we should expect “machines to take control”, many contemporary researchers downplay this concern. In an era of unprecedented growth in AI abilities, why aren’t more experts weighing in?
Before the deep-learning revolution in 2012, I didn’t think human-level AI would emerge in my lifetime. I was familiar with arguments that AI systems would insatiably seek power and resist shutdown – an obvious threat to humanity if it were to occur. But I also figured researchers must have good reasons not to be worried about human extinction risk (x-risk) from AI.
Yet after 10 years in the field, I believe the main reasons are actually cultural and historical. By 2012, after several hype cycles that didn’t pan out, most AI researchers had stopped asking “what if we succeed at replicating human intelligence?”, narrowing their ambitions to specific tasks like autonomous driving.
When concerns resurfaced outside their community, researchers were too quick to dismiss outsiders as ignorant and their worries as science fiction. But in my experience, AI researchers are themselves often ignorant of arguments for AI x-risk.
One basic argument is by analogy: humans’ cognitive abilities allowed us to outcompete other species for resources, leading to many extinctions. AI systems could likewise deprive us of the resources we need for our survival. Less abstractly, AI could displace humans economically and, through its powers of manipulation, politically.
But wouldn’t it be humans wielding AIs as tools who end up in control? Not necessarily. Many people might choose to deploy a system with a 99 per cent chance of making them phenomenally rich and powerful, even if it had a 1 per cent chance of escaping their control and killing everyone.
Because no safe experiment can definitively tell us whether an AI system will actually kill everyone, such concerns are often dismissed as unscientific. But this isn’t an excuse for ignoring the risk. It just means society needs to reason about it in the same way as other complex social issues. Researchers also emphasise the difficulty of predicting when AI might surpass human intelligence, but this is an argument for caution, not complacency.
Attitudes are changing, but not quickly enough. AI x-risk is admittedly more speculative than important social issues with present-day AI, like bias and misinformation, but the basic solution is the same: regulation. A robust public discussion is long overdue. By refusing to engage, some AI researchers are neglecting ethical responsibilities and betraying public trust.
Big tech sponsors AI ethics research when it doesn’t hurt the bottom line. But it is also lobbying to exclude general-purpose AI from EU regulation. Concerned researchers recently called for a pause on developing bigger AI models to allow society to catch up. Critics say this isn’t politically realistic, but problems like AI x-risk won’t go away just because they are politically inconvenient.
This brings us to the ugliest reason researchers may dismiss AI x-risk: funding. Essentially every AI researcher (myself included) has received funding from big tech. At some point, society may stop believing reassurances from people with such strong conflicts of interest and conclude, as I have, that their dismissal betrays wishful thinking rather than good counterarguments. ❚
For more on AI, see pages 12 and 46
David Krueger is an assistant professor in machine learning at the University of Cambridge
22 April 2023 | New Scientist | 27