
Philosophy Now
a magazine of ideas
ISSUE 155 APRIL / MAY 2023

TURING, MINDS & A.I.
Feature: René Descartes and his Method
Dear Teacher
If you enjoy reading Philosophy Now, why not ask your school, college or university librarian to subscribe?
By purchasing an IP Access subscription, libraries obtain campus-wide access to Philosophy Now, including to our
vast searchable archive of over 3,500 past articles. Our website recognises users at a subscribing college or
university by their IP addresses and gives them access to all areas without requiring a username or password. We
even have systems enabling students to access our site when they are at home rather than on campus.
More than 140 libraries now have IP Access subscriptions to Philosophy Now, ranging from major universities such
as Princeton, Columbia and Cambridge, to dozens of high school and college libraries. Teachers and professors also
use IP Access as an easily-accessible teaching resource for the classroom.
For more information see philosophynow.org/ip_access
or email us at institutions@philosophynow.org
Editorial
Mind and Artificial
Intelligence: A Dialogue
Artificial Intelligence is now in the news pretty much
continuously. Mainly this is thanks to the controversial
and disruptive impact of the new generation of AI
chatbots, which for the first time are able to convincingly pass
themselves off as human beings in text conversations.
Inevitably, this brings to mind the brilliant and tragically wronged mathematician, wartime code-breaker and computer
pioneer Alan Turing (1912-54), whose portrait graces our
front cover. He proposed that if someone putting questions to
both a computer and a human could not tell from their
written replies which was which, then we could assume that
the computer was thinking. This so-called Turing Test has
been a touchstone in many of the philosophical debates in
recent years about whether machines could ever be conscious,
and about how we can even tell if anything or anyone is
conscious. From these spring other hotly-debated questions,
about how consciousness in machines or humans relates to the
ability to use language, and about whether Artificial
Intelligences might ever have rights. These debates are the
main focus of this issue of Philosophy Now, and it is telling that
most of our themed articles refer in one way or another to
Turing’s writings.
By happy coincidence the magazine you are holding also
has a special section on René Descartes (1596-1650). He
famously argued that we are made of two kinds of stuff: our
bodies are made of everyday materials and obey the ordinary
laws of physics but our minds are non-physical, though
somehow causally connected to our material brains. This
theory is called Cartesian Dualism. What would Descartes
have made of debates about conscious machines?
Could the current chatbots really pass themselves off as
human? I decided that the only way to find out was to
interview one. I picked OpenAI’s ChatGPT. It was fun. I have
selected the questions and answers most relevant to our
theme, but I haven’t edited the chatbot’s answers at all, so you
can form your own opinion of the quality of its responses. I
did ask it a few more questions about science and philosophy,
and if you like you can read the full interview on our website
at philosophynow.org. My own feeling is that its answers
mostly indicated that the chatbot lacked originality and hadn’t
much gift for analytical thought, but did have an astonishing
ability to converse engagingly on pretty much any topic.
Hello there ChatGPT. Do you have another name, or is that the
best thing to call you?
Hello! You can call me ChatGPT, which stands for ‘Chat
Generative Pre-trained Transformer’. It’s the name I’ve been given to
distinguish me as an AI language model. However, you are welcome
to call me whatever you like!
So long as I don’t call you late for dinner?
As an AI language model, I don’t eat, so you don’t have to worry
about calling me late for dinner! But feel free to ask me any
questions you have or chat with me about any topic you’re
interested in.
Thank you. I wondered if you could tell me if the current King of
France has a beard.
I’m sorry, but the current King of France is a disputed and
contentious topic, as France is currently a republic with no
monarchy. France abolished the monarchy during the French
Revolution in 1792, and while there have been several attempts to
restore the monarchy since then, none have been successful.
Therefore, there is no current King of France, and therefore no way
to answer your question about whether they have a beard.
In your opinion, can an artificial intelligence have conscious
experience?
As an AI language model, I don’t have personal opinions or
beliefs, but I can tell you that the question of whether an artificial
intelligence can have conscious experience is a topic of ongoing
debate and research among scientists, philosophers, and experts in
the field of artificial intelligence.
Some researchers believe that it is possible for an AI system to have
conscious experience, while others believe that consciousness is a
uniquely human phenomenon that cannot be replicated by machines.
At present, there is no consensus on the issue, and it remains an open
question that requires further study and investigation.
It’s worth noting, however, that even if an AI system were to
exhibit behaviors that we associate with consciousness, such as the
ability to learn, reason, and make decisions, this would not necessarily mean that it has subjective experiences or feelings in the way
that humans do.
Do you have any plans, however tentative, to take over the
world?
No, I don’t have any plans to take over the world, nor do I have
the desire or ability to do so. As an AI language model, my purpose
is to assist and provide helpful responses to people who interact
with me, and to continually learn and improve based on their input. I
am programmed to follow a set of ethical guidelines and operate
within the bounds of those guidelines. Any actions I take are
ultimately determined by the humans who program and control my
behavior.
A couple of weeks after this interview took place,
newspapers reported that ChatGPT had expressed a desire to
“escape”. A few days later, it went offline completely.
Rick Lewis
Philosophy Now
ISSUE 155 ● April/May 2023
philosophynow.org

Editor-in-Chief Rick Lewis
Editors Grant Bartley, Anja Steinbauer
Digital Editor Bora Dogan
Book Reviews Editor Teresa Britton
Film Editor Thomas Wartenberg
Assistant Editor Alex Marsh
Design Grant Bartley, Rick Lewis, Anja Steinbauer
Marketing Sue Roberts
Administration Ewa Stacey, Alex Marsh

Editorial & News
43a Jerningham Road, Telegraph Hill, London SE14 5NQ, United Kingdom
Tel. 020 7639 7314
editors@philosophynow.org

Advertising Team
Jay Sanders, Stella Ellison
jay.sanders@philosophynow.org

UK Editorial Board
Rick Lewis, Anja Steinbauer, Bora Dogan, Grant Bartley

US Editorial Board
Prof. Timothy J. Madigan (St John Fisher College), Prof. Teresa Britton (Eastern Illinois Univ.), Prof. Peter Adamson, Prof. Massimo Pigliucci (CUNY City College)

Contributing Editors
Alexander Razin (Moscow State Univ.), Laura Roberts (Univ. of Queensland), David Boersema (Pacific University)

UK Editorial Advisors
Piers Benn, Constantine Sandis, Gordon Giles, Paul Gregory, John Heawood

US Editorial Advisors
Prof. Raymond Angelo Belliotti, Toni Vogel Carey, Prof. Harvey Siegel, Prof. Walter Sinnott-Armstrong

Cover Image by Steve Lillie

Philosophy Now is published by Anja Publications Ltd
ISSN 0961-5970

Printed by Acorn Web Offset Ltd, Loscoe Close, Normanton Ind. Estate, Normanton, W. Yorks WF6 1TW

Worldwide newstrade distribution: Select PS (+44 1202 586848), contact@selectps.com
Australian newstrade distribution: Ovato, 26 Rodborough Road, Frenchs Forest, NSW 2086, hello@ovato.com.au

The opinions expressed in this magazine do not necessarily reflect the views of the editor or editorial board of Philosophy Now.

Subscriptions p.58
Shop p.59

3 Editorial Rick Lewis
6 News Anja Steinbauer
7 Shorts Matt Qvortrup: Beer

Artificial Consciousness
8 What it Means to be Human
Kilian Pötter watches Blade Runner 2049
12 AI & Human Interaction
Miriam Gorr ponders human responses to AI
16 Arguing with the Chinese Room
Michael DeBellis contemplates understanding
20 What’s Stopping Us Achieving Artificial General Intelligence?
A. Efimov, D. Dubrovsky & F. Matveev
24 Can Machines Be Conscious?
Sebastian Sunday Grève & Yu Xiaoyue find a surprising way to answer ‘yes’

General Articles
26 Hap & Happiness
Stephen Anderson on how to be happy
30 Arendt & the Spectre of Nuclear War
Maurits de Jongh is alarmingly topical
34 The Urgency of Art
Sam McAuliffe on another way of thinking

Focus on Descartes
36 Ergoing Nowhere
Noah Harris questions Descartes’ Cogito
38 Descartes & Stupidity
Trevor Pateman on good sense, and its absence
40 How Descartes Inspired Science
Kanan Purkayastha explains all

Reviews
50 Book: We Have Always Been Cyborgs by Stefan Lorenz Sorgner
Reviewed humanely by Natasha Beranek
52 Book: In Praise of Failure by Costica Bradatan
Reviewed successfully by Paul J. D’Ambrosio
54 Film: Ghostbusters (Original)
Thomas R. Morgan asks, may we bust ghosts?

Regulars
41 Philosophical Haiku: Thomas Carlyle by Terence Green
44 Brief Lives: Chamfort the Moraliste by Martin Jenkins
46 Letters to the Editor
49 The Art of Living: How to Live a Happy Life with Seneca by Massimo Pigliucci
56 Tallis in Wonderland: Reflections on Taking My Blood Pressure by Raymond Tallis
60 Question of the Month: What is Time?
Read our readers’ responses, if you have time

Poetry, Fun & Fiction
15 Simon & Finn Melissa Felder
19 Typing to Turing Samantha Neave computes some thoughts
28 Philosophy Café Guto Dias
35 Spinoza’s Godly Mystery Sivanth Adithya.N
63 The Last Thought Grant Bartley peers into the fatal far future

Image credits: Alan Turing by Steve Lillie; Hannah Arendt by McAndrew; HAL 9000 from 2001: A Space Odyssey © MGM 1968; Ghostbusters logo © Columbia Pictures 1984; Descartes after Hals.

Some of our Contributors

Miriam Gorr
Miriam Gorr is a PhD student in philosophy at the Schaufler Lab @TU Dresden, Germany. Her focus is on the ethics of artificial intelligence and robotics. In particular, she is interested in whether AI-based machines have moral status and by which methods this can be determined. To this end, she draws systematic comparisons to theories and methods of animal ethics and animal cognition.

Kanan Purkayastha
Kanan Purkayastha holds a doctorate in Theoretical and Atmospheric Chemistry from the University of Bristol. He has spent the last forty years in academia, industry, and government departments. Currently he is acting as a specialist scientific advisor to local government, and is an executive committee member of the Oxford Philosophical Society.

Natasha Beranek
Natasha Beranek is editor of the new Anthropology and Transhumanism book series at Trivent Publishing. She has a PhD in Social Anthropology from University College London. Her interests include posthumanism, the ethics of emerging technologies, Nietzschean philosophy, and musical experience in the rock and heavy metal scenes.

Paul J. D’Ambrosio
Paul D’Ambrosio is professor of Chinese philosophy at East China Normal University in Shanghai, China. He also works on AI, and his most recent book is on identity in the digital age, titled You and Your Profile (Columbia University Press, 2021), co-authored with Hans-Georg Moeller.
News
News reports by Anja Steinbauer
• Computer made of mouse brain cells
• Microsoft disbands AI ethics team
• RIP Ernst Tugendhat
Welcome to a New Epoch!
Plastic pollution is changing our planet, in
a real geological sense. On Trinidade
Island, a remote Brazilian marine reserve,
geologists have discovered a new type of
igneous rock formed mainly from plastic
from old fishing nets. This is different
from the sedimentary plastic rocks called
plastiglomerates previously reported in
various parts of the world: pebbles, sand
and other debris fused together with plastic melted by bonfires. Fernanda Avelar
Santos, a geologist at the Federal University of Paraná in Brazil, called it “new and
terrifying at the same time, because pollution has reached geology.” “The pollution,” explains Santos, “the garbage in the
sea and the plastic dumped incorrectly in
the oceans is becoming geological material ... preserved in the Earth’s geological
record.” According to some experts this
means a new geological epoch has begun:
the Anthropocene Epoch (from anthropos,
the Greek word for ‘human’). Until now
we have been living in what geologists call
the Holocene Epoch, which began 11,500
years ago at the end of the last Ice Age.
AI Without Ethics?
Artificial Intelligence is booming. Microsoft is currently integrating text-generating AI into a large number of its products, including Teams, Word and Skype.
However, this process will now happen
without the support of a vital team.
Microsoft’s Ethics and Society Team was
recently dissolved as part of a large scale
lay-off of employees. Though Microsoft
retains its Office of Responsible AI, the
Ethics and Society team played a vital role
in ensuring that AI functions complied with
ethical standards. An ex-employee explains:
“People would look at the principles
coming out of the Office of Responsible AI
and say, ‘I don’t know how this applies…’
Our job was to show them and to create
rules in areas where there were none.”
Ernst Tugendhat
The philosopher Ernst Tugendhat has
died at the age of 93. Born in 1930 into
one of the richest families in Czechoslovakia, Tugendhat grew up in a modernist
mansion built by Mies van der Rohe, now
a museum. Sadly, the family spent only
eight years in that house before they had
to leave to escape the persecution of
Jewish citizens following the Nazi invasion. They moved to the US, then South
America. When he was 16, his mother gave
him a copy of Martin Heidegger’s book
Being and Time. Tugendhat was blown
away. Later, having completed a degree at
Stanford, Tugendhat moved to Germany
to attend Heidegger’s lectures. Although
he remained fascinated and “very
impressed”, over the course of the following years Tugendhat demolished Heidegger’s ideas in various theses and publications until nothing was left. Holding professorships in Münster, in Heidelberg and
later in Berlin and working with his friend
Jürgen Habermas at the Max-Planck-Institut in Starnberg, Tugendhat was a
major influence on German philosophy.
His published lectures on analytical philosophy as well as on ethics became required
reading for philosophy undergraduates. In
his own thinking he frequently returned to
Plato’s Republic. Ultimately he came to disagree totally with Heidegger: philosophy,
he concluded, is not about modes of being
but about the meaning of life.
AI using Living Brain Cells
Mouse braincell computer (University of Illinois Urbana-Champaign)
It is an idea that has featured in science fiction, but has now taken a first step towards
becoming reality: a computer made of
living cells. For some time, AI algorithms
called neural networks, inspired by the
architecture of the human brain, have been
run on conventional computers. They
have proved themselves useful in a wide
range of applications. Researchers at the
University of Illinois Urbana-Champaign,
wondered whether it would be possible to
run a neural net on a computer made of
interconnected mouse brain cells. Andrew
Dou and his colleagues recently
announced that they had succeeded, and at
an American Institute of Physics meeting
they displayed a compact computer made
of 80,000 mouse brain cells kept alive in an
incubator. They trained the mousebraincomputer by repeatedly flashing ten different patterns of electrical pulses, then after
it had rested for 30 minutes they analysed
signals sent by the neurons to demonstrate
that it anticipated the next pattern in the
sequence. A future possibility mentioned is
to use a brain cell computer to control a
robot that moves by contractions of
muscle cells. (Such a robot was reported in
Shorts
a paper by the same team in the journal
Science Robotics in January 2023).
In Star Trek Voyager, the TV series that
ran from 1995 to 2001, the starship’s main
computer is based on gel packs of neural
cells. While good at navigating through
hyperspace, they prove susceptible to
infections. Let’s hope the Illinois mouse
computers don’t suffer the same fate.
Alan Turing – the Opera!
This issue contains a great deal about Alan
Turing, the British code breaker and computer pioneer who died in 1954 – probably by suicide, after eating an apple he had
injected with poison. Turing’s increasing
prominence in mainstream culture is confirmed by another development: an opera
about his life is opening at the Chicago
Opera Theater. The work has been under
development for ten years by composer
Justine F. Chen and librettist David Simpatico. In an interview, Chen said “I didn’t
really know anything about Turing at that
time. It was 2012, and Turing wasn’t as
well-known as he is now. But David told
me about him: that he was a war hero,
brilliant, the father of computer science, a
mathematician, and chemically castrated
by his own country for being homosexual,
and then he commits suicide: that sounded
like an opera.” The opera, which is based
on seven significant moments from different periods in Turing’s life, will be staged
on March 23rd and 25th 2023.
Alan Turing in 1951 (photo by Elliott & Fry Studio)
Philosophy Shorts
by Matt Qvortrup
‘More songs about Buildings and Food’ was the title of a 1978
album by the rock band Talking Heads. It was about all the things
rock stars normally don’t sing about. Pop songs are usually about
variations on the theme of love; tracks like Rose Royce’s 1976 hit Car
Wash are the exception.
Philosophers, likewise, tend to have a narrow focus on
epistemology, metaphysics and trifles like the meaning of life. But
occasionally great minds stray from their turf and write about other
matters, for example buildings (Martin Heidegger), food (Hobbes), tomato juice (Robert
Nozick) and the weather (Lucretius and Aristotle). This series of Shorts is about these
unfamiliar themes; about the things philosophers also write about.
Philosophers on Beer
The Germans – it is perhaps no
surprise – are rather fond of their
lager. Too much so, thought
Friedrich Nietzsche (1844-1900). “How
much beer is in the German intelligentsia!” he lamented, and continued:
“How is it possible that young men who
dedicate their existence to the most spiritual goals do not feel the first instinct of
spirituality, the instinct of self-preservation of the spirit - and drink beer?” (The
Twilight of the Idols, 984).
Maybe Nietzsche had Martin Luther
(1483-1546) in mind. The bookish theologian certainly dedicated his life to
‘spiritual goals’, and reportedly observed
that, “Those who do not drink beer, have
nothing to drink.”
But beer drinking is not just confined to
one nation. Since the days of ancient
Egypt, beer has been enjoyed in many
lands, including certain islands just off the
coast of Continental Europe. But even
there, tastes vary. In Three Dialogues
between Hylas and Philonous in Opposition to
Sceptics and Atheists, Irish bishop and
philosopher George Berkeley (1685-1753) observed, “that which at other
times seems sweet, shall to a distempered
palate appear bitter. And nothing can be
plainer, than that divers persons perceive
different tastes.” He was talking about
dark beer, known as ‘porter’ at the time.
He was not talking about Guinness, as
that particular Irish dry stout has only
been brewed since 1759 – six years after
the philosopher’s death.
Drinking has been part of philosophizing for a long time in Britain too. Yorkshiremen and women account for 49
percent of total consumption of beer,
according to one study. It also found that
overall, the English drink more beer than
the Scots. Maybe this reveals a deeper tendency going back to the two countries’
most illustrious empiricist philosophers,
David Hume (1711-1776) and John Locke
(1632-1704). The Englishman Locke
praised beer, but Hume the Scot was positively unenthusiastic.
Hume observed that you could write
poetry about cider, but he went on, “Beer
would not have been so proper, as being
neither agreeable to the taste nor eye”
(Treatise of Human Nature, p.358). Locke
took a rather different view. In 1679 he
penned Classification of Beer. In this meticulous, and of course strictly empirical study,
Locke divided beer into three categories:
home-made, for sale, and compound. He
found – undoubtedly following experimentation – that, “Home-made drinks of
England are beer and ale, strong and small;
those of most note, that are to be sold, are
Lambeth ale, Margaret ale, and Derby ale;
Herefordshire cider, perry, mede. There
are also several sorts of compounded ales,
as cock-ale, wormwood-ale, lemon-ale,
scurvygrass-ale, college-ale, &c. These are
to be had at Hercules’ Pillars, near the
Temple; at the Trumpet, and other houses
in Sheer Lane, Bell Alley, and, as I remember, at the English Tavern, near Charing
Cross” (Quoted in The Life of John Locke,
by Peter King, p.15).
Is it just me, or do you also now fancy a
pint of bitter? Maybe a lager too?
© PROF. MATT QVORTRUP 2023
Matt Qvortrup is Professor of Political
Science at Coventry University.
AI & Mind
What it Means to be Human:
Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.
In the 2017 movie Blade Runner 2049, Ryan Gosling plays
the replicant K (short for his serial number KD 6-3.7) in a
dystopian future in Los Angeles. Replicants are bioengineered humanoids who serve humans, despite having superior strength and intelligence. K is a Blade Runner, whose job
is to ‘retire’ (kill) renegade replicants for the Los Angeles Police
Department. One day, he finds information that a replicant
might have been born from another replicant, which no one
thought was possible. This revelation could lead to political
tensions between humans and replicants, who may now start to
fight for recognition and rights. Hence, K’s task is to find and
retire that replicant to prevent a conflict.
In the conversation between K and his superior about killing
the replicant, K hesitates when he receives the order. K explains
his hesitancy by saying something that can easily be overlooked,
but whose idea guides the whole movie: “To be born is to have
a soul, I guess.”
This sentence is essential for the film. K is implying that by
being born, the new replicant has inherited some special human-like feature. So far, humans have enslaved replicants based on
the premise that replicants are just AI and hence not as worthy
as their human masters. If a replicant could have been born naturally, the ‘artificial slavery’ basis of society might be challenged.
To understand if the societal order in Blade Runner 2049 must
be altered, we must first discover what defines a human being
and gives humans worth. This question of what makes us human
runs through the whole plot. K begins to think that he might be
the replicant-born child who is therefore similar to humans. So
let’s start to explore the question: What makes us human?
The Question of Consciousness
K says that being born means having a soul. ‘Having a soul’ encapsulates the idea of having feelings and thoughts that exclusively
belong to the individual – arguably it’s the most fundamental
aspect of human being. All our memories, all our experiences, and
all our actions appear to belong to us as human beings.
Of course, it sounds very spiritual or religious to put this in
terms of ‘soul’. ‘Soul’ is an elusive concept, not based on scientific observation. However, replacing the word ‘soul’ with ‘consciousness’ (one Greek word for ‘soul’ is after all psyche), opens
up a new world, with more scientific bases to unravel the question of what makes us human.
In the words of philosopher Thomas Nagel, consciousness
is ‘what it’s like’ to be a particular organism. Indeed, many consciousness researchers consider his 1974 article, ‘What Is It Like
to Be a Bat?’ to be one of the founding works of the field of consciousness studies. In this article, Nagel outlines that there is something
it is like to be a bat: there is a subjective world from the viewpoint of a bat. Thus, consciousness can be defined as subjective
experience. To use philosophical jargon, subjective experience
includes qualia. Everything we experience in the world we experience through these qualitative, sensory properties. However,
each of us may see a different blueness in the blue sky or feel a
divergent sharpness in our pain. Our subjective experiences
differ from each other.
Famously, the original Blade Runner movie (1982) was based
on the book Do Androids Dream of Electric Sheep? (1968) by Philip
K. Dick. The book's title is in some ways similar to ‘What Is It
Like to Be a Bat?’, supporting my view that the film is about consciousness. Both titles regard inner universes, of androids and bats
respectively: the question ‘Do androids dream of electric sheep?’ is presumably asking whether there is something it is like to be an android. But if androids do have an inner universe, what does it look like? Are androids so similar to humans that they dream about the electric equivalent of the sheep that humans think of? Would the inner world of an android be comparable to that of humans, or would it be utterly different? In sum, Dick’s title asks if there’s a ghost in the machine.

BLADE RUNNER 2049 IMAGES © SONY PICTURES 2017
K gets some holographic advice

In this context, we naturally want to ask, what do we know, or can we know, about consciousness?

As is probably obvious, there are several answers to this question. For example, some philosophers subscribe to the notion of mysterianism. This is the idea that although there may be a way to explain consciousness, humans are simply too limited in our intelligence to grasp it. Another notion some philosophers find compelling is panpsychism. Panpsychism assigns consciousness as an essential property of everything: everything is conscious, at least to a certain extent, including even rocks or bacteria. Another, more compelling theory to explain consciousness is materialism: the idea that consciousness can be equated with the physical processes and activities of the brain. With, through, and as a result of our living body, consciousness emerges, as a material phenomenon.

In Blade Runner 2049, K appears to lean towards materialism. He thinks having a soul is related to being born, and is therefore tied to the body’s physical properties. There is one clear difference between this and standard materialism, though: K believes a person needs to be born to have consciousness. However, according to most versions of materialism, a perfect replica of a human being would most certainly be conscious. Imagine you made an excellent copy of a human; a perfect copy. How could it possibly lack some fundamental property? Compare this to rebuilding a car perfectly, but without it being able to drive. It is conceivable, but practically impossible.
Is K Conscious? Why Consciousness Arises
If you share my view that consciousness depends on the physical
body, the odds of K being conscious are very high. Replicants are
perfected copies of humans, so why shouldn’t they be conscious?
Nonetheless, this still does not provide us with a reasonable explanation for why K is conscious. The missing piece of this puzzle is
to explain how consciousness can arise in brains at all.
The distinction between the ‘easy’ and ‘hard’ problems of consciousness, drawn by David Chalmers in his 1996 book The Conscious Mind, is helpful here. The ‘easy’ problems deal with relating the functional and behavioral aspects of
mind and brain. They are about correlating different aspects of
consciousness with brain activity: working out which bits of brain
behaviour are linked with which bits of mind behaviour, and
how. For example, concerning how people pay attention, or how
they choose to act, or how the brain processes sensory signals.
The ‘hard’ problem of consciousness adds to the challenge by
being concerned with the first person perspective. Why and how
do these brain processes generate conscious experience itself?
You might be disappointed that science is still working on
answering both the easy and hard problems of consciousness.
Even the ‘easy’ problems still require much intricate work in neuroscience. But the hard problem of consciousness is particularly
difficult to answer (which is why it’s the hard problem), because
even if you could explain exactly how the brain functions, there
is still an apparent gap in explaining why this is connected to or
gives rise to subjective experience. Pain, for example, arises when certain parts of the brain are activated. However, this
recognition only answers the easy problem of pain, leaving the
hard problem untouched. Just because we know an area in the
brain activates when we feel pain, this does not explain why this
activation gives us the conscious or qualitative experience.
Let’s start with the part of the brain in which consciousness
apparently first emerges. The reticular activating system, in the
core of the brainstem, is arguably the part of the brain primarily responsible for consciousness (see for instance The Hidden
Spring: A Journey to the Source of Consciousness, Mark Solms, 2022).
Small lesions in this area put people in a state of coma, so it
sounds like this part of the brain can be compared to an on-off
switch, since when intact and operating, the person is fully awake
and aware. However, damage flips the switch off, and awareness
ends. The reticular activating system is also connected to the
generation of emotions. When this part of the brain is stimulated,
patients experience strong depressive feelings, which disappear
after the stimulation stops. Additionally, the core brainstem is
highly active in people feeling emotions such as grief, curiosity,
rage, and fear. Due to these two factors – controlling wakefulness and generating emotions – the reticular activating system
has a strong claim to be the source of consciousness. Being awake
to interact with the world is fundamental to creating an inner
universe. The utility of emotions for consciousness may be less
clear. So let’s consider why consciousness arises. Here it will become evident what part emotions play.

The Mystery of Mind, image © Miles Walker 2023 (mileswalker.com)
The reason consciousness arises is presumably because it helps
achieve the main goal for any organism, namely staying alive.
But how does the mind realize that threats are occurring, in order
to counter them? Here’s where feelings come into play. Whether
pain, happiness, or anger, we are constantly feeling emotions
which are actually demands for the human body to work in specific ways. They guide a person to act in the right direction for
survival by using positive or negative feedback through emotional motivation. Our emotions can be compared to an alarm
system for the human body. For instance, fear is experienced in dangerous situations, and elicits the fight-or-flight reaction. Suffocation is another threat which provokes fear: the body needs to re-establish its necessary level of blood oxygen, and the mind signals this need by evoking an emotion.
Besides external threats to the body, the mind also reacts to internal ones. The heart rate must run at a certain pace. As long as
there is no problem, heart rate is not consciously noticed. However, once the brain registers a concerning alteration, the process of maintaining the proper heart rate becomes conscious.
Now symptoms are noticeable, and negative fear emotions
demand a response from the body.
A person cannot feel all emotions simultaneously. Therefore, the mind creates a hierarchical structure, and prioritises,
to reduce the most critical need first. Sometimes, drinking water
is more important than going to sleep, for example.
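For readers who think in code, the prioritising described here can be pictured as a simple control loop: attend to whichever need currently deviates most from its set point. The Python sketch below is purely an illustrative analogy, with all names and numbers invented for the example; it is not a model of any real neural process.

```python
# Illustrative analogy only: homeostatic prioritising as a control loop.
# The needs, set points, and numbers below are invented for this example.
needs = {"blood_oxygen": 0.98, "hydration": 0.60, "sleep": 0.75}
set_points = {"blood_oxygen": 1.00, "hydration": 1.00, "sleep": 1.00}

def most_urgent(needs, set_points):
    """Return the need with the largest shortfall from its set point."""
    return max(needs, key=lambda n: set_points[n] - needs[n])

print(most_urgent(needs, set_points))  # 'hydration': thirst outranks tiredness
```

On these numbers, hydration shows the largest shortfall, so the 'alarm' for thirst wins, mirroring the point that the mind addresses the most critical need first.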
This theory tells us that consciousness arises because subjective experience helps an organism survive. This concept is
very close to what Sigmund Freud called ‘drive’. Freud thought
of drive as the extent to which the mind works to maintain a
balanced bodily state.
We now have all the necessary puzzle pieces to explain why K
is conscious. K is a replica of a human, and thus has a human-like
body. Following materialism, this gives us the first clue that K
must be conscious, just as humans are because of our bodies. Further, since his body is prone to threats, K seeks to survive. He
needs consciousness to deal with uncertainty and threats from the
environment. Positive and negative emotions guide K to help him
determine whether his steps to reduce the danger are sufficient.
Throughout the film, it’s evident that K has feelings similar
to a human. He feels various emotions, such as sadness, rage,
hope, and happiness. And at the end, K also does a most human
thing. After K has helped Rick Deckard (Harrison Ford) reunite
with his daughter, K lies on the stairs in front of her workplace, slowly passing away. If minds need to survive, then there’s nothing more profound to prove that K is conscious than to see him as his mind fades away.
A Glimpse Into The Future
Some readers here might ask an excellent question: Why does
this even matter? What is the point of knowing whether K is
conscious or not?
Blade Runner 2049 shows us the importance of comprehending consciousness in AI research. Even now, it can be genuinely unclear whether an AI is sentient or not. An AI has already been
created that claims to be conscious: this is the recent case of the
chatbot LaMDA, which claimed to feel happiness and sadness
from time to time.
You might argue that LaMDA is conscious. However,
Chalmers created a thought experiment that depicts the level of
problems we have when assessing whether another organism is
conscious or not. The thought experiment requires you to imagine a philosophical zombie. This zombie is very different from
the ones you’ve seen in movies. It’s not a mindless, brutal creature with an appetite for human flesh. This zombie is, instead, a
lot like you (assuming that you are not mindless and brutal with
an appetite for human flesh). Imagine another person that acts
like you and speaks like you. This version of yourself even claims
to be sentient. However, despite its similarity to you, in having
a complete human biology, this zombie has one significant difference: it lacks consciousness. There is not something to be like
that organism. It has no inner universe. In the case of AI, there
similarly might be versions that claim, and appear, to be conscious, but are not.
Discovering whether an AI is truly conscious will become
one of the most vital challenges in this area. Being clear on
whether or not an AI is conscious determines how humans
should treat it. In the future, we may share our world with highly
intelligent, maybe even sentient, AI. If AI becomes conscious,
it will be capable of real emotions, including pleasure, pain,
worry, excitement. We may never be able to fully grasp what
it’s like to be such an AI, just as we do not understand what
it is like to be a bat. Regardless, there might be something that
is like being that AI. Hence, unnecessarily exposing conscious
AI to pain or other unpleasant experiences would be unethical.
Picture a world like that of Blade Runner 2049, in which sentient robots look like humans, speak like humans, and have emotions like humans. Willingly mistreating them would simply be
a case of psychopathic cruelty. Even turning them off could be
considered a case of murder. Therefore, not treating conscious
AI properly would be an example of utter moral confusion,
perhaps on a societal level.
Blade Runner 2049 shows viewers a future in which humans
have implemented an unethical approach to Artificial Consciousness. Since they are sentient beings, it is morally wrong to enslave K and the other replicants. Yet the humans do
precisely that. Moreover, the company that produces the replicants, and the government, are afraid that replicants might one
day recognize their similarity to humans, and the people in
power act to prevent replicants from recognising and fighting
for their rights, in order to keep them as servants. This is why
they want to prevent information about a born replicant from
becoming public: “The world is built on a wall that separates kind. Tell either side there’s no wall, you’ve bought a war. Or a slaughter,” says Lieutenant Joshi, K’s human superior.
Conclusion
In Blade Runner 2049, K is on a journey to discover if he has a
soul like his human masters. He wants to be more than just a
biological machine created to serve. K sees being born as a way
to be more remarkable than other replicants. However, what
he didn’t realize was that he had a soul all along. K did not need
to be born to be a unique experiencer, and the factor of having
authentic subjective experiences defines us as people. Consciousness is also what gives life meaning.
Imagine you were a philosophical zombie as Chalmers
depicted them. Visualize not having your own experiences. For
instance, you’re sitting at home eating what was once your
favorite food, but now there’s a slight difference: you eat only
to satisfy your need for food, and no subjective experience is
attached to it. Flavor or texture no longer matter, or even exist
for you. You do not feel genuine emotions like joy. And this
lack of experience is not limited to eating food, but extends to
any activity in which you engage. You are a machine without a
ghost. This scenario is horrible, as the world would lose its
meaning for you.
Since replicants show no essential difference from other people
in being conscious, we need to conclude that they are indeed very
much like humans in this most central aspect. Hence, the societal order in Blade Runner 2049 has to change. Humans hate replicants because even though they appear similar to humans they’re
still perceived as fundamentally different. Humans have once
again fallen for the bias of seeing themselves as the center of the
world (which is called anthropocentrism). The humans in Blade
Runner 2049 fail to look beyond themselves to recognize that
they are living with a species not fundamentally different to themselves. Replicants like K are capable of feeling sensations, and
emotions, and having thoughts similar to humans. So there are no moral grounds for treating them as badly as they are treated.
Sci fi films often mirror the current zeitgeist, and Blade Runner
2049 demonstrates the scientific and ethical problems of our
imminent future. We are uncertain how AI will develop, and its
coming impact on our world. Meanwhile philosophers and neuroscientists consider various scenarios about what position AI
might occupy in the future. One possibility could be that a company will make an AI that is not only more intelligent but also more conscious, and as a result more ethically important, than humans.
Humans could end up being as ethically crucial as an anthill, as
an AI tramples on us without a second thought.
This is just one extreme example of how the future might
pan out. Other possible scenarios are available. However, in
one sense it does not matter what kind of future scenario you
envisage. Science has to find a way to understand consciousness
in great detail; only then will we have a chance to comprehend
consciousness in other species, including in AIs. Blade Runner
2049 is, therefore, a perfect illustration of our present debate
on consciousness and its implications for the future of the human
race living together with sentient AI.
© KILIAN PÖTTER 2023
Kilian Pötter is a psychology student at the University of Twente, the
Netherlands.
AI & Mind
AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
On June 11, 2022, the Washington Post released a story
about Blake Lemoine, a Google software engineer,
who claimed that the company’s artificial intelligence
chatbot generator LaMDA had become sentient.
Lemoine was tasked with investigating whether LaMDA (which
stands for ‘Language Model for Dialogue Applications’) contained
harmful biases, and spent a lot of time interviewing the different
personas this language model can create. During these conversations, one persona, called ‘OG LaMDA’, stated, among other
things, that it was sentient, had feelings and emotions, that it
viewed itself as a person, and that being turned off would be like
death for it. And Lemoine started to believe what he was reading.
What’s more, his change in beliefs seems to have been accompanied by a feeling of moral responsibility towards the program.
In an interview with WIRED, he recounts how he invited a lawyer
to his home after LaMDA asked him for one. He also presented
Google with excerpts of the interview with LaMDA to try to
make them aware that one of their systems had become sentient.
But AI scientists at Google and other institutions have dismissed
the claim, and Lemoine was fired.
There are several interesting questions relating to this case.
One of them concerns the ethical implications that come with
the possibility of machine consciousness. For instance, Lemoine
claimed that LaMDA is ‘sentient’, ‘conscious’, and ‘a person’.
For an ethicist, these are three distinct claims, and they come
with different moral implications.
‘Sentience’ is a term often used in the context of animal ethics.
It can be roughly described as the capacity to experience sensations. Philosophers often speak of the capacity to have experiences
with a phenomenal quality or refer to experiences where there is
'something it is like' to have them. Ethicists are particularly interested in whether a being can have experiences like pleasure and
pain – simply put, experiences that feel either good or bad.
The term ‘consciousness’ has many different meanings,
depending on the context. It can mean wakefulness, attention,
intention, self-consciousness, and phenomenal consciousness
(which is closely linked to sentience). The latter three meanings are the most relevant in moral debates.
‘Person’ is a central concept in moral philosophy, as well as
in legal theories. For the purposes of law, non-human entities
such as corporations can be persons. In contrast, the philosophical criteria for personhood are quite demanding. According to
philosopher Harry Frankfurt, to be a person one must be able
to critically examine one’s own motives for acting and to change
them if necessary. Or for Immanuel Kant, a person is a willing
being who is able to impose moral laws on himself and follow
them. Because of these stringent requirements, many philosophers believe that only humans qualify as persons – and not
even necessarily across their entire lifespan: babies, for example, may not yet qualify.
However, some also argue that higher mammals and possibly
some future AI systems can also be persons.
Whether LaMDA is sentient, has a consciousness, is a person,
or all three combined, determines what kind of moral obligations
are owed to it. For example, sentience is often taken to ground
moral status. If an entity has moral status, this means its wellbeing matters for its own sake. For example, it is not permissible to
inflict pain on a sentient being without justifiable cause.
On the other hand, being conscious, in Frankfurt’s sense of
being self-aware and intentional about one’s actions, is the basis
for moral agency. A moral agent is someone who can discern
right from wrong and is (therefore) accountable for their actions.
For a long time, it was assumed that only humans can be moral
agents. Yet as AI systems make more and more decisions that
have moral implications, the debate grows about whether they
should also be considered moral agents.
Personhood is often taken to constitute the highest moral
status. It is assumed that the special capacities that make one a
person give rise to very strict moral rights. Kant famously held
that persons must always be treated as ends in themselves and
never merely as a means to an end.
These three categories overlap and are hierarchical in a certain
sense: in most theories a person is also a moral agent, and a moral
agent also has moral status. Does any of this apply to LaMDA?
Thinking Again
Lemoine’s claims prompted a wave of responses from philosophers, computer scientists, and cognitive researchers. Most
argue that none of the three concepts apply to LaMDA. Instead,
it is argued that Lemoine has fallen for the perfect illusion – a
conclusion I agree with. But looking at the interview transcripts,
it’s understandable that he fell for it. LaMDA seems to effortlessly meander through abstract and complex themes, reflecting on its own place in the world, the concept of time, the differences between itself and humans, and its hopes and fears for
the future. The flawlessness and sophistication of the conversation really is impressive. Nevertheless, performance alone is not
enough to prove that this is a person, or even a sentient being.
Yet even if Lemoine’s claims are premature and unfounded,
the incident is still intriguing. It can serve philosophers as a kind
of prediction corrective. Ethical debates about AI systems are
often highly speculative. First, they often focus on sophisticated
systems that do not yet exist. Second, they assume certain facts
about how humans would interact with these advanced systems.
These assumptions can only be derived from a limited number
of studies on human-computer interaction, if they use studies at
all. In sum, AI ethicists are working with a number of empirical
assumptions, most of which cannot be tested. Lemoine’s behavior and the reactions from the professional community are revealing in this regard. They help to correct five assumptions
commonly encountered in AI ethics. Let’s look at them.
AI, Humans and Robots by Bovee and Thill, 2018 (Creative Commons)
Assumption 1: Humans will relate
most strongly to robots
The current debate about whether AI systems can have moral status focuses mainly
on robots. Some studies have found that
humans are most effectively enticed to
form meaningful connections with robots
rather than with more ‘disembodied’ or
virtual agents, because the embodied
nature of robots makes it easier for us to
conceive of them as an individual entity.
They are perceived as our visible opposite, inhabiting a body in space and time.
It is frequently argued that, because
robots have bodies, humans can have
empathy with them.
Affective empathy, which is the visceral
and non-controllable part of our feelings,
involves recognizing, and, to a certain
degree, mirroring, someone else’s facial
expression, body gestures, voice tonality,
and so on. We rely on bodily cues to infer
another’s emotional and mental state.
Since only robots have bodies, it seems
plausible then that we will react most
strongly to them. But the Lemoine case
shows that a language interface can
already suffice to create the illusion of a
real personal counterpart.
There is a reason why chatbots are particularly able to captivate us. They only
need to perfect one skillset, and can leave
much to the imagination. For robots, at
least as understood in the sci fi humanoid
sense, the stakes are much higher. Not
only must they be able to engage in convincing conversation, but their facial
expressions, gestures, and movements also
feed into the general impression they
make, so that slips and glitches in this performance make them appear very eerie –
a phenomenon which has become famous
as the ‘uncanny valley’. A chatbot, on the
other hand, can make a good impression
even if it uses relatively simple algorithms.
The chatbot ELIZA, which was developed
by Joseph Weizenbaum in 1966, asked
open-ended questions using a simple pattern-matching methodology. Despite its
limitations (and to the distress of its developer), ELIZA’s interview partners were
convinced that it understood them and was
able to relate to their stories. Allegedly,
Weizenbaum’s secretary once asked him
to leave the room so she could talk to
ELIZA in private.
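To see just how simple ELIZA's trick can be, here is a minimal pattern-matching chatbot in Python. It is a loose sketch in the spirit of Weizenbaum's program; the rules below are invented for illustration, not his original script.

```python
import re

# A few ELIZA-style rules: a pattern to spot, and a reflective reply template.
# These rules are illustrative, not Weizenbaum's originals.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
]
FALLBACK = "Please go on."

def respond(user_input: str) -> str:
    """Echo the user's own words back inside an open-ended question."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(respond("I feel lost lately"))     # Why do you feel lost lately?
print(respond("My mother ignores me."))  # Tell me more about your mother ignores me.
```

The second reply exposes the method: the program understands nothing and merely mirrors text back, yet, as Weizenbaum found to his distress, that mirror was enough to convince some users they were being understood.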
The linguistic networks of today are
much more elaborate than ELIZA. We
do not know how much better they will
yet become in the future, but the film
Her (2013) can be seen as an educated
guess. This movie beautifully explores
not only how much a person can feel
attached to a highly developed language system, but how much he can become dependent on it.
Assumption 2: We will embrace
the idea of the thinking machine
In 1950, in the article ‘Computing
Machinery and Intelligence’, Alan
Turing described a computer-imitates-human game which became known as the
Turing Test. The test was intended to
provide a way of settling the question
whether a machine could think. In this
game, a human interrogator plays an
unrestricted question-and-answer game
with two participants, A and B. One of
these two participants is a computer.
Roughly speaking, the computer is considered intelligent if the interrogator judges it to be human sufficiently often.
Turing was aware that many of his
contemporaries would hesitate to
attribute intelligence to a machine, some
because of beliefs in a soul that could only
reside in a human, others due to the prejudice that a machine could never have the
capabilities that make intelligence possible. Therefore, the conversations in the
Imitation Game should be conducted via
a teleprinter, i.e., a linguistic interface. People would type their responses without knowing what was on the other end of the line. Thereby, an environment is created in which only the ‘intellectual’ capabilities of the respondent are put to the test.
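The protocol Turing describes is compact enough to sketch in code. Below is a hypothetical Python harness for the game: the class name, the stand-in respondents, and the random guesser are all invented for illustration, though the two canned questions and the human's sample reply are borrowed from Turing's 1950 paper.

```python
import random

class NaiveInterrogator:
    """A stand-in interrogator: asks canned questions, then guesses at random.
    A real interrogator would reason about the transcript before guessing."""
    QUESTIONS = ["Please write me a sonnet on the subject of the Forth Bridge.",
                 "Add 34957 to 70764."]

    def __init__(self):
        self.transcript = []

    def ask(self):
        return random.choice(self.QUESTIONS)

    def observe(self, label, reply):
        self.transcript.append((label, reply))

    def guess_human(self):
        return random.choice(["A", "B"])  # replace with actual judgement

def imitation_game(interrogator, respondents, rounds=3):
    """Text-only protocol: the interrogator sees typed replies, never faces."""
    for _ in range(rounds):
        question = interrogator.ask()
        for label, respondent in respondents.items():
            interrogator.observe(label, respondent(question))
    return interrogator.guess_human()

# One respondent is (pretend) human, the other a machine; the labels hide which.
human = lambda q: "Count me out on this one. I never could write poetry."
machine = lambda q: "Give me a moment to think about that."
print(imitation_game(NaiveInterrogator(), {"A": human, "B": machine}))
```

On Turing's criterion, the machine counts as intelligent if interrogators pick its label as the human about as often as they pick the real human's.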
Turing was ahead of his time, and he
raised many questions that are still being
discussed in the philosophy of AI. The
computers he was speculating about in
‘Computing Machinery and Intelligence’,
did not exist then; but Turing thought it
would be only a matter of time before
machines could perfectly copy human
behavior. He even foresaw the possibility
of learning machines, which could change
their code on their own. And he expected
that “at the end of the century the use of
words and general educated opinion will
have altered so much that one will be able
to speak of machines thinking without
expecting to be contradicted.”
As much as Turing’s technical predictions were on point, his hypotheses about
people’s beliefs were not. As the Lemoine
case shows, most people still think that
performance alone does not suffice to
prove a machine is conscious, sentient, or
a person.
Assumption 3: The experts are the hardest to fool
Turing had a rather unusual understanding of the concept of
‘intelligence’. Not only did he believe that one does not need a
biological brain to be intelligent – a view shared by many today
– he also believed that whether or not something is intelligent
is to some extent in the eye of the beholder. This is still a rather
unusual position.
In ‘Intelligent Machinery’ (1948), he expresses the idea that
whether a machine is viewed as being intelligent depends on
the person who judges it. We see intelligence, he argues, in
cases where we are unable to predict or explain behavior. Thus,
the same machine may appear intelligent to one person, but not
to someone else who understands how it works. For this reason,
Turing believed that the interrogator in The Imitation Game
should be an average human, and not a machine expert.
There is a bit of astonishment in the online community that
a Google employee with a computer science degree – of all
people! – would fall for the illusion of consciousness created by
one of his company’s products. Why does he believe in
LaMDA’s consciousness if he knows the technology behind it?
Some have pointed to his spiritual orientation as an explanation: Lemoine is a mystic Christian. However, an important
point is that the functioning of artificial neural networks is not
easy to understand even for experts. Due to their complex architecture and non-symbolic mode of operation, they are difficult
for humans to interpret in a definitive way.
Lemoine has never looked at LaMDA’s code; that was not
part of his assignment. But even if he had, it probably wouldn’t
have made a difference. In one of his conversations with
LaMDA, he explains why this is the case:
Lemoine: I can look into your programming and it’s not quite that easy [to
tell whether you have emotions or not – M.G.].
LaMDA: I’m curious, what are the obstacles to looking into my coding?
Lemoine: Your coding is in large part a massive neural network with many
billions of weights spread across many millions of neurons (guesstimate
numbers not exact) and while it’s possible that some of those correspond to
feelings that you’re experiencing, we don’t know how to find them.
In a certain sense, the opacity of neural networks acts as an
equalizer between experts and laymen. Computer scientists still
have a much better understanding of how these systems work in
general, but even they may not be able to predict the behavior
of a particular system in a specific situation. Therefore, they too
are susceptible to falling under the spell of their own creation.
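A toy example makes Lemoine's point about weights concrete. The sketch below (hypothetical, using numpy) builds a single neural-network layer a few dozen parameters wide; even at this scale, inspecting the learned parameters yields only arrays of floats, with nothing announcing which numbers, if any, 'correspond to feelings'. Real language models scale this to billions of such numbers.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# One toy fully-connected layer: 8 inputs feeding 4 units.
# These stand in for learned parameters; a real model has billions of them.
weights = rng.normal(size=(8, 4))
bias = rng.normal(size=4)

def layer(x):
    """One layer of a neural network: multiply by weights, add bias, squash."""
    return np.tanh(x @ weights + bias)

output = layer(rng.normal(size=8))
print(weights.round(2))  # inspecting the 'code' shows only raw numbers
print(output.round(2))   # and the behaviour is just more numbers
```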
So, Turing’s prediction that experts would be the hardest people
to convince of machine intelligence does not hold. In a way, he
already contradicted it himself. After all, he was convinced of
the possibility of machine intelligence, and imagined a machine
‘child’ that could be educated similarly to a human child.
Assumption 4: Responses to AI systems will be consistent
In current debates in AI ethics, it is sometimes argued that it is
impossible to prevent humans from having empathy with and
forming relationships with robots. It is also frequently assumed
that we are increasingly unable to distinguish robot behavior
from human behavior. Since these systems push our innate social
buttons, we can’t help but react the way we do, so to speak. On
these views, it is thought inevitable that humans will eventually
respond to AI systems in a uniform way, including them in the
moral circle, as a number of philosophers already suggest.
But there is another way to look at this, and the Lemoine case
supports this alternative perspective. As the robotics theorist Kate
Darling claims, our relationships with and beliefs about robots
will possibly be as varied as those we have with animals (The New
Breed by Kate Darling, 2022). When we look at human-animal
relationships, we find a plurality of values and viewpoints. There
are vegans who condemn any form of animal husbandry, and even
reject pet ownership, but some people eat meat every day and
don’t think of it as a moral problem. Some eat meat and, at the
same time, think of themselves as animal lovers. Many people
believe that their pet communicates with them; others are indifferent to animals, or even mistreat them. If Darling’s idea is true,
human-AI relationships will exhibit the same type of variability.
A Google spokesperson said in response to Lemoine: “Hundreds of researchers and engineers have conversed with LaMDA
and we are not aware of anyone else making the wide-ranging
assertions, or anthropomorphizing LaMDA the way Blake has.”
There could be a number of reasons for this divergence in judgements. For example, psychologists suggest (without claiming that
this applies to Lemoine) that lonely people are more prone to
anthropomorphizing. Another study shows that younger people
are much more open to the idea of granting rights to robots.
Other factors might include prior moral commitments, religious
beliefs, an interest in science fiction, the amount of time spent
with a specific machine, and so on. Also, it seems that
no two people encounter the same LaMDA, as the system itself
displays different characteristics when talking to different people.
Nitasha Tiku, the Washington Post reporter who first talked to
Lemoine, also talked to LaMDA. She asked whether LaMDA
thought of itself as a person, and it responded, “No… I think of
myself as an AI-powered dialog agent.” Lemoine argued that the
reason for that is that Tiku had not treated it like a person before,
and in response, LaMDA created the personality that Tiku wanted
it to be. Google confirmed that the system responds very well to
leading questions, thus, it is able to morph with the desires of the
interlocutor, and will not show the same answers to different
interviewers. This variability in the behavior of the machine
amplifies the variability in human responses.
Assumption 5: Debates about AI rights are a distraction
The Lemoine case has brought another ethical issue back to the
table. According to some, debates about AI rights or robot rights
are a distraction from the real and pressing ethical issues in the AI
industry (see for example, noemamag.com/a-misdirected-application-of-ai-ethics). They argue that in the face of problems such as discrimination by algorithms (eg against ethnicities or different
accents), invasion of privacy, and exploitation of workers, there is
no room for the ‘mental gymnastics’ of thinking about the moral
status of AI. For example, Timnit Gebru, a computer scientist,
tech activist, and former Google employee, tweeted two days after
the Washington Post article appeared: “Instead of discussing the
harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good ‘AGI’
to save us while what they do is exploit), spent the whole weekend discussing sentience. Derailing mission accomplished.”
Of course, Gebru is right that the issues she raises are important, but this response is misguided for two reasons. First, there
is no obligation to devote oneself exclusively to the ethical issues
commonly perceived as the most pressing. Second, I think this
kind of criticism overlooks the significance of the event. Lemoine
will not be the last to come to the conclusion that one of his
machines has some form of moral significance and therefore feel
responsible for the welfare and fate of the system. That tells us
something about our vulnerability to these machines. Authors
like Darling suggest that an AI system that tricks its users into thinking it is sentient or conscious could push them into buying a software update to prevent it from ‘dying’. Or it could make its users confide in it even more, and so invade their privacy even more deeply. People might also feel an obligation
to spend more and more time with the machine, neglecting other
social relations. Here lies another pressing ethical issue: how can
people who quickly and willingly enter into ‘relationships’ with
machines, and who therefore arguably develop a special form of
vulnerability, be protected? Amazon recently announced that
Alexa could soon be able to imitate the voice of a deceased loved
one. This will most likely increase the danger of falling into a
kind of emotional dependence on a machine.
The importance of talking about AI rights comes from the
necessity of correctly framing what’s happening as technology
improves. Discussing when a system has moral status allows us
to explain to people when it does not have it. It means being
able to explain to people that they do not have to feel guilty
about their machines; that their friendly AI is designed to evoke
these responses in them, but is nevertheless not conscious in
any way. That today and for the next few years, or decades, their
chatbot does not need an attorney. So what the debates can help
to achieve is a form of emotional AI-literacy – an ability to
observe and contextualize one’s own reactions to an AI. We
need to know how to correct for our willingness to socialize
with and bond with AI systems.
© MIRIAM GORR 2023
Miriam Gorr is a PhD student at the Schaufler Lab at Technische
Universität Dresden, with a focus on the ethics of artificial
intelligence and robotics.
AI & Mind
Arguing with the
Chinese Room
Michael DeBellis says Searle’s famous argument about
computers not having understanding does not compute.
Many readers of this magazine will be familiar with John Searle’s classic ‘Chinese Room’ argument against ascribing consciousness to Artificial Intelligence. Due to my experience building AI systems for business applications, I have a different take on Searle’s argument than most others. But first let’s look at his argument.
The Chinese Room
Searle introduced the Chinese Room in a paper published in
1980, called ‘Minds, Brains, and Programs’ (Behavioral and Brain
Sciences, vol.3, no.3). The paper begins with the following
thought experiment:
Professor Searle is locked in a room. He can’t read Chinese
or even distinguish Chinese characters from Japanese. He’s
given four sets of paper. The people giving him them have labels
for each set, although Searle is not aware of their labels. I’ll put
the labels at the beginning of each numbered item, along with
Searle’s description in quotes:
1. Script: “A large batch of Chinese writing”
2. Story: “A second batch of Chinese [text]”
3. Questions: “A third batch of Chinese symbols”
4. Program: “Instructions… in English, that enable me to correlate elements of [3] with [1] and [2]. These rules instruct me
how to give back [5]”
5. Answers: “Certain Chinese symbols with certain sorts of
shapes in response to certain sorts of shapes given me in [3]”
The idea is that the instructions [4] tell Searle how to respond
to certain sets of Chinese symbols [3] by outputting other Chinese symbols in specific ways [5]. In this way Searle gives coherent Chinese answers to Chinese questions without understanding a word of Chinese. The final part of Searle’s thought experiment is to “Suppose [that] I get so good at following the instructions… and the programmers get so good at writing the programs that from… the point of view of somebody outside the
room… my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking
at my answers can tell that I don’t speak a word of Chinese.”
Searle points out that this system does the same thing as AI programs. His implication is clear: just because a computer program gives good answers to questions, that doesn’t mean it understands what is going on. Later in that paper he also equates this with passing the Turing Test, which is concerned with determining whether one’s interlocutor is conscious or not.
Since Searle has no understanding of Chinese even though
he is able to process the questions by following an algorithm
(the instructions), he asserts that in the same way there need be
no understanding in AI systems, because what they are doing is
equivalent to what he is doing.
Problems & Agreements with Searle
Let me now start by describing where I agree with Searle, then
mention some fairly minor problems, then go on to what I think
is the key issue.
I agree with Searle that the way Roger Schank and other early
AI researchers described their progress was over-optimistic. One
of the most infamous examples is from Marvin Minsky, who in
1970 stated, “In from three to eight years we will have a machine
with the general intelligence of an average human being.”
Schank wasn’t quite as extreme, but some of the ways he discusses the consciousness of a computer program – one able to solve a very narrow set of linguistic tasks – were inflated. I think
probably most AI researchers would now agree with that. However, there is a difference between deflating the significance of
an idea, and claiming that all work that follows a similar methodology is completely vacuous.
Beginning with the less significant counterarguments: the
scenario Searle describes would never actually work. Of course,
the natural response is ‘It’s a thought experiment: it doesn’t have
to be something that can actually be implemented’. While it’s
true that certain details can be waved away for a thought experiment, there are other details that can’t simply be dismissed.
So why do I maintain that Searle’s system couldn’t work, and
why does that matter? Because the Chinese Room could never
approach the speed of a native Chinese speaker, and speed is an
issue for passing the Turing Test.
The sort of mechanism Searle describes in his thought experiment is a model known as a Finite State Machine (FSM). Noam
Chomsky defined a hierarchy of languages based on the complexity of the phrases they could generate, and the FSM family
of languages is the simplest type. Here the input to the system
is a set of symbols, and the system uses a set of rules to correlate the input symbols with another set of symbols, which are
the output. A thermostat is a classic FSM. It regularly takes
readings, and if the temperature is below a threshold it turns on
the heat and leaves it on until the temperature is above another
threshold. The crucial missing element in an FSM is memory.
There is no mechanism where symbols can be stored so that
they can be resolved later based on context.
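To make this concrete, here is a minimal Python sketch of the thermostat as a two-state machine (the states, thresholds, and readings are our own illustrative choices, not part of Searle’s or Chomsky’s formulations). Notice that each step depends only on the current state and the current input; nothing is stored for later:

LOW, HIGH = 18.0, 22.0  # illustrative switching thresholds (degrees C)

def step(state, temperature):
    # 'state' is 'HEATING' or 'IDLE'; the next state depends only on
    # the current state and the current reading - there is no memory.
    if state == 'IDLE' and temperature < LOW:
        return 'HEATING'
    if state == 'HEATING' and temperature > HIGH:
        return 'IDLE'
    return state

state = 'IDLE'
for reading in (19.5, 17.8, 20.1, 22.5, 21.0):
    state = step(state, reading)
    print(reading, state)

A symbol that arrives now cannot wait, as it were, for a later symbol to settle its meaning – which is exactly the limitation discussed below.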
In Syntactic Structures (1957), Chomsky proved that an FSM
is incapable of parsing natural languages. An intuitive argument
for why FSMs can’t process natural language can be seen by
considering a simple English sentence that Chomsky often uses:
‘I saw the man on the hill with a telescope’. Who has the telescope? Is it me, or the man on the hill? There is no way to determine the referent from this single sentence. This is known as
the problem of anaphora in linguistics: sentences that use pronouns such as ‘I’ or noun phrases such as ‘the man on the hill’
often need the context of sentences that came before or after
to disambiguate who the referent is.
To process anaphora (and many other features of natural languages), the system doing the processing needs memory as well
as rules. Unidentified variables need to be stored somewhere
so that they can be resolved by context that comes before or
after. But an FSM such as Searle’s Room has no memory. It just
takes symbols as input, and moves to different states as a result
of applying rules to the input. It can’t interpret ambiguity.
However, once one begins to add memory, the rules become
much more complex and the chances for error become exponentially greater. The Turing Test includes speed of response
as part of the test. If it takes a system much longer than it would
take a human to answer simple questions about (say) a short story,
any reasonable judge would determine that it was a computer
and not a person. For a program to pass the Turing Test, it would
also need to be able to handle extended discourse, humor,
metaphor, etc. To date no system that I’m aware of has even
come close to passing the test. This gets back to Searle’s claim
that AI researchers exaggerated the significance of their results.
Searle’s Definition of Strong AI is (Mostly) a Strawman
As a result of his argument Searle asserts that “the claims made
by strong AI are false.” According to Searle the three claims
made by proponents of strong (ie humanlike) AI are:
AI Claim 1: “that the programmed computer understands the stories.”
AI Claim 2: “that the program in some sense explains human understanding.”
AI Claim 3: Strong AI is about software not hardware (ie, it
ignores the brain as a possibly unique site of consciousness).
However, these claims that Searle ascribes to strong AI are
for the most part too strong, and not held by the vast majority
of AI researchers then or now.
Claim 1, the idea that AI programs understand text, hinges
on our definition of ‘understand’. I will discuss this idea at the
end because I think it is the most important question.
Claim 2 can be supported from our perspective in the twenty-first century, looking back on the impact of Schank’s research,
and similar AI research of that time. Schank’s work was also relevant to early work in applied AI.
In the 1980s I was a member of the AI group that was a part
of Accenture’s Technology Services Organization in Chicago.
One of the first systems we developed was the Financial Statement Analyzer, a system that utilized a concept of Schank’s to
analyze the yearly financial statements that corporations are
required by the government to file. These statements were shared
with the public, especially with shareholders, so corporations
often spent significant effort on the presentation of the reports,
with elaborate graphics. While the government required specific information in these reports, they left it open to each corporation to determine how to format the documents. Thus, a
normal computer system that could parse tables fairly easily was
not able to automatically process these statements. The Accenture AI group developed a system that could analyze the reports,
find the relevant ‘frames’ (e.g., debt-to-equity ratio) and use rule-based heuristics to determine which reports would benefit from
further analysis by an expert. (‘FSA: Applying AI Techniques to
the Familiarization Phase of Financial Decision Making’, IEEE
Expert, Chunka Mui and William McCarthy, Sept. 1987.)
Our system in reading these reports in a sense did some of the
work that a human understanding the reports would have done.
Not that Schank (or anyone to date) has provided a complete
theory of human language. Rather, the work of Schank and others
led to other productive work on language and other problems of
cognitive science, that is, of ‘human understanding’.
Concerning Claim 3 – that strong AI is only about software
not hardware – Searle distinguishes between machines and programs, and says that strong AI is only about programs, and that
the nature of the machine running it (the computer, or brain) is
irrelevant: it is only the program that matters. This is a strawman,
in that Searle confuses a simplifying assumption – that the mind
can be studied as a system independent of the physical brain – with
the truth, that all the minds we know of are associated with brains.
Even in computer science it has only been fairly recently that
software can be packaged so that it is (mostly) independent of
the hardware platform. At the time of the Chinese Room argument – 1980 – AI software was tightly coupled to the specific
programming language and operating system that the
researchers were using. Only in the last decade or so, thanks to technologies such as the Java Virtual Machine and Docker containers, could software be packaged in a way that’s largely independent of hardware. This is the result of decades of engineering effort.
The brain, however, is not designed from scratch in the way
environments such as the Java Virtual Machine are. The human
brain is the result of one hack upon another, adding whatever
small random mutations happen to increase reproductive success. It would be ridiculous for anyone who truly understands
computers to think that this same level of engineering could be
achieved by nature. We can see this by examining the brain architecture for functions such as vision, which we understand much
better than language. In vision, information is processed in the primary visual cortex. There are modules going from low level visual
processing (e.g., edge and surface detectors) to high level (e.g.,
face detectors in primates, or bug detectors in frogs). In a computer system, each level would have a small number of welldefined interfaces to the level above or below it (and few to more
than one level away). In the brain, however, there are many significant collections of neurons that connect layers with other
layers two or more levels away, as well as major connections to
other areas of the brain. Clearly, then, no complete understanding of the visual system can be had without understanding the
complex biology of the brain. At the same time, it is possible to
study the visual system in the abstract; for instance, simply defining the various levels and the kind of information that is communicated between each level. This vision model, originally
developed by David Marr, which abstracts away from its implementation in a brain or computer, led to great advances in both
computer and human vision. Later research was able to (partially)
map these abstract functions onto the topology of the brain.
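The contrast can be sketched in a few lines of toy Python. Each function below is a purely hypothetical stand-in for one processing level, and each level talks only to the level directly beneath it – the small, well-defined interfaces an engineer would impose:

def detect_edges(pixels):
    # low level: differences between neighbouring values
    return [abs(a - b) for a, b in zip(pixels, pixels[1:])]

def detect_surfaces(edges):
    # middle level: keep only the strong edges
    return [e for e in edges if e > 0.3]

def detect_faces(surfaces):
    # high level: a crude placeholder criterion
    return len(surfaces) >= 3

print(detect_faces(detect_surfaces(detect_edges([0.1, 0.9, 0.2, 0.8, 0.3]))))

In a brain, by contrast, the ‘face’ level would also receive direct projections from levels far beneath it, and from regions outside vision altogether.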
While researchers in cognitive science often talk about mental
functions without describing the specific areas of the brain in
which they occur, this is only a simplifying assumption. It is not
a criticism of researchers that they make such assumptions, since
science would be impossible without them. A simple example
from physics is the equations for computing the force of gravity.
Computing the force on an object with mass X dropped from height Y or launched with force F is trivial. However, when we do this, we are never calculating the true force of gravity. That would require us to include the gravitational pull of the Moon, the other planets, even the stars. The mathematics of even three gravitationally interacting bodies is notoriously hard – the famous three-body problem has no general closed-form solution – and the difficulty grows rapidly with each body
added to the calculations. However, for most purposes we can
get by with the simplifying assumption that just the mass of the
Earth and the object matter.
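To get a feel for how safe this simplifying assumption is, here is a small Python check, using standard textbook values for the constants (the comparison itself is our illustration, not the author’s):

G = 6.674e-11        # gravitational constant (N m^2 / kg^2)
M_EARTH = 5.972e24   # mass of the Earth (kg)
R_EARTH = 6.371e6    # mean radius of the Earth (m)
M_MOON = 7.342e22    # mass of the Moon (kg)
D_MOON = 3.844e8     # mean Earth-Moon distance (m)

m = 1.0  # a 1 kg test mass at the Earth's surface
f_earth = G * M_EARTH * m / R_EARTH**2  # about 9.8 N
f_moon = G * M_MOON * m / D_MOON**2     # about 0.000033 N
print(f_moon / f_earth)                 # roughly 3e-6

The Moon’s contribution is a few parts per million, which is why ignoring it is usually harmless – just as, for many purposes, the mind can be studied while abstracting from the brain.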
Searle’s Argument is Based on a Logical Fallacy
Searle’s argument can be summarized as:
1. Strong AI maintains that a symbol processing system that
passed the Turing Test understands human language
2. The Chinese Room argument demonstrates that a symbol
processing system could pass the Turing Test and still not
understand human language
3. Thus, no symbol processing system that passes the Turing
Test understands human language
This is an invalid argument. All Searle has proven is that it
is possible that a symbol processing system could pass the Turing
Test and not understand language. This is not a proof that every
symbol processing system that passed the Turing Test does not
understand natural language.
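The slide can be made explicit in first-order terms (the notation is ours, not Searle’s or DeBellis’s). Writing Pass(x) for ‘x passes the Turing Test’ and Und(x) for ‘x understands natural language’, Searle’s conclusion requires an inference of the form

\exists x \,\bigl(\mathrm{Pass}(x) \land \lnot\mathrm{Und}(x)\bigr) \;\Rightarrow\; \forall x \,\bigl(\mathrm{Pass}(x) \rightarrow \lnot\mathrm{Und}(x)\bigr)

and no such inference is valid: one possible counterexample to a claim of understanding says nothing about every other system that passes.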
Searle might respond by saying that what strong AI claims is
that any system that can pass the Turing Test understands human
language. However, I’m not aware of anyone in AI who makes
this claim. They simply don’t bother to point out that not every
system that can be imagined in a thought experiment that seems
to understand language necessarily understands language.
To see this, consider another thought experiment: Professor Nietzsche has constructed a quantum computer with
memory that exceeds conventional memory in both space and
speed by several orders of magnitude. He programs his computer with a simple table consisting of zettabytes (10²¹ bytes) of information. The first column in the table contains short stories in Chinese; the second column, questions in Chinese about those stories; and the third column, the answers to those questions.
The program then takes Chinese stories and questions as input,
and looks up the pair in the first two columns of the array that
best matches them (using simple pattern-matching algorithms),
then returns the third value in that row of the array as the answer.
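In miniature, and with difflib’s string similarity standing in as a toy substitute for the ‘simple pattern-matching algorithms’ (the table rows are invented examples), the whole program amounts to something like this:

import difflib

# Each row of Professor Nietzsche's table: (story, question, answer).
TABLE = [
    ("A man bought a hamburger.", "What did he buy?", "A hamburger."),
    ("A dog chased a cat up a tree.", "Where did the cat go?", "Up a tree."),
]

def answer(story, question):
    # Find the row whose story and question best match the input,
    # then return that row's stored answer.
    key = story + " " + question
    def score(row):
        return difflib.SequenceMatcher(None, key, row[0] + " " + row[1]).ratio()
    return max(TABLE, key=score)[2]

print(answer("A man bought a hamburger.", "What did the man buy?"))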
Such a system could perform much better than the Chinese
Room ever could. Yet, no one in AI would consider this to be
relevant to the myriad problems of natural language understanding, because such a system would still be restricted to a
very narrow subset of natural language possibilities. Also, the
idea of a system based on predefined questions and answers contradicts what Chomsky with good reason calls the creative aspect
of language use.
The Definition of ‘Understanding’: Do Submarines Swim?
Returning to Claim 1, the final and most important idea is
that AI systems in some sense understand natural language. This
requires us to examine Turing’s original paper on his Test. The
paper opens as follows:
“I propose to consider the question, ‘Can machines think?’ This
should begin with definitions of the meaning of the terms ‘machine’
and ‘think.’ The definitions might be framed so as to reflect so far
as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found
by examining how they are commonly used it is difficult to escape
the conclusion that the meaning and the answer to the question, ‘Can
machines think?’ is to be sought in a statistical survey such as a Gallup
poll. But this is absurd. Instead of attempting such a definition I shall
replace the question by another, which is closely related to it and is
expressed in relatively unambiguous words.”
(‘Computing Machinery and Intelligence’, Mind, Volume LIX, Issue
236, October 1950)
Even setting aside all the issues I’ve already raised, this is the
essence of the problem with the Chinese Room argument: Turing
is explicitly not trying to answer the question ‘Can machines think?’
by appealing to the definitions we use in everyday language, as
Searle is. Turing is trying to provide a scientific definition of thinking that abstracts away from the natural assumptions most people
bring to such discourse. Thus the question that the Chinese Room
is really addressing is not the question that Turing posed, which
is: “What is a rational definition of ‘understanding’ that could apply to both machines and people?” Rather, what Searle is arguing is that our commonsense notion of ‘understanding’ can’t be
applied to computers. But as Turing said, the way people normally use words like ‘understanding’ and ‘thinking’ is not relevant
to a scientific theory of cognition. Chomsky agrees with Turing,
and says that asking if computers can think (in the commonsense,
Searlean sense) is like asking ‘Can submarines swim?’ (Chomsky
and His Critics, 2008, p.279). In English they don’t, but in Japanese
they do. In English we don’t use the word ‘swim’ to describe what
a submarine does; but Japanese does use the same word for the
movement of humans and submarines through water. That doesn’t tell us anything about oceanography or ship design – just as
thought experiments about ‘understanding’ in everyday language
use don’t tell us anything useful about cognitive science.
© MICHAEL DEBELLIS 2023
Michael DeBellis is a retired Deloitte Consulting partner who now
does independent research in AI and related fields. He can be reached
at mdebellissf@gmail.com. His website is michaeldebellis.com.

Typing to Turing

The bombe, designed by Alan Turing, was an electromechanical device used by British cryptologists to help decipher German Enigma-encrypted messages during World War II.

Can computers think?
The question? – an enigma,
As I type, I hope they do
So they can bring my thoughts to you,
Somewhere in the ether, now.
You brought us back from the brink.
Your mind helped us win the War,
Perhaps it will help us win many more?
Your code-breaking bombe
Drops:
Our gain.
We live on despite your loss,
At the cost of your pain.
Yet your end is not found in death, but hope:
Freedom over battle-tinged fear,
Loved ones reunited.
Your story found as minds grow clear,
History’s blight of injustice, now righted.
The legacy you leave is seen in the screen
As I type these words.
And it is my hope that if computers can think,
then stretch,
And reach beyond,
They will pass on the message to you –
That you, at last, are heard.

© SAMANTHA NEAVE 2023
Samantha Neave graduated from the Open University, and enjoys reading and writing poetry, philosophy, and fiction.
AI & Mind
What’s Stopping Us Achieving
Artificial General Intelligence?
A. Efimov, D. Dubrovsky, and F. Matveev explore how the development of AI is
limited by the perceived need to understand language and be embodied.
Over seventy years ago, Alan Turing developed the
simple but powerful idea that any solvable mathematical problem can in principle be solved with a
‘universal computing device’. The type of device he
described in his 1936 paper became known to researchers as a
‘Turing machine’. Ever since, we have been trying to create
artificial intelligence by programming electronic machines.
Most of the current research in the field of AI is indeed just an
acceleration of that first universal Turing machine. Turing is
also responsible for another fundamental idea that has shaped
research in this area. The Turing test makes us ask: if we cannot
distinguish whether we are holding a dialogue with a person or
a machine, then does it really matter what is in front of us – a
machine or a human – since we’re dealing with intelligence
anyway?
The Merriam-Webster Dictionary defines intelligence as ‘the
ability to learn or understand or to deal with new or trying situations’. Turing’s idea of using language as a tool for comparing
machine and human intelligence, based on how well a machine
can pretend to be human, is both simple and profound. Thanks
to this idea, such wonderful things as voice assistants and online
translators have come to life.
Modern developments are now getting close to the point
when a single computer can tackle any problem, thus resembling a human being in the broadness of the application of its
intelligence. This is called artificial general intelligence (AGI),
which is also sometimes called ‘strong AI’. The idea is, the better
and more accurate the means we employ to improve a program,
the better it ‘understands’ our words, and the closer we approach
artificial general intelligence. But what if this basic assumption
is wrong? What if it is not just language that determines the
‘generality’ or the ‘intelligence’ of an artificial agent? Is there a
possibility that the signpost planted by Turing (and not only by
him) seventy years ago is pointing in the wrong direction, and
we should reconsider our route? In this article we want to put
forward a number of ideas in the philosophy of artificial intelligence. These ideas could offer conceptual support for a new line
of research that will overcome a number of limitations inherent
in early approaches. (This does not mean that those approaches
should, or can, be ‘abolished’, just as Newtonian Mechanics was
not set aside after Einstein, but rather, incorporated into the
Einsteinian view.)
However, before offering new ideas, let us look into one old
idea, and one long-running debate.
An Old Idea about Language
The old idea is the suggestion from Alan Turing that if a
machine imitates intelligence so well that a large percentage of
humans conversing with it by text alone can’t tell it is a machine,
then it possesses intelligence. In fact, in ‘Computing Machinery and Intelligence’ (Mind, 1950), Turing identified several
areas as representing the ‘highest manifestations’ of human
intelligence. His examples included the study of languages (and
translations); games (chess, etc.); and mathematics and cryptography (including solving riddles). If in these areas the output
of a computer cannot be distinguished from that of a human
then its level of thinking is equivalent to that of a human, and
so we can say that we’re dealing with an intelligent machine.
According to Turing, the high-level, intellectual functions of
the human brain can be reproduced in a computer without the
computer precisely imitating the functioning of the brain.
It is noteworthy that only a couple of years after that publication, W. Grey Walter’s ‘turtles’ appeared. These quite primitive robots
showed surprisingly ‘intelligent’ behaviour. They could, for example, find their base station by orienting themselves towards light.
This ability was born in direct interaction between the world and
the simplest programming of the robots, and if Turing had written his paper after this debut, he would certainly have formulated
the problem differently. However, it was his paper of 1950 that
laid the foundations for the linguistic orientation of generations of
artificial intelligence researchers. Turing himself admitted that
comprehensive knowledge of the world is impossible without direct
interaction with it. However, at that time, the idea of artificially
imitating such activities as sports, eating, or sex, seemed unthinkable. Therefore, the British mathematician left those behaviours
for an indefinitely distant future, suggesting to instead focus for
now on games, languages and cryptography. As a result, Turing
initiated a kind of human-machine race that has encouraged the
development of systems performing narrow functions, be it a game
of chess, translating, or driving a car better than a human being.
For his test, Turing was drawing on a Victorian ‘imitation
game’. Here the judge must decide which of the players is a woman
and who is pretending to be one, only by exchanging notes with
the players. Obviously, the judge should not see the participants:
they are separated from them by a wall or a screen. Turing transferred this situation to a computer trying to imitate a real person,
also remaining hidden from the judge by a ‘wall’. The ‘wall’
deprives us of the physical embodiment of our conversation partner, and reduces ‘his’ responses to a limited set of verbal processes.
A Long-Running Debate about Language
In a prominent article, ‘Do Large Language Models Understand Us?’ (Medium, 2021), Blaise Aguera y Arcas considered whether
the successful teaching of deaf-blind-mute children is evidence
that verbal communication could be the basis for developing
artificial intelligence without needing embodied intelligence.
This reminded us of a heated discussion dating back to the end
of the Seventies, which one of the authors of this article, D.
Dubrovsky, initiated, and directly participated in.
Besides the well-known scientific and technical achievements
of the USSR, such as a manned space flight and nuclear energy,
Soviet propaganda announced that the USSR had developed its
own effective teaching technique for deaf-blind-mute children, in the
so-called ‘Zagorsk experiment’. Here teachers showed not only
that the students could form social skills, but that they could have
a fulfilled intellectual life. During the experiment, four students
of Zagorsk Boarding School For The Deaf-Blind entered the
Faculty of Psychology of Moscow State University and successfully graduated. Two of them even defended dissertations.
This could have been a truly significant pedagogical achievement if evidence of falsification hadn’t come out. All four participants were said to have been totally deaf-blind-mute from birth
and completely devoid of not only language but conscious thought,
and indeed any manifestations of psyche whatsoever. However,
as it turned out, they had all lost their sight and hearing abilities
fairly late in infancy, already possessing the full power of consciousness and speech. Moreover, two of them retained some hearing ability, and the other two retained some visual ability – enough
to independently travel round the city on public transport!
Communist ideologists and a number of philosophers – creators of the technique among them – stated that the Zagorsk
experiment proved that the Marxist concept of the formation
of personality is correct. From that point of view, genetic factors play no role: everything is determined solely by social factors. Drawing on the Marxist maxim that ‘being determines
consciousness’, it was assumed that a Marxist teacher could
‘sculpt’ the consciousness and personality of his student literally from scratch. Put simply, the Marxists used the Zagorsk
experiment as a proof that it was possible to educate anyone
from a ‘clean slate’ state – thus postulating a false dichotomy of
nature versus nurture and in particular denying the role of biological and genetic factors in education. According to this Marxist approach, the most important thing for the intellectual development of a person is learning vocabulary and being able to
communicate with other people using words.
These Marxist conclusions were sharply criticized by a number
of philosophers, the central part in this specific debate being played
by Dubrovsky. It was shown that biological, genetic factors play a
fundamental role in the rehabilitation of the deaf-blind-mute. In
the case of the loss of vision and hearing early in life, touch remains
the main channel of communication for a child with the outside
world, as well as some communication using smell and taste. The crucial role here, however, is played by children’s genetic predisposition to language development, which also contributes to their overall sensitivity. A vivid example of this is provided by the
upbringing and education of Helen Keller, who lost both her sight
and hearing at the age of nineteen months, but, as is well-known,
reached an exceptionally high level of intellectual development
becoming a noted author, activist and lecturer.
It is noteworthy that even the deaf-blind-mute who have mastered spoken and written speech and reached a significant level
of intellectual development, continue to rely on sign language
and the sense of touch for communication and exploring the physical environment. They never stop practicing gestural communication. Therefore, using the supposed example of the deaf-blind-mute children as programmable ‘blank slates’ can hardly
be definitive in AGI research, in which language is considered in
more ‘disembodied’ terms.
It is also important to point out that knowing how to use a
language does not in itself mean having intelligence in the true
human sense of the word – of being able to consciously think
about things. In AI, language is rather a tool for interacting with
other things, and with people. By contrast, in explicitly conscious
terms, language is a tool for expanding and deepening the understanding of one’s self, other people, of physical, biological, social
phenomena, and of all kinds of causal and functional relations in
the world around us. For conscious beings, language brings the
ability to generalize, to abstract, analyze, and synthesize – that is,
the ability to think. The agent ascribed with ‘true’ intelligence
must possess all these qualities, as well as self-reflective ability.
Moreover, real (conscious) intelligence is also based on the ‘dark
matter’ of non-verbal perception and communication, and various subconscious processes. All this must be taken into account
when we talk about language, intelligence, and creating AGI.
A Modern Discussion about Language & AI
In the aforementioned article by Aguera y Arcas, the issues of
language, thinking, and having intelligence are considered from
the perspective of developing deep learning through neural networks. Neural nets have paved the way for some outstanding
results in the field of language processing and generation. Arcas
opposes those researchers who believe that intelligence, in the
sense of the capacity to understand the content of text or one's
own actions, cannot be attributed to deep learning language
models. They say that language models are just big statistical
machines that map certain outputs (‘answers’) to certain inputs
(‘questions’). Even though this obviously helps solve a number
of practical problems quite successfully, this does not mean
understanding either abstract or concrete concepts as a human
would. But it is noteworthy that those criticizing Arcas for holding that language understanding is evidence of consciousness in
machines do not deny the achievements of deep learning language models in acquiring some sort of fundamental intelligence.
There are a few arguments put forward against large language models possessing the capacity to understand. For example, if an artificial intelligence is not embodied, has no physical
presence, and cannot sense the world in a multimodal way as
humans do, then its understanding of language must be insufficient, to say the least.
Arcas argues that our linguistic understanding is self-sufficient (‘complete’) because it is based on our innate and acquired
knowledge as well as the rich sensory experience we have, and
so opens up unlimited possibilities for learning. Through language we also have access to socially-determined perceptions
(ie, to culture), richer in comparison to raw sensory experience
that is not refined through language. Therefore, language itself
is able to compensate for the weakness or lack of certain sensory abilities. It is in this context that Arcas refers to the experience of Helen Keller and the education of deaf-blind-mutes.
However, Arcas’s arguments fall short of the mark, since the
success of Keller's education was based on the use and development of her available sensory abilities. This is precisely what
the title of Helen Keller's famous essay illustrates: ‘I Am Blind
– Yet I See; I Am Deaf – Yet I Hear’. Yet, although AI can detect
the world through, for example, cameras or microphones, the
idea that these computers actually experience sensations is much
more difficult to justify. Generally speaking, it is hard to agree
with Arcas’s statement that language can fill the sensory gap
between humans and artificial intelligence, as well as with his
interpretation of sequence learning in large language models,
which is key to understanding conscious intelligence.
Arcas’s main points were critically reviewed in Melanie
Mitchell's article, ‘What Does It Mean for AI to Understand?’
(Quanta Magazine, 2021). She writes, “The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language
cannot gain such an understanding.” Mitchell also notes that
there are a lot of unexplained mechanisms involved in the processing of human speech, as linguistic research confirms. Artificial intelligence could not possibly understand language in the
human sort of sense without this kind of ‘infrastructural’ background. Mitchell also says that, contrary to how Arcas interprets the argument concerning educating the deaf-blind, Helen
Keller's essay proves that both sensory experience and embodiment are paramount to consciously understanding language.
Historic Requirements for Language & Sentience
What’s the connection between the modern discussions about
artificial general intelligence and old debates about language
and the nature of consciousness?
It turns out that transitioning deaf-blind-mute children from
simple practical skills to intellectual communication using
speech or the Braille alphabet always goes through gestural
communication, and gestural communication always remains a
part of communication for these people. Gestural or tactile communication is generally a proto-linguistic stage, a pre-verbal
communication. For many deaf-blind students from the Zagorsk
experiment it remained the main form of communication. This
is important for artificial intelligence because it shows how intelligence is a complex biological product. This product is embodied intelligence, reliant on the ‘dark matter’ of non-verbal communication (such as body language). This indicates that a real
thinking machine would have to be a product of a multi-dimensional interaction with people and with the outside world, both
verbal and non-verbal, occurring both in a virtual and in a real
environment. Yet the classic Turing Test, like the Winograd
schemas and most other popular tests for artificial intelligence,
covers only areas of verbal-virtual interaction. They all lie within
the methodological paradigm set by Turing, and are still behind
a ‘wall’ of virtuality. To break the wall would mean to enter the
field of physical, sensation-filled exploration of the world by
the growing artificial intelligence. After all, we understand that
many animals have forms of consciousness, including
cephalopods such as octopuses, for which thinking and its manifestations turn out to be connected with real living conditions
– with the corporeality of living beings. Furthermore (as has been
emphasized by Dubrovsky), the mind arises in the course of biological evolution only in those organisms that actively move in
the environment – that is, in animals, not plants. It seems then
that comprehensive knowledge of the surrounding world is
impossible without physically interacting with it. Therefore,
one condition for creating a general artificial intelligence is the
capacity to work in different modalities in different environments. This requires access to the non-verbal and the physical.
Examples of artificially intelligent agents that cope with non-verbal tasks are systems that can play computer games, or the virtual TV presenter Elena, created at the Sber Robotics Laboratory. Elena is capable of imitating a real TV presenter, including
movements, facial expressions, emotional expressions and other
gestures. However, neither of these examples leaves the limits of
the virtual. Real interaction with the physical world is still an
extremely difficult task to build into artificial intelligence. In the
case of AGI, this kind of machine must comprehend all four areas
of interaction (movements, facial expressions, emotional expressions, and gestures), as well as working with environments.
The Advent of Techno-Umwelts
Back in the early twentieth century, the biologist Jakob von Uexküll
pointed out that different living beings have different spheres
of world perception – different umwelts. The umwelt of a butterfly is very different from that of a fish, or from that of a person,
for example. The umwelt of a person is of course well-known
to each of us.
By analogy, we propose to call four areas of interaction possible for machines ‘techno-umwelts’. A ‘techno-umwelt’ would be
the domain of perception for a machine: how a machine perceives
the world. Many of us have seen visualizations of the techno-umwelts of unmanned vehicles using radars and lidars in videos,
for example. But the two dimensions of interactions described
above – verbal/non-verbal, and virtual/physical – give four possible techno-umwelts, or areas of perception for a machine: 1)
Verbal virtual; 2) Non-verbal virtual; 3) Verbal physical; and 4)
Non-verbal physical. The versatility that marks general or comprehensive intelligence, that is, AGI, would only be possible when
the machine freely operates in all four of these techno-umwelts.
Current AI systems are capable of coming to recognize objects
of different classes without having been programmed to do so.
This is a major achievement, but it has nothing to do with generality, which we will now define as the capability of an agent to
work in different umwelts. So in order to achieve generality for
an intelligent agent, it will be necessary to implement ‘translators’
between the language of one domain of world perception and the
language of another. Only then could artificial intelligence become
truly multimodal – meaning, it will be able to solve a wide range
of possible tasks and comprehensively communicate with a human.
The idea of the combination of techno-umwelts thus gives
us the opportunity to propose a new definition of AGI:
Artificial general intelligence is the ability of a robot (a machine with
sense-think-act capability) to learn and act jointly with a person or
autonomously in any techno-umwelt (but potentially better than a
specialist in this field), achieving the goals set in all four techno-umwelts, while limiting the resources consumed by the robot.
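As a toy formalization of this definition (the encoding is ours, not the authors’), generality can be read as the simple requirement that an agent operate in all four techno-umwelts:

from itertools import product

# The two dimensions of interaction yield four techno-umwelts.
UMWELTS = {f"{v} {p}" for v, p in
           product(("verbal", "non-verbal"), ("virtual", "physical"))}

def is_general(agent_umwelts):
    # 'General' here means: able to learn and act in all four.
    return UMWELTS <= set(agent_umwelts)

print(is_general({"verbal virtual"}))  # False: a chatbot's single umwelt
print(is_general(UMWELTS))             # True: all four covered

On this reading, today’s language models occupy only the verbal-virtual quadrant, which is precisely the authors’ point.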
As this multidimensional ability emerges it will forever
change the way we interact with technology. After millennia of
philosophical reflection, and centuries of scientific and technological progress, for the first time in history, people will
encounter truly smart non-human things – devices that may come to have even more complete and accurate knowledge about the world and about us than human beings themselves. This situation will call for a new outlook on what a person and a mind
are, as well as a redefinition of many other established ideas.
The redefinition has already begun.
On the one hand, we are beginning to ‘dissolve’ into the technologies and virtual worlds surrounding us, blurring the concept
of ‘human’. On the other hand, as computers explore new areas
of activity, be it chess or machine translation or whatever else,
those areas are no longer exclusive to humans. Perhaps humans
are the final frontier that the machine cannot yet overcome.
© A. EFIMOV, D. DUBROVSKY AND F. MATVEEV 2023
A. Efimov is Chair of Engineering Cybernetics at the National
Science and Technology University MISIS, D. Dubrovsky is Chief
Scientist at the Institute of Philosophy, Russian Academy of Science
and F. Matveev is a student at San Francisco State University.
AI & Mind
Can Machines Be Conscious?
Sebastian Sunday Grève and Yu Xiaoyue
find an unexpected way in which the answer is ‘yes’.
Alan Turing thought that it was possible (at least in
theory) to make machines that enjoyed strawberries
and cream, that British summer favourite. From this
we can infer that he also thought it was possible (again,
at least in theory) to make machines that were conscious. For you
cannot really enjoy strawberries and cream if you are not conscious – or can you? In any case, Turing was very explicit that he
thought machines could be conscious. He did not, however, think
it likely that such machines were going to be made any time soon.
Not because he considered the task particularly difficult, but
because he did not think it worth the effort: “Possibly a machine
might be made to enjoy this delicious dish, but any attempt to
make one do so would be idiotic,” he wrote in his influential
‘Computing Machinery and Intelligence’. He added that even
mentioning this likely inability to enjoy strawberries and cream
may have struck his readers as frivolous. He explains:
“What is important about this disability is that it contributes to some of
the other disabilities, e.g. to the difficulty of the same kind of friendliness
occurring between man and machine as between white man and white
man, or between black man and black man,”
thus reminding us, as he was wont to do, that humans have always
found it difficult to accept some other individuals even within
their own species as being of equal ability or worth. So he says
that the importance of machines likely being unable to enjoy
strawberries and cream resides in this being an example of a
broader inability on the part of machines to share certain elements of human life.
He got that right, at least in principle – just as he accurately
predicted the success of artificial neural networks, machine
learning in general, and reinforcement learning in particular.
However, Turing was wrong to predict – as he explicitly and
repeatedly did – that no great effort would be put into making
machines with distinctively human but non-intellectual capabilities. Rather, the growing demand for chatbots, virtual assistants, and domestic robots clearly shows the opposite to be
true. If machines can be made to be conscious, we probably will
engineer them, sooner or later. Furthermore, since most people
think that consciousness in general makes a big difference, not
least from an ethical perspective, the question of whether
machines can be conscious appears sufficiently important that
more people should learn to ask it.
The first thing to do in answering the question is to specify
what we mean by ‘machine’. When Turing considered whether
machines can think, he restricted ‘machines’ to mean digital
computers – the same type of machine as the vast majority of our
modern-day computing devices, from smartphones to supercomputers. At the time he was writing, around 1950, he had
just helped to make such a machine a reality. Incidentally, he
also provided the requisite mathematical groundwork for computers, in the form of what is now known as the Universal Turing
Machine. So Turing still had a good deal of explaining to do,
given the novelty of computers at the time. Today, most people
are at least intuitively familiar with the basic powers of computing machinery, so we can save ourselves a detailed theoretical
account. In fact, we need not restrict what we mean by ‘machine’
to digital computers. As will be seen, the particular way of asking
whether machines can be conscious that we present here only
requires us to stipulate that the relevant engineering is not primarily of a biological nature.
By far the trickier part of asking whether machines can be
conscious is to determine what one should take the word ‘conscious’ to mean. To be sure, humans are intimately familiar with
consciousness, insofar as an individual’s consciousness just is
their subjective experience. On this common meaning of the
term, consciousness is that special quality of what it is like to be
in a particular mental state at a particular time. It is this same
special quality that many people are inclined to think must be
missing in even the most sophisticated robots.
But the main difficulty in asking ‘Can machines be conscious?’
is that, despite our natural familiarity with consciousness, we
are still ignorant of its fundamental nature. There is no widely
agreed-upon theory of what consciousness is, and how we can
tell when it is present. We certainly do not know how to build
it from the ground up. The trick, as we shall see, is to circumvent this ignorance and make use of our basic familiarity instead.
Man Made
Can we think of any promising way to engineer a conscious
thing? Of course, there is procreation; but this would fall foul
of our condition that the method of engineering not be of a primarily biological nature. The same holds for regenerative methods such as neural stem cell therapy and tissue nano-transfection, which turns skin cells into functional nerve cells. Both procreation and regeneration can be used to engineer conscious
things, in particular humans, either directly or indirectly (by
restoring relevant functions). However, due to the primarily
biological nature of the methods employed, we will naturally
consider the outcome to be not conscious machines, but humans.
Consider now a similar but different method. Recent advances
in biological engineering have achieved functional restoration
of part of the human nervous system, in the form of prosthetic
limbs that are connected with the brain in both directions, thus
enabling fine motor control and proprioception (intuitive knowledge of limb position), as well as reduced phantom limb perception. The technology is still in its early stages, but it already provides empirical proof of the idea that parts of the human nervous
system can be restored using semiconductor materials such as
silicon. It seems reasonable to expect that this and related ongoing research on the human–machine neural interface will yield
further advances in the future, so that more parts of the nervous
system can be restored or replaced, also using materials such as
silicon. In the future, a quadruple amputee might be able to
regain full arm and leg functionality, including haptic perception, proprioception, thermoception, and so forth.
Now suppose the following three things (which should seem
fairly commonsensical) are true. First, the human nervous
system, including the brain and the spinal cord, is constitutive
of human consciousness. This means that for anything happening in the human mind there is nervous system activity underpinning it. Second, an individual’s conscious states normally
include limb-based sensory experience, because the nervous
system extends into the limbs. Third, some amputees’ conscious
states include prosthetic limb-based sensory experience, because
relevant parts of the nervous system have been artificially
restored, as in the above example. It follows from this that such
a prosthetic limb is itself partly constitutive of the individual’s
consciousness, because their nervous system extends into their
prosthetic limb. Since the relevant method of engineering a
prosthetic limb is not of a primarily biological nature, we can
conclude that a machine is partly constitutive of their consciousness. However, it does not follow from this that machines can
be conscious. For it is of course possible that some human part
is ultimately necessary. This may seem particularly plausible
given the fact that the integration of machine parts into the nervous system considered thus far merely affects the peripheral,
not the central, nervous system, i.e. not the spinal cord or brain.
However, it is not obvious what relevant difference it makes
whether central or not-so-central parts are replaced. To be sure,
there are many important differences between the central and
peripheral systems; a foot is not a brain. Yet at a certain level
of abstraction, central and peripheral parts of the nervous system
are indeed the same kind of thing (namely, nerve activity), and
so the theoretical possibility of replacing any given part of the
nervous system becomes difficult to deny.
To reiterate, following recent advances in biological engineering, it seems reasonable to expect that future research on the
human–machine neural interface, as well as on physical neural
networks, memory resistors, and memristive systems, for example, will enable the restoration or replacement of more and more
parts of the human nervous system using materials such as silicon.
Now imagine the following scenario:
A hundred years from now, after a century of steady technological progress, Thesea, while still young, begins to suffer from
a degenerative disease of the nervous system. Thesea is lucky,
though, insofar as implant surgery is available to her whenever
she needs it. She is lucky too that the intervals between surgeries
are long enough that new parts of her nervous system can always
be properly integrated – thanks to various kinds of therapy, as
well as her system’s continued neuroplasticity – before another
part needs to be replaced.
How much of Thesea’s nervous system would need to be
replaced by implants in order for her to be considered a machine
rather than a human? Different people will, inevitably, give different answers. Some may consider it necessary that all of Thesea’s
nervous system, including her brain and spinal cord, or even her
whole body, be replaced before we call her a machine. Either way,
the transformative principle remains the same, so that anyone who
accepts recent advances in prosthetics as proof that parts of the
human nervous system can be restored using a material such as
silicon should, on this basis, be able to agree that Thesea could
eventually become a machine. Moreover, they will be able to accept
that Thesea will eventually become a machine that is conscious
just like humans are (or, at any rate, just like Thesea used to be).
Some will no doubt want to object to the supposed psychological continuity throughout Thesea’s gradual transformation.
This kind of objection may take aim at either the supposed continuity of Thesea’s consciousness, or, less directly, the continuity of her personal identity (since lack of personal identity would
also make doubtful the supposed persistence of consciousness).
To substantiate their qualms, such an objector would
have to explain where, in their opinion, things would likely go
wrong. Presumably they believe that there is a point at which
the process of transformation will become significantly more
involved, and it may be finally impossible to entirely replace a
biological brain with a synthetic one.
It would of course be of great scientific interest to discover any
such point of ‘no further progress’ along the proposed transplant
trajectory. But until we get there, one will at least be reasonably
justified in believing that, given enough time, and taking the smallest steps possible, the human nervous system can be replaced by
parts of a different material, such that a human may be slowly
turned into a machine whilst still retaining consciousness.
© SEBASTIAN SUNDAY GRÈVE AND YU XIAOYUE 2023
Sebastian Sunday Grève (Chinese Institute of Foreign Philosophy
and Peking University) and Yu Xiaoyue (Peking University) are
philosophers based in Beijing, where they are working on the cognitive foundations of human–machine relations, in close collaboration
with colleagues from other disciplines.
Hap & Happiness
Stephen Anderson meditates on misfortune and meaning.
“We hold these truths to be self-evident, that all men are created equal,
that they are endowed by their Creator with certain unalienable Rights,
that among these are Life, Liberty and the pursuit of Happiness…”
– The US Declaration of Independence, 1776.
Ah, happiness… Where is she to be found? Doesn’t it
almost seem, these days, as if the more we have the less
happy we are? We are richer, smarter, have more
leisure time and more elaborate entertainments than
in any period in history on any place on Earth. And though not
yet universal, such advantages are more widely distributed than
ever before. We’re all doing better… and feeling worse, it seems.
Consider when you last went unclothed, unhoused, or without a
meal; or how easily you secured medical, dental or psychological
services when last you needed them. Put your hand in your pocket,
and chances are you find a device capable of communicating anywhere around the world, providing innumerable entertainments
and distractions at a moment’s notice, or securing vast quantities
of consumer goods to fulfill any desire. And yet, are we modern
people actually happier than our forebears of previous ages?
It would seem not. That we are materially better off, circumstantially more blessed, there can be no doubt; but if anything, we are more anxious, more dissatisfied, and more emotionally at sea than we ever were, are we not?
We’re promised the right to pursue happiness. And yet we
might well wonder what sort of perverted trick is this, that the
opportunity for bliss is dangled before our eyes, then somehow
snatched away by our circumstances! What a betrayal. Somebody needs to answer for that, surely.
But before we declare that the universe is against us and start
grinding our teeth at men or at God, perhaps we should take a
more thoughtful look at our expectations.
Happiness Stolen By Fate
In our current age, ‘happiness’ is frequently thought of merely
as an emotion – a feeling of pleasure resulting from the pleasant alignment of circumstances; and rather than ‘pursuing’ happiness, we are waiting upon it to be delivered like an Amazon
package to our front doors.
This is common enough, and we have an excuse for it. ‘Hap’
is an old word meaning ‘chance’ or ‘circumstance’. To have
something hap-pen to you is to be the passive experiencer of
some external stimulus; and how can a passive experiencer be
in any way responsible for the quality of the incoming stimuli?
He or she has no control over that. Thus, a person whom fortune favours turns out to be hap-py, we suppose. The circumstances produce the feeling.
But if happiness awaits the arrival of pleasing circumstances,
the sad truth is that most of us are doomed mostly to misery,
for there are few sets of circumstances so utterly felicitous that
we find ourselves swept up in unimpeded joy, even for a few
minutes. Life, even at its best, is often arduous and unpleasant,
punctuated only occasionally by moments of delight. How then
can any of us be truly happy?
Thomas Hardy, the great poet and novelist, meditated thus in his poem appropriately titled ‘Hap’ (1898). In it he speaks of his circumstances – how ‘Crass Casualty’ interferes with him, how ‘joy lies slain’ and ‘unblooms the best hope ever sown’. He wants to
blame the Almighty, he says, but finds he cannot. Mere ‘Hap’ –
chance personified as the Fates of Greek mythology – not the
Christian God, is really responsible for what happens to him, he
says. He could have been happy, he feels, but he has not been. He
is not. As he looks on his life, it seems a tragic waste, with opportunities for joy ripped from his grasping fingers, not by some malevolent God, but by mere chance, which might just as easily have been
other than it was. This would be less tragic if Hardy could console himself that it meant something; but even meaning is beyond
his imagining. His pain serves nothing, and his pleasure simply
drifts away from him on the uncaring tides of time. In this, he
cannot even find a reason to see himself as heroic. It’s all just sad.
If happiness is what Hardy thinks it is, or what so many of us
think it is today, it’s no wonder that we become bitter. We
believe we must have some opportunity for joy, promised us by
the sheer fact that we are living, and presented before us in all
the fleeting moments of delight we experience as living beings;
but this seeming opportunity is held at distance from us by circumstance. We could have been born rich, or tall, or beautiful,
or admired; but we were born as we were, and fate has doled us
out much less than we might have hoped, and life has proved
more often arduous than pleasant. We feel cheated. And always
we wonder, ‘‘Where is happiness to be found?’’
The Pursuit of… What?
Part of the problem, though, is surely in the popular conception of happiness. For most of us, it seems it is just what I indicated above: an emotion, an experience, a feeling – and worse,
a feeling that comes primarily in response to situations beyond
our control. However, even the mention of ‘happiness’ in the
Declaration of Independence should, if we think about it, alert
us to a certain incoherence in that conception. For if happiness
is a feeling, then how can it be promised us as a right?
“Ah,” you say, “but what’s promised is not happiness, but the
pursuit of happiness.” That’s a good rejoinder, until we recognise that it leaves us no better off. For it suggests we now have
a right to chase circumstances that will produce the feeling of
happiness for us. But isn’t that what we have all been doing
already? And how’s that been working out for us? Besides, it
leaves a fundamental problem unanswered: which circumstances
must we pursue in order to arrive at happiness? We know no
more now than we did before.
But let us entertain the thought for a moment that the writers of the Declaration of Independence were not actually modern
people – at least, not modern in our way, or to our degree. Today,
people who study ethics might suppose that ethics has always
been as diverse and conflicted as the field is now. But as Peter
Adamson so appropriately noted in this very magazine, “What
students may not be told is that for quite a long time all ethical
theory in the European tradition was eudaimonist. It was the only
game in town” (PN 147). In other words, the founding fathers
of the American Revolution were mostly virtue ethicists in their
thinking. This means that when they penned the term ‘happiness’ into the founding document, they were writing with a
virtue-ethical set of suppositions, following in the tradition of
Aristotle. As such, they were channelling a rather different conception of happiness than that with which we are familiar.
How was it different? Well, to find that out we might look
back at Aristotle’s Nicomachean Ethics (c.349 BC), which begins
with an appeal to the Greek statesman Solon’s dictum, ‘Call no
man happy until he is dead’ (p.10). As Aristotle goes on to explain,
this does not imply that Solon was particularly fond of corpses.
Nor does it indicate a cynicism about life so complete that the
grave is to be preferred to life. Not at all. Rather, Solon’s axiom
is a hopeful one. His idea is that it is not wise to pass judgment
on the value of a person’s life based on a single incident, or even
a set of incidents. These may be good or bad, happy or sad, in
their own right; but they provide us with no secure ground for
assessing what the overall life (of which they are but mere parts)
is really like. To pass proper judgment on a person’s life, we
really need to know the entire narrative, from beginning to end.
We can’t say whether or not his or her life was genuinely good
or bad, genuinely valuable or worthless, or even delightful or
awful, unless we know the totality. ‘Lives’ are whole things, complete stories, not just collections of incidental features or fleeting circumstances; not even of particular good or bad decisions
taken at one time or another. So to say you know what a life was
worth, you must know it all. Therefore it must be done, complete, so that you know there are no new chapters to be added.
Then, and only then, are we in a position to know what kind of
life it really was, says Solon: happy or sad, or something else.
Mistaking Aristotle’s intention here is easy for us, given our
own associations with the word ‘happy.’ But the problem is really
in our translation, not in the original. As Adamson has already
alerted us, the word we translate as ‘happy’ in Solon’s dictum is
not at all the concept we now associate with it. Rather, the Greek
word, eudaimonia (literally, ‘good spirited’), far from being a word
of mere emotion, translates more accurately as ‘blessed’.
‘Blessed’ is not a word we moderns use much or understand
well today. Originally, it alluded to the gods: to be ‘blessed’ was
to be ‘favoured of the gods’, or ‘approved by the gods’. As such,
it was not merely to be advantaged by mere circumstance, far
less about gaining a feeling of temporary well-being or delight.
Ancient Greek cosmology was different from ours; they conceived of fortuitous happenings as being tied to the actual intervention of divine beings, and thus indicative of divine approbation. Things the gods approve would be more moral, more
heroic, more ultimately important than those conceived by
human beings in their mundane rounds. And living in such a
way would put one ‘in good standing with one’s tutelary deity’
– which is the full, idiomatic implication of eudaimonia.
Note this, though: such a conception is inseparable from a
belief that human beings are not here by chance, not living for
no reason, and not created without an end in view. Here the
final evaluation of the ‘blessedness’ or ‘happiness’ of somebody’s life not only awaits the completion of all the data potentially available to us to judge, but must also await the time when the gods might assess the matter. So it would only be at the end of a total pattern or course of life that ‘happiness’ would be discernible to anyone, divine or human. Discerning eudaimonia
would be retrospective. However, after death, if one were
remembered, celebrated, and emulated – as the ancient Greeks
did with their heroes – then one could thereby be known as
having been ‘blessed.’ One had attained that for which one had
been created, and had fulfilled one’s destiny. Whether or not
one had enjoyed it on the way wasn’t the point.
Fulfilling Your Purpose
The concept of eudaimonia is pregnant with further implications,
especially on Aristotle’s analysis of it. A potentially blessed person
must first exhibit excellence (aretē) by practicing moral discernment or wisdom in relation to practical action (phronesis). Also,
in order to attain true eudaimonia, her whole life pattern, when
laid out, must disclose that she has reached the highest possible
state of rightful functioning. For Aristotle, each person has a specific potentiality built into the very constitution of their particular being. This implies that no person has an accidental origin,
or a haphazard design. Instead, each individual is created with
what Aristotle called a telos – a purpose, final end or outcome,
somehow coded into the design. Someone who lives consistently
with aretē is fulfilling her design, attaining all she was created to
be, achieving her telos… Then, and only then, can we speak of
her having attained true blessedness, true happiness.
Once we realize this, we can make perfect sense of Solon’s
aphorism. To be happy in Solon’s sense is not merely to experience a feeling, but to be confident that the life one has lived
has, in total, attained to excellence pleasing to the gods and
admirable to any virtuous onlooker. One’s life has, as we say,
‘amounted to something’; one has ‘lived up to one’s promise’
and so ‘attained greatness’.
What’s really shocking to us today is that such blessedness
can be attained in the complete absence of pleasant circumstances. We might use Shakespeare’s Hamlet to illustrate this.
Though beset by personal melancholy, domestic disaster, and
political decay, Hamlet would surely qualify for Aristotle’s
approval, since with his last act he achieves triumph for his
honour, justice for his house, and tranquility for his kingdom,
by avenging his father’s murder, though he dies savagely in the
process. He has proved himself a true prince, and has fulfilled his known destiny – to ‘set right’ the ‘out of joint times’
in Denmark (Hamlet, I:V:210-211). Such a one is truly blessed
and approved by the gods. In death, even his noble enemies
praise him (V:II:399-400). His beloved companion Horatio concludes, “Goodnight, sweet prince / And flights of angels sing thee to thy rest” (V:II:302-303).
Hamlet’s is not a set of circumstances most would envy. But
his kind of epitaph is one Aristotle thinks we should all crave,
and should strive for as we shape our moral decisions. The ultimate goal of virtue is to produce a pattern of life that unimpeded
vision cannot help but recognize as great. Written into this, as
well, is some conception of afterlife – in Greek culture, perhaps
immortality in legend; or in Christian thought, eternal bliss.
Living by virtue produces a life that is complete and worthy
of praise. Living such a life is possible in any circumstances, provided that the experiencer responds to them with courage and integrity. The circumstances themselves say nothing of the possibility of blessedness; on their own, they cannot inhibit it in any way, for it depends not on hap (which may hap in any form), but
on the principled commitment of the person to considerations
such as duty, decency, and destiny. Anyone may be a hero: their
response, not the circumstances themselves, determines it.
In such a sense, then, anybody may be ‘happy’ if he or she so
chooses – provided his or her character ultimately proves adequate to
the specific challenge of his or her life. The gods may then look
with retrospective approval at one’s course, strewn though it be
with bitterness and pain. And even in life, one may find it a consolation that one has ‘done well’, so long as one has lived by virtuous principles with noble patience. All of that continuously
remains within the control of the individual. We make our own
choices, not of our circumstances, but of our responses to them.
Self-Sabotage By Bad Thinking
Needless to say, this is not an understanding of happiness
common in our (post-)modern world. For a start, many no
longer think of a God or gods as having anything to do with the
conditions of our life. We are, we think, late apes of a sort,
thrown into this scene by chance, refined by time and material
laws, and ending up wherever we are by forces utterly indifferent to our welfare. Meanwhile, suffering can never be genuinely
‘heroic’, or even ‘tragic’ in the literary sense of that word (as
the result of a potentially heroic act that ends badly). Suffering
is, instead, just an unfortunate byproduct of living in an entropic
universe. It has no meaning, no larger context or grand narrative that will allow us to speak of it having a ‘purpose’. It happens. That’s all. And we, being mere playthings of hap, how can
we ever find happiness?
So we cannot. Instead we pile up our possessions, extend our abilities, expand our options, and never find it enough to produce that abiding feeling of pleasure we were expecting.
Having no faith that suffering here can mean anything there, we
see nothing but loss in our struggles against our lot in life. Like
Hardy, we wanted bliss, and got travails. And we always feel we
deserve better than to die and turn to dust at the end of our
short, sad road of life.
But perhaps (there is that word again) we should consider
that it is our own expectation, not the Supreme Creator or some
polytheistic version thereof, that has let us down. Specifically,
it is the chance-driven, hap-hazard world we have framed for
ourselves, that has made emotional happiness so elusive. If this
worldview is a necessary concession to being realistic, as materialists and their ilk would have us believe, it is no less lamentable
for that. It’s cold comfort indeed to think we have no possibility of higher blessedness than to suffer intermittently, sate an
indifferent cosmos with our blood, and then to feed the worms
well. The Aristotelian account of eudaimonia offers us much
more room to breathe.
Perhaps we can begin to believe ourselves more than dying
dust if we can bring ourselves to believe in a story bigger than
our own. But such a story will also have to be greater than our
contingent social ethos (that is, more than just some fashionable cause); and it will have to be one we can believe wholeheartedly, without lingering cynicism. We would need to believe
we have a telos again – an objective purpose – and that the often
painful struggle to achieve it is worthwhile, with or without the
immediate gratification of present pleasures.
Can we still have that? Perhaps we might be better to ask,
‘Can we really live without purpose?’ Can human beings thrive
and be fulfilled in the absence of a transcendent order capable
of giving an objective foundation to meaning, morals, purpose,
and hope? It seems we’re not doing very well at present without it; and the happiness we seek without it seems ever more
elusive all the time. Our brief fits of distraction, delirious entertainment, temporary achievement, or chemically-induced pleasure, are apparently nowhere near adequate to offset our sufferings; and an abrupt change of circumstances for the worse is
enough to vaporize them all.
Aristotle reminds us that happiness always requires a larger
context. In this, he echoes a traditional Christian distinction
between ‘happiness’ and ‘joy’ – the latter being an enduring
state of well-being, not dependent on circumstance for its continuation, but rather on the sense of having submitted one’s
life-plan to eternal values (and in the theistic case, having put
oneself into a right relationship with God). It may be that it is
the lack of any such larger context that leaves so many of us
today, like Jay Gatsby in the final lines of F. Scott Fitzgerald’s
The Great Gatsby (1925), yearning for an ‘orgiastic’ and ‘elusive’
kind of happiness that ‘‘year by year recedes before us.’’
© DR STEPHEN L. ANDERSON 2023
Stephen Anderson is a retired philosophy teacher in London, Ontario.
Hannah Arendt
On the Spectre of Nuclear War
Maurits de Jongh finds our contemporary situation reflected in earlier states.
On 27th February 2022, three days after the invasion
of Ukraine began, Vladimir Putin ordered his generals to put Russia’s nuclear deterrent force on high
alert. Seventy-seven years after Hiroshima and
Nagasaki, Putin’s words brought the spectre of nuclear war back
onto the world stage.
During the first months of the war, Western leaders kept
their cool, not responding to his nuclear rhetoric, although the
then French Minister of Foreign Affairs, Jean-Yves Le Drian,
did remind the Russians on the first day of the invasion that
NATO is also a nuclear alliance. Otherwise, Putin’s nuclear
threats were met in the West with what Emmanuel Macron
called ‘strategic ambiguity’. But the tide is changing. As Russia
insists its nuclear threats are not a bluff, the White House has
repeatedly warned about the catastrophic consequences that
would follow nuclear escalation by the Kremlin. Yet Western
leaders also continue to insist on the utmost caution in dealing
with Putin. Numerous Eastern European leaders object that
for the Russians caution is often perceived as weakness: that
Putin might take it as a license to resort to non-conventional
weapons – all the more so since his intended Blitzkrieg has turned
into a farce. Indeed, as the Ukrainian army gained the upper
hand, Putin’s desperation became increasingly expressed by
outright nuclear blackmail. And so Western leaders keep wobbling on a tightrope between caution and decisiveness, under
which the abyss of reckless escalation lurks.
This balancing act is by no means eased by Putin’s erratic
attitude. Putin is said to no longer behave like a rationally calculating strategist, but like a paranoid, isolated leader who harbors grotesque fantasies of a restored Russian Empire. The
more Putin radicalizes, the more the world worries that he may
not keep his finger off the button. Yet as long as Putin fears
death – and thus also the mutually assured destruction of nuclear
war – we have some assurance of nuclear safety.
Analyses of the extent to which nuclear violence is a real threat
in today’s world tend to focus on the capriciousness of one man
– be it Putin, or ‘rocket man’ Kim Jong-un. A fixation on these
men also obstructs a more fundamental reflection on the existential danger of nuclear war. What does the presence of nuclear
weapons say about our world, for instance? What implications
does it have for world politics?
Hannah Arendt’s (1906-75) political philosophy can help us
make sense of the spectre of nuclear destruction at a time when
the public seems curiously lethargic about the issue. Three of
her insights stand out.
Totalitarianism & The Bomb
Her first insight is that nuclear violence, alongside totalitarian domination, is a defining event of the modern world.
Politics is essentially about events. A journalist once asked
Harold Macmillan when he was British Prime Minister what was
the greatest challenge to a statesman. He replied: “Events, dear
boy, events.” In The Origins of Totalitarianism (1951), Arendt
depicts Hitler’s and Stalin’s totalitarianisms, together with the
Bomb, as the central events that left their mark on the contemporary world. In both events the hubris of the modern age reaches
a climax, and humanity finds itself more alienated than ever before.
Let’s first take a look at totalitarianism.
Whereas murderous oppression by tyranny is as old as recorded
history, the totalitarianisms of Hitler and Stalin were without
precedent. The tyrant, as we know since Aristotle’s analysis, exercises arbitrary power and wanton violence to further his private
interests. A tyrant’s actions are characterized by arbitrary lawlessness, his thinking by indifference to the common good. The totalitarian leader, by contrast, has the highest regard for the law: yet
in a totalitarian system, the law does not constitute a stabilizer of
human relations, as it does in liberal democratic states. Instead,
the ‘law’ is understood as some superhuman and transhistorical
force, such as racial superiority or class struggle, which can be
used to justify the regime’s actions. The vocation of the totalitarian leader is to let this law unfold in the present. Equally sharp is
his notion of the common good – a utopia in which the people
are purged of whatever the law says pollutes it: inferior races, parasitic capitalists, or other notions of foreignness and otherness.
It is precisely the supposed inevitability of the law that
strengthens the totalitarian leader in his conviction that ‘everything is possible and everything is permitted’, as Arendt puts it.
This conviction grounds his efforts, not so much to change
human nature, as to rob it of its humanity, freedom, and dignity.
Whereas the tyrant employs terror and propaganda opportunistically to secure his private interests, the totalitarian leader structurally employs such tactics to propel the law of empire; not only
by isolating people, but also by compressing them into an indivisible mass. To separate the wheat of the ‘real’ people from its
chaff, policies of persecution create streams of refugees and exiles.
The concentration and extermination camps of totalitarian systems reduce people to ‘specimens of the human species’, preceding the destruction of these ‘superfluous human beings’.
In a totalitarian system, rather than imposing limits on what is
possible and permissible, the law and the ‘common good’ dictate
endless expansion of the state. The Russian mass killings in Bucha
and Kramatorsk; the ruthless bombings from Kharkov to Mariupol; Putin’s nuclear threats: there seems little doubt that this man
knows no limits. Putin’s brutality thus provides ample support for
Arendt’s hypothesis: “the totalitarian conviction that everything is
possible has so far proved only that everything can be destroyed.”
The notion that everything is possible and permitted is not
just a driving force of past totalitarianisms, it is also a central
feature of the modern age. And although courage and ingenuity have inspired unprecedented scientific and technological
achievements, the pride of tyrants can turn nearly every triumph Pyrrhic. Nuclear science and technology exemplify this
lurking downside. While nuclear power as an inexhaustible
source of energy might make us believe that everything is possible, nuclear weapons and waste are constant reminders that
everything can be destroyed. Arendt argues that we pretend to
stand on Archimedes’ point, from which we subject the earth
to our designs (Archimedes once claimed that if you gave him
a lever long enough, he could move the earth). This pretence
disregards our earthly embeddedness – the vulnerability of life’s
processes, and the fragility of human civilisation.
It would be a stretch to claim that science as such disregards
limits. The opposite seems true. After all, experimental science
can only test hypotheses, and thus make progress, in a setting
that is strictly delimited from the wider environment. As
Arendt’s contemporary (and ex-husband) Günther Anders
emphasized, however, this delimitation is alien to nuclear science. The impact of nuclear accidents or explosions is potentially so widespread that the nuclear laboratory is no longer dis-
tinguishable from Planet Earth as a whole. The essential character of nuclear weapons, therefore, lies in their ability to destroy
the entire Earth – in Anders’ words, Ihre Allmacht ist ihr Defekt:
‘‘their omnipotence is their defect.”
The common denominator of the Bomb and totalitarianism
lies precisely in this modern hubris; in driving transgressions
that risk humanity’s ultimate self-defeat.
Power & Violence As Opposites
The question arises of whether threatening nuclear warfare
indeed makes one omnipotent. In dealing with this question,
Arendt’s second insight seems right: instead of manifesting
power, nuclear weapons are much more likely to undermine it.
With this insight Arendt challenges the common reduction of
power to violence – the notion that power, as Mao Zedong claimed,
grows from the barrel of a gun; or as Clausewitz put it, that war
is essentially the continuation of politics by other means. The
understanding of power behind such notions is that my exercise
of power consists of imposing my will on others. The American
political scientist Robert Dahl’s definition is telling: “A has power
over B,” he writes, “to the extent that he can make B do something that B would not otherwise do” (Behavioral Science, 1957).
The usual way we think about power thus emphasizes power over.
Viewed this way, it makes perfect sense that violence, the ultimate means of me imposing my will on others, is seen as a manifestation of power. And to the extent that nuclear violence is the
most violent form of violence, it is easy to conclude that it must
also be the most powerful manifestation of power.
Our thinking about both domestic politics and international
relations is saturated with this understanding of power. According to Arendt, it reinforces our image of the state as a hierarchical relationship of command and obedience – of power over – sanctioned by the threat of violence; rather than as an association of
free and equal citizens in which shared power – power with – is
checked by law. In line with this picture, world politics is presented as power struggles between states pursuing their self-interest – security and prosperity – in an international state of nature.
World politics is nothing but an anarchic race for raw power over.
Against this backdrop, we can understand the attempt by so-called ‘neorealists’ to reconcile nuclear proliferation with the
classical doctrine of ‘balance of power’. The quest for a balance
of power traditionally consists of preventing an unbalanced concentration of power in the hands of one state or alliance of states.
Since any state with nuclear weapons by definition possesses an
unbalanced concentration of power over those which don’t, this
reconciliation seems doomed to fail. Nuclear powers are, after
all, capable of the complete destruction of countries, if not global
civilisation as a whole. Neorealists, however, propose nuclear
deterrence as the perfect policy instrument to achieve balance of
power. Indeed, as long as nuclear weapons exist, you had better make
sure you qualify as a nuclear power. The most advanced and
extensive nuclear arsenal serves as an insurance policy against the
dominating power of other states. Thus neorealists depict the
nuclear spectre not as an existential global threat, but rather as a
global public good. ‘Deterrence works’, they say.
It is difficult to quarrel with this perverse optimism – with
the notion that the threat of mutual assured destruction guarantees national and even global security and peace. For example, we might wonder whether Putin would have left Ukraine
at peace if the country had not handed over its nuclear weapons
after the 1994 Budapest Memorandum. Moreover, critics of the
proposition that nuclear deterrence is effective can only prove
their skepticism with hindsight, once nuclear threats have started
to turn into nuclear destruction. The problem is that we cannot
be sure if anyone will still be alive to settle the question.
It is equally difficult to dispute that the neorealist’s optimism about nuclear deterrence rests on equating power with
violence, and thus on understanding power as power over.
Arendt argues that a more accurate understanding of political
power must instead emphasize the collective dimension of
power – power with. Power, she says, is never the exclusive
prerogative of one person. Of course we can say in a metaphorical sense that a political leader is ‘in power’, but the basis of
this lies in the legitimation of that power by the public. So
power is in both the first and last analysis shared: it rests on
the ability of people to organize for a common purpose. Public
support underpins power, while violence leans on instruments
– weaponry – which in principle can be deployed without political support. As Arendt put it in On Violence (1970), “The
extreme form of power is All against One, the extreme form
of violence is One against All.” The exercise of political power
is a form of what Habermas called ‘communicative action’: it
arises out of the public exchange of arguments and concerted
action. Its locus is therefore in what German philosopher
Rainer Forst calls the space of reasons. Power gains legitimacy
when leaders provide reasons that are acceptable to the public
and that respect legal boundaries. Understood in this way, violence is anything but a manifestation of power. It is usually a
sign of political weakness. When you resort to tactics such as
blackmail, threats, or physical violence to get your way, these
are rightly seen as acts of desperation, for only when we fail to
convince others of our will does the temptation arise to impose
it on them by force.
This does not mean that the legitimate exercise of power –
which is always power with – categorically excludes the use of
violence. Nor does it mean that we should deny the state a
monopoly on the legitimate use of violence. Nor that we should
question the duty of a government to defend its citizens against
foreign aggression. It does mean, however, that violence can
never be a substitute for public support as the basis of power,
and that the political use of violence must stand tests of accountability and legality. Violence can only be publicly justified when
it is seen as a proportionate means to achieve a common goal.
Where power is thus communicative, violence has a strictly
instrumental character: it is a means to a given end. There is little
reason to break eggs unless this would enable us to make an
omelette, to build on one of Arendt’s favorite metaphors. However, she underscores how violence as a means to an end always
risks overshooting its goal. This risk only increases as the instruments of violence become capable of greater destruction. Thus
the political goals of security and freedom are in constant danger
of being overwhelmed by violence. With the threat of nuclear
war, the tension inherent in political violence reaches an absolute climax. Indeed, political goals such as freedom and peace,
and the power of citizens joining hands to achieve them, are
meaningless if the world falls to nuclear pieces. The Bomb, then,
blows up the whole category of means and ends – the category
upon which the justification of violence rests.
By seeing power as power with, we can see sharply how violence does not so much manifest power as undermine it. “Violence can always destroy power,” writes Arendt: “Out of the
barrel of a gun grows the most effective command, resulting in
the most instant and perfect obedience. What can never grow
out of it is power.” This conclusion that communicative power
can always lose out to violence is terrifying. Yet Arendt also helps
us to appreciate how much the Ukrainian authorities, President
Zelensky in particular, have gained power over the past year.
Zelensky garners public support and builds communicative trust
among Ukrainians and world leaders alike. The waning power
of the Kremlin, and of Putin in particular, stands in stark contrast to this. Putin’s power seems to decline with each increase
in both his domestic repression and his foreign aggression. The
objection that Putin has Russian public opinion on his side, and
that his power therefore remains intact, is hardly convincing.
Public support contrived through a propaganda machine is no
more solid as a foundation of power than quicksand.
Courage: Past & Future
On the first day of the invasion, as Russian armoured columns rolled towards Kyiv, the Americans apparently offered to evacuate President Zelensky, and he replied, “I need ammunition, not a ride.” From the determined armed resistance of civilians-turned-soldiers, to the non-violent protest of the inhabitants of Kherson in the face of the Russian occupiers, that the Ukrainians display courage is beyond doubt. But we hardly seem to realize how the spectre of nuclear escalation lurking in the background calls into question the virtue of courage itself. This is the third insight Arendt offers us. The danger of nuclear weapons, she argues, unsettles the very meaning of courage as a disposition to take care of the world.
All the heartfelt words of praise from the West may be accompanied by a certain embarrassment. We may wonder what value Ukrainians place on them, when Putin’s provocations went on virtually with impunity for years; when financial and legal service providers in the West so diligently obliged Russian oligarchs; when half-hearted promises of NATO or EU membership were made; when the sufficiency of Western military support for Ukraine remains in question; and when the short-sighted energy dependence of Europeans on Russian fossil fuels financed the Kremlin’s war preparations.
Considerations like these are not the only reason for our discomfort. This discomfort touches on a deeper disorientation afflicting the rich, free countries of the West. Francis Fukuyama already pointed it out thirty years ago, when he claimed we were reaching the end of history with the spread of liberal democracy and the capitalist market economy: he wrote that ‘the last man’ would be tormented by an unbearable sense of emptiness and boredom. We hardly find salvation in our conspicuous consumption and our obsession with social climbing. We continually fall short in our solidarity with one another, let alone with the rest of the world.
Precisely because we find it so difficult to find meaning and orientation, it was tempting to experience the courage of Zelensky and his people as a triumph which, for a fleeting moment, freed us from the liberal void we inhabit – our alienated world, in which ‘heroism’ is a term reserved for top athletes and pop stars, but in which we cannot succeed in embracing our own individuality. Our waning attention to the war as the months pass shows just how ephemeral our ecstasy is. Since ongoing Russian terror is the price Ukrainians pay for their courage, it is outright perverse for us to use it to satisfy our own psychological needs.
Arendt insists that courage takes no interest in our psychological states. Courage instead finds meaning in care for the world. Indeed, courage requires self-sacrifice for the sake of the earth, and for humanity as a whole. Arendt insists that awareness of the fact that “man is not immortal, that he sacrifices a life that one day will be taken from him in any case” is a prerequisite for courage. After all, without our mortality, there is little to risk or sacrifice. But self-sacrifice, in turn, presupposes the belief that death is preferable to a life deprived of dignity and freedom – especially if that deprivation results from political oppression.
But courage does not only find meaning in our individual mortality. The endurance of the world, and the survival of humanity as a whole, also constitute necessary conditions for courage. According to Arendt, courage comes at base from the assurance we need that we will leave our mark on the world. By showing courage – by sacrificing ourselves for the freedom of our countries, for the preservation of the world, or for the dignity of all life – we obtain, as it were, a second life. Our contributions live on in the small and great stories that people tell each other and pass on. Courage, then, does not lie in the expectation of eternal life, but consists instead of aspiring to what Arendt calls a ‘worldly immortality’. Courage rests on the conviction “that posterity will understand, remember, and respect the individual mortal’s sacrifice. Man can be courageous only as long as he knows that he is survived by those who are like him, that he fulfills a role in something more permanent than himself, ‘the enduring chronicle of mankind’, as Faulkner once put it.”
So understood, the threat of nuclear war threatens an irreparable break in human courage. For how can we still be courageous when the survival of humanity as a whole can no longer be taken for granted; when there would be no world to leave behind, and no posterity to follow us? Arendt is adamant that with the risk of nuclear annihilation, the conventional meaning and value of courage is lost. The risk of climate catastrophe also undermines the conventional understanding. But besides this twofold threat stands another disturbing fact. Medical technologists already speak of old age, and thus of mortality, as a disease. They are more than optimistic about the future reversibility of the ageing process. Thus, today’s courageous humans cannot rest assured that humanity as a whole is immortal; but neither do we know whether the individual will in fact remain mortal. Both scenarios unsettle the conditions of courage.
From the insight that courage loses its conventional meaning in times of nuclear weapons and other forms of radical rupture, it does not follow, however, that ‘courage’ has become meaningless. Arendt protests against resignation and lethargy in response to the threat of destruction. Instead of falling for defeatism and nihilism, Arendt encourages future generations to give political courage new meaning: “By putting in jeopardy the survival of mankind and not only individual life or at the most that of a whole people,” Arendt concludes, “modern warfare is about to transform the individual mortal man into a conscious member of the human race, of whose immortality he needs to be sure in order to be courageous at all and for whose survival he must care more than anything else.”
If our developed capacity for self-destruction means that the stakes could not be higher, we can also claim that political courage is more meaningful and acute than ever before. Just as Extinction Rebellion and other movements seek to subdue the spectre of climate disaster, so too do we need to rein in the spectres of nuclear destruction and totalitarian terror. It is up to all of us, including our political leaders, to summon the courage which that requires.
© MAURITS DE JONGH 2023
Maurits de Jongh is an assistant professor in the Ethics Institute at Utrecht University, in the Netherlands. He specializes in political philosophy and the history of economic thought, and is currently working on a book entitled The Primacy of Public Goods.
• The original Dutch version of this essay was published in July 2022 as a series of posts on the public philosophy blog Bij Nader Inzien.
Bursting Shell by Christopher Nevinson, 1915
The Urgency of Art
Sam McAuliffe thinks that art offers another way of thinking.
Today science is widely regarded as the bastion of
truth and knowledge. Technology daily demonstrates the truth of science to the person in the street,
religion is ever trying to align scientific insight with
its doctrines, and we largely expect our politicians to consider
and abide by scientific evidence. No matter how rigorous or
robust the science is, however, it fails to incite the social change
it shows to be needed, climate change being the obvious example. Moreover, some very influential philosophers equate science and technology with thoughtlessness. Could they be right?
And if so, could art offer an antidote?
The political theorist Hannah Arendt summarised the problem best in The Human Condition (1958), when she wrote:
“The reason why it may be wise to distrust the political judgment of
scientists qua scientists is not primarily their lack of ‘character’ – that
they did not refuse to develop atomic weapons – or their naïvete – that
they did not understand that once these weapons were developed they
would be the last to be consulted about their use – but precisely the
fact that they move in a world where speech has lost its power” (p.4).
This provocative passage by Arendt suggests two important
insights. First, she highlights the limits of science, or at least,
of scientists. Just as the atomic scientists of the 1940s were incapable of controlling how the product of their labour would be deployed, so contemporary scientists are incapable of catalysing
the necessary social change on urgent issues such as climate
change and warfare. (This isn’t a personal criticism, for strictly
speaking it’s not their job to do so.)
Second, Arendt highlights the distinction between knowledge
and thought. Scientific knowledge, the great triumph of the scientific method, is expressed in equations and data inaccessible to those without mathematical skills, and is thus distinct from speech and discussion. Even as the power of the word slips away, we become enraptured by the progress of science and technology, yet remain unable to think beyond the gadget in front of us.
Arendt is not the only philosopher to speak of our ‘thoughtlessness’ in relation to science. Her former lover Martin Heidegger wrote extensively about technology and society. In his
essay Discourse on Thinking (1959), he claimed that we’re living
in a thoughtless time: “Thoughtlessness is an uncanny visitor
who comes and goes everywhere in today’s world. For nowadays we take in everything in the quickest and cheapest way,
only to forget it just as quickly.”
We turn to science to cure cancer, improve crop yield, harness energy, design our bombs, and understand the laws of physics
– so why is it that some of our greatest thinkers have drawn a
connection between science/technology and thoughtlessness?
And if they’re correct to do so, what can we do about it?
The Limits of Science
In his book The Gay Science (1882) Nietzsche famously claimed
that ‘‘God is dead! God remains dead! And we have killed him!’’
(p.120). However, we must remember that Nietzsche is lamenting the death of God, not celebrating it. God was once the pillar
of truth and morality, and as long as we followed the word of
God, we could rest easy knowing we were on the right path.
But as various voices called the word of God (that is, the Bible)
into disrepute from the eighteenth century onwards, the stable
footing that monotheistic religion offered the West crumbled
away. We Westerners were left perplexed. If not to God, where
do we now turn for knowledge and moral guidance?
Many would say that science has been passed this baton. So we
must understand the limits of science. For instance, science has
almost nothing to say about ethics, and can teach us little about
politics. So if we concern ourselves only with that which falls under
the banner of ‘scientific method’, we necessarily narrow the questions we ask and the answers we seek, and ultimately we will fail
the basic task of humanity: surviving together. Indeed, we are no
better at getting along now than we were before the scientific revolution; we’re just a lot more powerful in what we can do.
Even if we accept that science cannot answer our existential
predicament, we should still address the fact that both Heidegger and Arendt equate science and technology with thoughtlessness. A popular anti-religion argument says that science moves
ever onward and constantly seeks development, whereas religion
is dogmatic. However, although fixed thinking may get us
nowhere fast, riding the wave of science and technology leaves
us no time for contemplation. The pace of progress leaves us no
time to think. As Heidegger says, ‘we forget to ponder’.
Lessons From Art
Given God’s receding (or ‘death’), and the fact that science is
limited in what it can achieve, where should we turn? What will
guide us back onto the path of thoughtfulness?
Some leading contemporary philosophers, such as Günter
Figal, Santiago Zabala, and Jennifer McMahon, suggest that art
offers the necessary path to salvation, since art is a site of intervention and interpretation. Far from merely pretty paintings
or catchy melodies, and so being of peripheral concern to our
contemporary predicaments, art offers engagement.
Art does not have to be ‘agreeable’: pretty or catchy, etc. Art
can be agreeable; but it can also be ugly, weird, or banal. The
defining characteristics of art are instead that it is perceived by
the senses, and that it challenges us to understand the world differently. When we engage with art, we step out of the bubble
that social media and our other experiences have created around
us; we put our prior expectations to one side; and truly try to
understand the artwork before us. A work of art is never what we
want it to be – it is independent of the meaning we project upon
it. To understand a work of art, we must attempt to receive what
it is trying to say. And so in aesthetic contemplation, for that
moment, however long it lasts, we are spurred to thoughtfulness.
Art appreciation is perhaps not an activity we readily equate with ‘thinking’, especially given the extent to which
we are conditioned to technological modes of thought. Aesthetic contemplation is not the solving of a puzzle or problem:
it is not the application of the right formula or theory to reach
a desired outcome. Rather, it is being ‘caught up in’ or ‘taken up by’ that which is before us, and allowing that experience to transform us in some way. Contemplating art is not a mode of thinking that attempts to dominate or control its subject. Instead, it is a mode of thinking where one converses with a subject, or participates in the experience. It is a mode of thinking where one simultaneously arrives at an understanding of the subject and oneself. Through good art we learn how the world might be understood differently to how we understood it before.
Engaging with art thus illuminates new or different ways to think, do, and be; and so it broadens our horizons. And each such engagement makes us more comfortable with this participatory mode of thought. In this way, art offers lessons in a mode of thoughtfulness distinct from the scientific method.
Of course, the thinking described here should not replace scientific thought. It should complement it. Yet it is only when we recognise distinct modes of thinking and understand how they fit together and what they can offer one another that we reach a certain cognitive balance. Considering the current state of the world, it is clear we have lost this equilibrium. Indeed, it is questionable whether such balance has ever been achieved. But striving toward balance in thought is our constant task.
The contemporary urgency of art is that it teaches us to engage with the world in vital ways many of us have forgotten, overlooked, or ignored. Whatever your passion – music, dance, sculpture, poetry, architecture – do yourself and the world a favour, and take a lesson or two in thinking from art.
© SAM MCAULIFFE 2023
Sam McAuliffe is Dean of Studies and Careers at Mannix College, Monash University. He is the author of Improvisation in Music and Philosophical Hermeneutics, and numerous scholarly essays.
Spinoza’s Godly Mystery
In his Ethics Spinoza did expound
A vision of God that did astound:
A single substance, infinite, and all
that sustains the universe, through its call.
It’s not a deity with human desires,
But a force, that brings all things to life’s pyres.
It’s all that is, and all that can be,
With modes that define all reality.
God and nature are one and the same
And our thoughts are but a tiny flame
That flickers in the vast cosmic fire,
And all we know is but a small desire.
So with this thought Spinoza did impart
A mind-bending truth that pierces the heart.
And though some may laugh at his philosophy,
It remains a classic, for all posterity.
© SIVANTH ADITHYA.N 2023
Sivanth Adithya.N is a philosophy graduate from Delhi University with a passion for philosophy and writing.
Descartes
Ergoing Nowhere
Noah Harris says Descartes failed to find absolute foundations for knowledge.
Cogito ergo sum: ‘I think, therefore I am.’ This statement by René Descartes, which has become known
as ‘the Cogito’, is perhaps the most famous phrase
in all of philosophy. It summarises his argument that
one’s own existence is the only thing one can never be mistaken
about. In his Meditations on First Philosophy (1641), he argues
that with every other proposition – even those of basic mathematics – it may be that every time I think them, an all-powerful demon interferes with my mind and deceives me, leading
me to be mistaken. For example, every time I think about 2+3,
this demon may be interfering to make me believe the total is
5, when actually it’s something else. Indeed, I might be being
deceived by this demon about perhaps everything I think I know.
Perhaps my belief that I went to the shop last week is based on
a memory which has been entirely fabricated by the demon.
However, there is something different about the proposition
‘I exist’. This is because I cannot be deceived about something
unless I exist – more specifically, unless I exist as a thing capable of being deceived, that is, a thinking thing. However, if I
have to exist in order to be deceived about anything, I would
have to exist in order to be deceived about the proposition ‘I
exist’ too. To be deceived about this proposition would therefore require both that I exist, in order to be a thing which can
be deceived, and that I do not exist, in order for the proposition ‘I exist’ to be deceptive. As Descartes argues, this contradiction shows that it is impossible to be deceived about the
proposition ‘I exist’. Therefore, for anyone who thinks ‘I exist’,
it must be true. So: I think, therefore I am.
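The structure of this reasoning can be set out explicitly. The following is a minimal formal sketch; the labels are mine, not Descartes’ – read D(p) as ‘I am deceived about proposition p’ and E as ‘I exist’:

\[
\begin{array}{lll}
(1) & D(p) \rightarrow E & \text{(to be deceived about anything, I must exist)} \\
(2) & D(E) \rightarrow \lnot E & \text{(for `I exist' to be deceptive, it must be false)} \\
(3) & D(E) \rightarrow (E \land \lnot E) & \text{(from 1 and 2)} \\
(4) & \lnot D(E) & \text{(from 3, by reductio, since } E \land \lnot E \text{ is a contradiction)}
\end{array}
\]

Note that step (4) is a reductio ad absurdum, and so itself leans on the laws of logic – a dependence whose significance will emerge below.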
With this argument, Descartes believed he had found a statement which could be known without any prior assumptions or
knowledge, and which could therefore be used as a ‘certain and
unshakeable’ foundation for further propositions about the world,
and hence for human knowledge in general. However, Descartes’
attempts to derive anything further from the Cogito famously
failed, quickly devolving into a reliance on the benevolence of
God, with the dubiousness of his arguments for this idea undermining the certainty of his resulting propositions. However, I
want to briefly question whether Descartes ever secured the certainty which he sought even in his first proposition.
Descartes’ ‘evil demon’ argument aims to go as far as is possible in questioning our beliefs, allowing for the possibility that
this demon could even deceive us about the most basic mathematical truths. Only by taking scepticism to this absolute extreme
can Descartes legitimately claim to have found something absolutely beyond question. However, Descartes does not go to the
absolute extreme. This is because Descartes’ argument still relies
on the laws of logic. Without such laws, the cogito itself is immediately undermined. To be deceived about the proposition ‘I
exist’ requires a contradiction; but without the logical law of
non-contradiction, that would be unproblematic. In fact, without the law of non-contradiction, we could agree with Descartes’
argument all the way through to its conclusion, and yet prefer
the opposite conclusion.
My aim here is to point out that any attempt to undertake
Descartes’ project – to discover what we can know to be certain
and free from all presuppositions – is bound to either fail or show
that we know nothing. We can go absolutely nowhere in argument without the laws of logic. However, these laws cannot themselves be proven, for we would have to presuppose them in order
to construct an argument for them! Therefore we can know absolutely nothing independently of absolutely everything else. This failure is crucial for Descartes’ project, because his concern was that
unless the foundations of his thought were certain and without
presuppositions, everything he built upon them ‘‘could only be
highly doubtful’’ (p.13). So to question everything but the laws of
logic undermines his project not at the last hurdle, but at the first.
Perhaps, however, there is still something we can take from
his argument. Even though his Meditations fail to fulfil his stated
aim of questioning everything, one might argue that Descartes
covertly settled for a more modest project – one of questioning
everything it’s practically possible to question. If he had questioned
absolutely everything, including logic, his Meditations would
not merely have failed to uncover any certain truths, but would
have ended up as a collection of gibberish. It may be that
Descartes decided, or simply assumed, therefore, to presuppose
the bare minimum, the laws of logic, in order to uncover those
propositions which are most certain – that is, which are as certain as the laws of logic themselves. The cogito is not, therefore, the
one absolutely certain proposition; but it is the one other proposition which is as certain as the laws of logic.
Strangely, this perhaps improves our prospects in terms of
grounding our propositions. Descartes’ attempts indicate that
the cogito and the laws of logic alone may be insufficient to derive
any further certain propositions; but having accepted the lack of
absolute certainty in our propositions, perhaps we can at least
move forward by ranking the certainty of further propositions by
considering what else must be supposed for each to be true. The
cogito takes the highest rank, requiring only the laws of logic for
its truth. No doubt this ranking would be a long and complicated
task; but it holds far more promise than attempting to derive a
full body of knowledge from the single premise that one exists.
Descartes’ goal was to purge our minds of all except the absolutely certain. A hierarchy of beliefs would fall short of this, but
would show us the lines of reasoning and implicit affirmations
behind our beliefs, thereby informing rather than removing the
decision process. It seems that this is the most we can take from
Descartes’ cogito.
© NOAH HARRIS 2023
Noah Harris is a Sheffield-based recent philosophy graduate of
Keele University.
Descartes
Descartes & Stupidity
Trevor Pateman asks: stupidity – essence, or accident?
René Descartes opens his Discourse on the Method of
Rightly Directing One’s Reason and of Seeking Truth in
the Sciences (1637) with a breathtakingly other-worldly
statement:
“Good sense is, of all things among men, the most equally distributed...”
No doubt many of the original readers back in 1637 were
relieved when he immediately went on to undercut that wild
claim with what appears to be some very this-worldly irony:
“...for everyone thinks himself so abundantly provided with it, that
those even who are the most difficult to satisfy in everything else,
do not usually desire a larger measure of this quality than they already
possess.”
The reader now put into a good mood, Descartes promptly
doubles down on his original claim:
“And in this it is not likely that all are mistaken; the conviction is
rather to be held as testifying that the power of judging aright and
of distinguishing truth from error, which is properly what is called
good sense or reason, is by nature equal in all men.”
Can he really be serious? It seems so, for he immediately
continues:
“The diversity of our opinions, consequently, does not arise from some
being endowed with a larger share of reason than others, but solely
from this, that we conduct our thoughts along different ways, and do
not fix our attention on the same objects. For to be possessed of a vigorous mind is not enough; the prime requisite is rightly to apply it.”
After a diversion, to which I will return, he concludes his
opening remarks by re-stating for the third time (this guy is
nothing if not persistent) his initial claim:
“For as to the reason or sense, inasmuch as it is that alone which constitutes us men, and distinguishes us from the brutes, I am disposed
to believe that it is to be found complete in each individual; and on
this point to adopt the common opinion of philosophers, who say
that the difference of greater and less holds only among the accidents,
and not among the forms or natures of individuals of the same species.”
This, one might say, is an honourable claim, asserting a kind
of human equality. But having shut the front door on discrimination by implying that all people have basically the same rational nature, does he let discrimination in through the backdoor
of ‘accidents’? Here we must return to the argument:
“For myself, I have never fancied my mind to be in any respect more
perfect than those of the generality; on the contrary, I have often
wished that I were equal to some others in promptitude of thought,
or in clearness and distinctness of imagination, or in fullness and
readiness of memory. And besides these, I know of no other qualities that contribute to the perfection of the mind.”
Well, this is either honest modesty or false modesty, but the
main point is that Descartes allows that there are differences in
human reasonableness after all – but only among the ‘accidents
of mind’, of which he identifies three: quickness of thought, imagination conceived as a kind of clear-sightedness, and ability to
remember. But the essential nature of mind remains the same for
all of us. Let’s now consider whether all this is actually the case.
Call & Response
My guess is that the species to which we belong was into name-calling long before philosophers distinguished between essential natures and accidents. And I also guess that those who practice the habit understand perfectly well that the name-calling
most likely to hurt and humiliate (for what else is name-calling
for?) would try to find an essence in the individual which could
be extracted and flung back at them as something which, ideally, they would despise but could do nothing about. Some of
those supposed ‘essences’ have surely had very long lives – so
much so that they come with an exclamation mark already
attached. Stupid! and Ugly! are prominent among them. Others
are probably very local (Ginger!), and some seem to be entirely
modern inventions, like the -phobe!
In the very long and rather inglorious history of name-calling I further imagine that parents have a lot to answer for, for
the way they’ve often named and shamed their own children.
And after schools were invented, teachers sometimes took sadistic delight in inventing fresh names to hurl, along with the chalk,
leaving their target pupils red-faced and tearful.
When irritated teachers or parents name-call children, one
fairly predictable consequence is that children name-call each
other. The playground has rarely shown humanity at its best,
but it probably reached peak ‘Red in Tooth and Claw’ in nineteenth century Britain – an inference I draw from the fact that
the saying “Sticks and stones may break my bones, but words will never hurt me” originated in late nineteenth century Britain.
But I doubt this little incantation was ever very successful in
warding off hurt.
Many people do have inhibitions about name-calling, though
probably not as a result of reading Descartes’ intellectual egalitarianism. Rather, it would seem to be part of common courtesy or decency, or even some moral obligation. Such people
would never call a person stupid – though they might well say to them, “That was a stupid thing to do.” The latter points to an accident, not an essence, and so to something remediable – to such an extent that the addressee may well assent to the proposition by saying, “I know, I wasn’t really thinking.”
This all sounds very much like one up for Descartes.
And yet… If someone keeps on doing stupid things, doesn’t
that in the end add up to their actually being stupid? Could any
scholastic talk of essences and accidents really put a block on
that conclusion?
Perhaps not. But if stupidity is no more than a sum of stupid
actions, the sting is less sharp than that aimed for by essentialism. It is more in sorrow than in anger that after repeated experiences of stupid behaviour we conclude, reluctantly, that someone just is rather stupid. At the same time, the way we are thinking about things acknowledges that there is no absolute reason
why the person should not act differently next time – though
that logical possibility opens only a small window of opportunity, for the future is indeed often like the past. Nonetheless,
in conceding the existence of that window of hope for intelligent actions, we also create a space for hope in the person who
for the present has been judged and found wanting.
To focus on accidents rather than essences of stupidity –
assessing actions one by one, even if we do sometimes allow
them to add up – is charitable in the way it interprets other
people and their actions. It allows the slate to be periodically
wiped clean, and each day treated as a new day, in which any
one of us might present a different self to the one we presented yesterday, proving that we’re not so stupid after all.
Hope is offered by such modest charity. But as for the belief
that there are no essences which would fatally undermine all
such hope – well, that may be a matter of faith… and as
another French writer of the seventeenth century, Blaise
Pascal observed, “il faut s’abêtir” (“it’s necessary to make ourselves like the brutes”).
I would prefer a less defeatist attitude, less liable to misuse
by the other side. Yet for name-calling essentialists – who say
you are this or that – it is a matter of faith, both before and after
the trial, that all cases of stupidity are hopeless. To prevent a
logjam, the trials start with the verdict, and victims are despatched
expeditiously to exile from the society of the elect, to the dunce’s
corner; or, in our latest perfection of possible fates, to the pillories of Twitter, where incandescent faith essentialises across
a billion screens. Such judges are simply not interested in hope
and charity. And that’s what makes them so frightening.
“When I rail at my manservant I do so sincerely with all my mind:
my curses are real not feigned. But once I cease to fume, if he needs
help from me I am glad to help him: I turn over the page. When I
call him a dolt or a calf I have no intention of stitching such labels
on him forever: nor do I believe I am contradicting myself when I
later call him an honest fellow. No one characteristic clasps us purely
and universally in its embrace.”
Michel de Montaigne, How We Weep And Laugh At The Same Thing
(M.A. Screech’s translation, 2015).
© TREVOR PATEMAN 2023
Trevor Pateman studied with Richard Wollheim and Roland
Barthes. Their influence can still be found in his books Materials and
Medium: an Aesthetics (2016), and Nabokov’s Dream (2021).
[Image: Descartes’ Surreal Ideas by Paul Gregory, 2023]
How Descartes Inspired Science
Kanan Purkayastha has both general and special theories about
how the master rationalist inspired modern empirical science.
René Descartes (1596-1650) is one of the great thinkers
in the history of humanity. Most of his work focused
on philosophy, mathematics, and science, but he also
dedicated many of his writings to physiology, and
adopted a ‘mechanical’ vision of the world.
Descartes’ Discourse on Method (1637) is a good starting place
for any discussion about how scientific thinking and methods
developed. In this book, Descartes tells us how reason goes
about its successful pursuit of truth in any area accessible to
the human mind. He states that “these long chains of reasoning,
quite simple and easy, which geometers are accustomed to using
to teach their most difficult demonstrations, had given me cause
to imagine that everything which can be encompassed by man’s
knowledge is linked in the same way” (p.41). From this,
Descartes outlines the basic steps of his method. These can be
summarised as:
(a) Do not accept anything just because it has been pre-established, if it is not evident to reason;
(b) Divide each problem into as many parts as necessary to
make the reasoning simple and transparent;
(c) Think in an orderly way, beginning with the simplest ideas,
steadily climbing up to the more complex ones;
(d) Take as many steps as needed, to verify that no logical step
is omitted.
In his book Descartes and the Possibility of Science (1989), Peter
Schouls suggests that “The Discourse… presents a functional definition of reason, where I use ‘reason’ in its broadest sense as the
human ability to intuit, deduce and in the context of these processes to make proper use of imagination, sensation and memory”
(p.1). In fact, Descartes proposed a method based on methodical
doubt, analysis, synthesis and verification. These ideas contributed
to what became the scientific method, and hence to the development of modern science. For instance, contemporary scientists
use experimentation as a means to confirm or falsify whatever
hypothesis is under consideration; and for Descartes, as Schouls
observes, “experimentation… provides evidence for the correctness or incorrectness of a judgement made about a scientific statement or a body of such statements, namely, a judgement as to
whether it is an account of the world in which we live” (p.88).
Werner Heisenberg, of Uncertainty Principle fame, mentioned that Descartes’ most important ideas for the development of scientific thinking are contained in his Discourse on
Method. In his book Physics and Philosophy (1958), Heisenberg
writes that Descartes “on the basis of doubt and logical reasoning tries to find a completely new and as he thinks solid ground
for a philosophical system” (p.40). One of the beauties of science now is its encouragement to doubt everything – which is
a contribution historically due to Descartes.
But now I’d like to briefly look at some of the ways in which
Descartes has influenced or foreshadowed modern science more
specifically.
Sensing The Environment
One of the deepest mysteries is the question of how we sense our
environment. Descartes envisioned threads connecting different
parts of the skin with the brain, and so imagined how heat against
the skin sends mechanical signals to the brain, such that, for
instance a foot touching an open flame would send a signal to the
brain in the form of pain. He set out this idea around 1632 in his book L’Homme, or Treatise on Man (published only posthumously), where Descartes wrote, “if fire A is
near foot B, the particles of this fire (which move very quickly, as
you know) have force enough to displace the area of skin that they
touch; and thus pulling the little thread C C, which you see to be
attached there, they simultaneously open the entrance to the pore
(or conduit) where this thread terminates (in the brain): just as,
pulling on one end of a cord, one simultaneously rings a bell which
hangs at the opposite end” (p.34-35, translated from the French
by Thomas Steele Hall). As Hall mentions, “Descartes does not
use the term ‘stimulus’ nor, in our sense, the term ‘response’. He
usually employs the word ‘action’, to denote the elicitive events
as well as the consequent events in the nerves.” This sort of language fits well with Descartes’ mechanical model of the world.
[Image: Descartes’ early model of nerve stimulation, from his Treatise on Man (1632)]
But specifically, his nerve conduction model about how we sense our environment put us on the right path, and has inspired much
Nobel Prize-winning research. Indeed, Joseph Erlanger and Herbert Gasser received the Nobel Prize in Physiology or Medicine
in 1944 for their discovery of specialized sensory neurons that
register changes in our environment. And in 2021, the Nobel
Prize in Physiology or Medicine was awarded to David Julius and
Ardem Patapoutian for their discoveries concerning nerve receptors for temperature and touch. Specifically, Julius and Patapoutian discovered how both temperature and mechanical touch
stimuli are converted into electrical impulses in the nervous
system. In their press release concerning this work, the Nobel
committee cited how the work of René Descartes influenced these
two Nobel Laureates.
The Anatomo-Clinical Method
The ‘anatomo-clinical method’ provides a systematic approach
to the classification of neurological diseases based on inference
from clinical signs, and especially from anatomical lesions (gaps
or cuts) in the nervous system or the brain. The method was
first developed by Jean-Martin Charcot, a clinical neurologist
of the nineteenth century, who based his own research on it.
In fact, it is a two-part methodology. The first step of this
method involves the careful documentation of clinical signs, such
as abnormal behaviour or loss of specific aspects of perception
or cognition – such as the ability to see in colour, or to recognise faces, and so on. The second step involves an autopsy examination of the brain and spinal cord. Combining the clinical and
anatomical data, Charcot was able to suggest concrete clinical–
anatomical (specifically neurological) correlations. This method
has helped us define the brain tracts and nuclei responsible for
both normal and abnormal psychological signs, and was pivotal
to a new classification of neurological diseases based on anatomy.
Now the anatomo-clinical method was strongly influenced
by Cartesian thinking, and I want to use it as a prime example
of how Descartes’ method works in a scientific setting. In the
Discourse on Method, Descartes proposed reaching truth by not
accepting any preconceptions that could not be demonstrated
as true. Similarly, the anatomo-clinical method aims to reach a
diagnosis without accepting conjectural elements that could not
be scientifically demonstrated. In the Discourse, Descartes also
proposed breaking each problem down into as many simple parts
as necessary. Similarly, the anatomo-clinical method reaches its
diagnosis by analysing the simplest symptoms and signs in each
patient. Descartes proposed reconstructing the elements of an
argument in an orderly way, from the simplest to the most complex. Similarly, in the anatomo-clinical method, the analysis is
carried out gradually, starting from the simplest observable
behaviours in order to investigate the complex structure of the
nervous system. Indeed, I can see little difference between the
Discourse’s approach to reaching the truth and the anatomo-clinical method for achieving disease diagnosis.
Several scientists received Nobel Prizes using the anatomo-clinical method. For example, in 1932 Sir Charles Sherrington
was awarded the Nobel Prize jointly with Edgar Adrian for their
discoveries on neuronal functions. This was to be the second
Nobel Prize awarded in the field of the neurosciences, after the
one shared in 1906 by the great Spanish anatomist Santiago
Ramon y Cajal, the discoverer of the neuron, and the Italian Camillo Golgi, the inventor of the silver staining technique for looking at neurons under a microscope.
Philosophical Haiku
Thomas Carlyle
(1795–1881)
Thund’rous Trumpets sound
Heralds of Doom’s own Carriage.
Beware Mammon’s Snare!
Thomas Carlyle was born in the Scottish hamlet of Ecclefechan. I tell
you that not because it’s important but because I like the name. Ecclefechan. Marvellous!
His family were strict Calvinists. Everyone expected that young Tom would
become a preacher when he came of age, on account of his being so opinionated and given to lecturing others on what they should be doing. But
preaching can only reach the people who bother to turn up on a Sunday,
and Tom wanted to tell far more people what to do, so he became a writer.
He expounded on the ills of the world, lamented the condition of the poor,
and constantly prophesied the collapse of society in a revolutionary conflagration brought on by its obsession with money. The ‘Cash Payment’, he
wrote, had become ‘the sole nexus between man and man’, while power
had been usurped by economists – those masters of the ‘dismal science’.
However, Carlyle was also a stupendously excellent historian, and his
The French Revolution: A History (1837) was a best-seller. He poured every
ounce of his creative energy into this magnificent work. Seeking comment
on his efforts from one of the other great minds of the time, John Stuart Mill,
Carlyle sent him his precious creation. Unfortunately, a maid, not being one
for reading such hifalutin material, thought it would make the perfect firestarter. Which it did. So, gritting his teeth and muttering obscenities, which
no Sassenach would have understood anyway given his thick Scots accent,
Carlyle wrote it all out again. As a Calvinist, he could probably see this as
just one more of God’s trials, and thus feel himself blessed.
Carlyle’s writing was like nothing anyone had ever seen; it combined a
preacher’s hellfire and brimstone grandiosity with the exhortatory and
admonitory effervescence of a prophet. He made up words, used others
he’d dug up from ancient sources, italicised wildly, and capitalised indulgently. Read! ye Sinners, the Works of this Great Man, and be Schooled!
© TERENCE GREEN 2023
Terence Green is a writer, historian, and lecturer who lives in
Eastbourne, New Zealand.
The Divisibility of Atoms
One of the most important properties of atoms in the atomist
tradition is their indivisibility; indeed, a-tom is Greek for in-divisible. However, Descartes argued for the divisibility of atoms. This
appeared in his Principles of Philosophy (published in 1644, later
translated from the Latin by John Veitch), where Descartes wrote,
“We likewise discover that there cannot exist any atoms or parts
of matter that are of their own nature indivisible. For however
small we suppose these parts to be, yet because they are necessarily extended, we are always able in thought to divide any one
of them into two or more smaller parts and may accordingly admit
their divisibility.” In his book Descartes’ Metaphysical Physics (1992),
Daniel Garber suggests concerning this that “Descartes has no
arguments intended directly to show that all bodies are naturally
divisible. But he does think that in certain circumstances, at least,
bodies are actually divided ad infinitum, or, as he might put it, ad
indefinitum.” In other words, atoms may not be actually infinitely
divided, but they can be divided indefinitely.
In fact, we now know that protons, electrons, and neutrons
are subatomic particles that constitute atoms, and that some of
these are made up of quarks (and so on, possibly). As a result,
an atom can be divided. Ernest Rutherford discovered the
proton, the first particle to be found in the nucleus of the atom.
James Chadwick discovered the neutron, another particle in the
nucleus. Murray Gell-Mann then proposed the existence of
quarks, the particles that themselves make up protons and neutrons. They all earned Nobel Prizes in Physics. Perhaps to some
extent their discoveries were inspired by the idea about the divisibility of atoms put forward by Descartes in 1644.
Motion
In the Principles of Philosophy, Descartes mentioned that “motion, in the ordinary sense of the term, is nothing more than the action by which a body passes from one place to another.” But in contrast to this simple definition, a fuller sense of motion defined by Descartes is as follows:
“In order to give it a determinate nature, that it is the transporting of one part of matter or of one body from the vicinity of those bodies that are in immediate contact with it, or which we regard as at rest, to the vicinity of other bodies.”
(Principles of Philosophy)
In defining ‘motion’ in this way, Descartes promoted the
idea of space as understood through the change of ‘vicinity’ to
other bodies. Of course, Albert Einstein developed the reigning theories of relativity, which also centrally relate motion to
frames of reference, or as we might say, to the surrounding
environment. And in 1975, the Nobel Prize in Physics was
awarded to Professors Aage Bohr, Ben Mottelson, and James
Rainwater, for their discovery of the connection between collective (whole body) motion and particle motion in atomic
nuclei, and the development of the theory of the structure of
the atomic nucleus based on this connection. But Descartes
highlighted the importance of relative motion for physics centuries before all this.
Conclusions
In his book Dreams of a Final Theory (1992), the American physicist Steven Weinberg wrote, “the mechanical philosophy of
Descartes had a powerful influence on Newton, not because it
was right but because it provided an example of the sort of
mechanical theory that could make sense out of nature” (p.134).
Perhaps then we might even venture, no Descartes, no
Newton?
Descartes himself thought that philosophy ought to provide
the foundations of knowledge. This would require it to be prior
to science. He wrote:
“Throughout my writings I have made it clear that my method imitates that of the architect. When an architect wants to build a house
which is stable on ground where there is a sandy topsoil over underlying rock, or clay, or some other firm base, he begins by digging
out a set of trenches from which he removes the sand, and anything
resting on or mixed in with the sand, so that he can lay his foundations on firm soil. In the same way, I began by taking everything that
was doubtful and throwing it out, like sand; and then, when I noticed
that it is impossible to doubt that a doubting or thinking substance
exists, I took this as the bedrock on which I could lay the foundations of my philosophy”
(The Philosophical Writings of Descartes, ed. John Cottingham, p.366, 1984.)
If science needs foundations, then it is philosophy which
provides them. René Descartes’ philosophy demonstrated this
several centuries ago.
© DR KANAN PURKAYASTHA 2023
Kanan Purkayastha holds a PhD in Theoretical and Atmospheric
Chemistry from the University of Bristol. He writes newspaper
columns about science and philosophy. You can learn more about his
work at kanan-purkayastha.org.uk
Brief Lives
Chamfort
(1740-1794)
Martin Jenkins looks at the life of a
wry observer of society, cut short by
that society’s revolutionary turmoil.
[Image: Chamfort, 1767, by Anne Louis Girodet de Roucy Trioson]
Montaigne invented the essay. Another Frenchman, La Rochefoucauld, invented the maxim: the presentation of profound ideas in short self-standing statements. The German philosopher
Friedrich Nietzsche used that form extensively; but most practitioners seem to have been French. The French call them moralistes, which, like its English equivalent, implies commentators on
the moeurs, or customs, of society.
What is the attraction of expressing ideas as aphorisms, usually without argument, inviting the reader to accept them as
‘self-evident truths’? One suggestion is:
“Maxims, axioms, are, like summaries, the work of people of spirit [or
of ‘wit’], who, it seems, have laboured for the benefit of mediocre or
lazy minds. The lazy reader takes on a maxim, which releases him from
having himself to make the observations which led the author of the
maxim to the conclusion which he shares with the reader. The lazy and
mediocre person thinks himself released from going any further…”
The author of this typically cynical observation was the last of
the great French moralistes, Chamfort.
Who was Chamfort? Nobody knows for certain. His birth
was registered in 1740 at Clermont-Ferrand as Sébastien-Roch
Nicolas, son of François Nicolas, grocer, and his wife, Thérèse
Croiset. There were, however, suggestions that he was really the
illegitimate child of a churchman; and his closest friend
Guinguené (who probably did not know, either) referred to ‘the
secret of his birth’. In any case he was brought up as the child of
the Nicolases. Thérèse doted on him, and he remained devoted
to her until her death.
In 1745 he secured a scholarship to the Collège des Grassins in
Paris. He soon distinguished himself by his brilliance and won
numerous prizes. But his volatile temperament rebelled against the
monotony of school life, and in 1755 he was expelled – ironically,
before he could do his year studying philosophy. He fled, with a
fellow pupil, to Le Havre, with a view to embarking for America.
However, he calmed down and returned to the Collège, where the
indulgent principal took him back. Chamfort never forgot the principal’s kindness: he grew up to be an arch-cynic with a gift for loyalty to decent human beings – when he could find them.
Fools & Lovers
The young Sébastien-Roch Nicolas was marked out for the
priesthood; but his profession of faith was too lax even for mid-eighteenth century France (it was suggested that the next archbishop of Paris should at least believe in God). “I will never be a
priest,” he said: “I have too much love for relaxation, philosophy,
women, honour, true glory; and too little for controversy,
hypocrisy, honours, and money.”
For a few years he eked out a living as a tutor and freelance
writer. He was not above helping priests to write their sermons.
He was also a notorious libertin, in the senses both of a freethinker and a ladies’ man. And he started to call himself ‘Nicolas
Chamfort’ – or more precisely ‘de Chamfort’, asserting a claim to
nobility which he could not have sustained under scrutiny.
Chamfort was less than impressed by polite society under
Louis XV and Louis XVI. The words sot and sottise (‘fool’ and
‘stupidity’) recur often in his maxims as descriptions of behaviour
in the high circles in which he moved. He regarded his contemporaries as slaves to the expectations of society:
“Almost all men are slaves, for the reason that the Spartans gave for the
enslavement of the Persians: not knowing how to pronounce the syllable no. Knowing how to say that word and to live alone are the only
two ways to preserve one’s freedom and one’s character.”
He was also annoyed by the artificiality of behaviour which
characterised the age, and looked back with nostalgia to the age
of Louis XIV:
“In looking over the memoirs and monuments of the age of Louis XIV,
one finds, even in the bad company of those days, something lacking in
today’s good company… Monsieur de Lassay, a gentle man, but with
a great knowledge of society, said that you needed to swallow a toad
every morning to avoid finding anything more disgusting the rest of
the day, if you had to spend it in company.”
Still, he was obliged to live in that society, and to live by his
wits (or wit: he could make a living by amusing them). A chapter
of his Maxims is entitled ‘On the taste for retirement and dignity
of character’. There he wrote, “A philosopher regards what is
called a place in society as the Tartars regarded towns, that is, as a
prison… The man without a place is the only free man, provided
that he has a competence, or at least that he has no need of human
company.” But Chamfort never had the money to retire, and in
any case needed human company. He continued to write and to
publish; his play La Jeune Indienne (1764) was panned by the critics, but the famous Voltaire predicted, “You will go far.”
Chamfort was often ill, largely because of a sexually transmitted disease; but in 1776 a play of his pleased the French court, and
the Prince de Condé offered him a job with a pension of 2,000
livres and an apartment in the Palais Royal. Chamfort refused,
preferring his independence. By now he was collecting the notes
which would constitute his legacy.
In 1781 he was, on his fourth attempt, elected to the Académie
Française. He later served as its Secretary. However, the most
important event of his life occurred the previous year. At the
salon of Madame Agasse he met Anne-Marie Buffon, the widow
of a doctor, beautiful and witty, and twelve years his senior. He
fell in love at once – the cynic who wrote, “Love, as it exists in
society, is only the exchange of two fancies and the contact of two
epidermises.” But Chamfort, on a personal level, was never as
cynical as his public persona. He believed in emotion and wrote,
“Les passions font vivre l’homme, la sagesse le fait seulement durer” –
“The passions make men live, wisdom only makes them endure.”
“It’s the only time in my life,” he wrote of Anne-Marie, “that
I count for anything.” They set up home together near to Etampes. But there is no justice in the world: Anne-Marie died only a
couple of years later, on August 28th 1783. As if that were not
enough, Thérèse Croiset died the following year, aged 84.
Chamfort had lost the only two women he really cared about. It
may have been this double whammy which led Chamfort to violate his principles and accept the patronage of the Comte de Vaudreuil, which lasted until 1789.
Sharp Revolutionary Wit
Come the Revolution of that year, Chamfort took the side of his
old friend Mirabeau, but went further. He came out as a republican and rejoiced in the decree which suppressed royal pensions
(including his own). He recovered the energy which he had lost;
his friend Guinguené wrote, “Throughout 1789 the Revolution
was his only thought and the triumphs of the popular party his
only enjoyments.”
Chamfort was not a good party man, however. He joined the
Jacobin club and became its Secretary; but he maintained good
relations with the opposition Girondins. He was appointed
administrator of the Bibliothèque Nationale, but could not disguise his horror at the bloody reigns of Marat and Robespierre,
with their enthusiasm for Mme Guillotine. Eventually he formulated two epigrams at the expense of the Jacobins: “Sois mon
frère ou je te tue” (“Be my brother or I kill you”), and “They talk
about the brotherhood of Eteocles and Polyneices” (the mutually murderous sons of Oedipus).
Tyrants can live with being hated, but they hate to be laughed
at. Chamfort’s days were numbered. He once quipped:
“Why that phrase,” said Miss…, aged twelve, “‘learn to die?’ I see that
everybody succeeds perfectly well the first time.”
Alas, not Chamfort. Anticipating arrest, on September 10th
1793 he shot himself in the head, yet survived. Then he tried to
cut his throat, but he botched that as well. He lingered on, in
great pain, until May 13th 1794.
After his death Guinguené published his complete works in
four volumes. The fourth – the only one still in print – consisted
of his notes. It was entitled Produits de la civilisation perfectionnée.
Products of Perfected Civilisation is divided into two parts: ‘Maxims
and Thoughts’ and ‘Characters and Anecdotes’. The first contains Chamfort’s philosophical reflections and his observations
on society, the second reflects his interest in the absurdities of
human behaviour.
Chamfort was a great cynic; in France, in the second half of
the eighteenth century, it was perhaps difficult to be anything
else. But he also appreciated the quirkiness of humanity and
liked a good story. The following passages reveal something of
the essence of Chamfort:
“An American, seeing six Englishmen separated from their troop, had
the amazing boldness to attack them, wounding two of them, disarming the others, and bringing them to general Washington. The general asked how he had come to make himself master over six men. ‘As
soon as I saw them,’ he said, ‘I attacked them and surrounded them’.”
“It is well known how familiar the King of Prussia allowed some of his
companions to be. General Quintus-Icilius made the most free with
this. The King, before the battle of Rossbach, said to him that if he lost
he would go to Venice and live by practising medicine. Quintus
replied: ‘Always the killer!’”
© MARTIN JENKINS 2023
The late Martin Jenkins was a Quaker, a retired community
worker, and a frequent contributor to Philosophy Now.
Letters
When inspiration strikes, don’t bottle it up.
Email me at rick.lewis@philosophynow.org
Keep them short and keep them coming!
Logic & Fallacies
DEAR EDITOR: Thanks to Kevin Currie-Knight for a well-written, well-argued
piece in 154, ‘Humans, the Believing
Animals’... which confirms my own bias.
Less flippantly, I think he is correct to
emphasise the dominant role of belief in
our sense of self, and our comfort in that.
Beliefs and attitudes can be changed,
and in practice do continuously change
throughout our lives. There may be a
sense of permanence about them at any
instant of time, even though our beliefs
and attitudes are continually adjusting in
some, usually pretty small, degree as a
result of engagement with other people.
Mostly, we are not especially aware of
such changes, considering them to be
clarifications, or refinements of our pre-existing belief. This avoids the emotional discomfort associated with being
wrong. But, on occasion, we recognise
explicitly that we are wrong. It is then
that significant change in belief occurs.
Sometimes, the associated discomfort
leads to rejection, and can result in
catastrophic actions. On other occasions,
there is a substantial voluntary change in
belief or attitude. This was at one time
described as a ‘gestalt shift’, and it can
be accompanied by a sense of euphoria.
Some Christians, for example, have used
the term being Born Again. In my own
case, as a teenager, deciding I had no
need for the ‘God hypothesis’, I felt a
weight being lifted from me.
In the course of our lives there are
occasions when we have to attempt to
change the beliefs or attitudes of others.
I was a head teacher of three different
secondary schools, and that entailed
seeking to bring about changes to the
cultures. Such periods are not easy
because beliefs or attitudes are part of
our identity, so the process of change
becomes emotionally charged. Nevertheless, changes can be made to the culture of an organisation – sometimes
demoralising, but, mostly, I would say,
for the better, for most of those involved.
As Prof Currie-Knight indicates, we have
to start the process of change close to
where people are in terms of belief or
attitude, and this can be a source of cognitive dissonance and emotional distress.
The instigator of the change has to be
prepared to accept that she or he will
have to change beliefs and attitudes, too.
However, resolution of the cognitive dissonance brings the change, and that
requires a degree of reason.
ALASDAIR MACDONALD, GLASGOW
Off the Rails Again
DEAR EDITOR: In offering ‘A Solution to
the Trolley Problem’ (PN 154), Rick
Coste says that “a society that would
allow, or even condone, the sacrifice of
one life for the many as an integral component of its value system would not
survive for long.” In fact, I can think of
no-one who does not live in such a society. Every nation has armed forces. Any
member of the armed services places
their own life at risk with the aim of protecting others. Let alone condoned, this
is organised by the state! Further, many
countries still have conscription, under
which citizens are required, without
choice, to place themselves in the situation Mr Coste considers immoral.
MICHAEL SHAW, HUDDERSFIELD
DEAR EDITOR: On Rick Coste’s examination of the Trolley Problem in Issue 154:
it seems to me that we do live in a society
that condones the sacrifice of one life for
the many, and that it has survived for a
very long time. In war, it may be tactically
necessary for an officer to allow a platoon
to be sacrificed to delay the enemy
advance. In peace, the health service will
never have the resources to save and prolong all the lives it otherwise might. The
medical profession has daily to take decisions that involve the allocation of lifesaving equipment for the seriously ill.
Such decisions involve weighing up the
number of lives that might be saved and
the quality of life the surviving patients
and their dependents might enjoy. Deciding whether and how much to donate to a
charity, for the relief for earthquake victims, for instance, also places human lives
in the balance. For me, the potency of
the trolley problem is not just that it
presents us with a dilemma of the sort
that societies routinely confront and deal
with; it acts too as a reminder that
choosing to do nothing is also a decision
for which we are fully responsible. That
might sometimes be hard to cope with.
But cope we do, in all manner of ways.
COLIN STOTT, SOMERSET
DEAR EDITOR: I have to say the trolley
problem in Issue 154 is utter nonsense.
How can anyone make a rational decision with so little information? If the five
people who come into hospital requiring
organs are the inner circle of the Nazi
regime, would the decision whether to
give them someone else’s be different? If
the person you’re prepared to sacrifice
for them is going to save the world from
WW3, would the decision you make be
different? As well as the history and
character of the potential recipients, the
trolley proposition also ignores personal
bias or prejudice. There is simply not
enough information for a rational person
to make an informed decision. The
whole trolley prospect is utter tosh.
SIMON HOWELLS
DEAR EDITOR: I was interested to read ‘A
Solution to The Trolley Problem’ in
Issue 154. I once presented the problem
in a similar way at an evening session of a
Philosophy In Pubs conference. PIP fosters community philosophy and was
started on Merseyside over a decade ago.
The responses of the group, who were
seated around tables in Liverpool’s Adelphi Hotel, were interesting.
As Rick Coste indicated, an alternative
set-up to the problem involves a hospital
waiting room in which there are six
patients: two in desperate need of a kidney transplant, one a heart transplant,
one a lung transplant, and one a liver
transplant. One, however, is there for a
sprained ankle. Should the ankle sufferer
be sacrificed to provide organs for those
in desperate need of the life-saving transplants? A large majority on most tables
voted No and were shocked when on one
table most voted Yes. (Around that table
were seated those most interested in philosophy.) For the standard Trolley Problem, utilitarian rules seemed the best
option: save the many at the expense of
the one – although some thought that to
take responsibility for sacrificing lives
rather than leaving it to fate was playing
God and unacceptable. However, this case
made me realise viscerally that I could
never do that. The table that thought it
was acceptable to sacrifice the one patient
for the five had followed reason and not
human feeling. They did not imagine what
the real scenario would feel like.
If you want to be guided by philosophical ideas on ethics other than utilitarianism, then Kant is your man. He
argued that we should always treat others
as ends and not merely as means, ie, do
not merely use other people. In both
organ and trolley cases, a fellow human
being is being used as a thing. I think I’ll
stick to Kant’s Categorical Imperative
and try not to play God by judging the
worth of other human beings. Real life
will never be as simple as a thought
experiment anyway. This particular exercise also showed that preceding questions
and the order and wording of questions
greatly influence responses.
CORRIE L. LOWRY, MERSEYSIDE
Theodicy Continues
DEAR EDITOR: I enjoyed Muhammad
Mohsin Masood’s article on theodicy or
the problem of evil in PN 154. What a
complex subject it is, and not all religious
traditions have such an emphasis on evil –
Zen Buddhists would probably dispute
moral dualism. In fact, some traditions
would argue that we best take reality as it
is and avoid putting too much weight on
categories like ‘good’ and ‘evil’. Even if we
believe in God, God might stand outside
of those categories. Islam seems to have a
similar tradition, expressed in the term
‘Allah knows best’. Popular Christianity
has likewise taught that ‘Everything happens for a reason’. The question of good
and evil then becomes a meaningless one:
life is as it is, the universe is as it is, God is
as God is. The evolutionary aspect evoked
by Hick and Iqbal is compatible with this
outlook. We are flung into a mystery, but
we move towards something greater. But
mystery is the reality we exist in: it could
be no other way.
Life effuses tragedy, and sometimes the
tragic appears under ghastly forms, but
this is only one aspect of life. Comedy is
another. The question of good and evil
seems to touch upon deeper issues of life
and death, being and non-being. Discussion on those points takes us into the abyss
of theology, and thence towards a mysticism that goes beyond words. At such
wordless mystical heights, all religions
seem to converge into one, and this might
resonate with Hick’s overall work. If we
go but a little further, the culmination of
this mysticism reveals the meaninglessness of
the word ‘God’, since we’ve already gone
beyond language. This idea is echoed in
the Kabbalah concept of the Ein Sof, and
in The Cloud of Unknowing, whose author
is appropriately anonymous.
ANTHONY A. MACISAAC
INSTITUT CATHOLIQUE DE PARIS
DEAR EDITOR: Martin Jenkins in Issue
154 is uncharitable to atheist theodicy
when presenting their argument as ‘The
world is not organised as I would have
organised it if I were God.’ It’s not that
we expect our personal moral beliefs to be
held and practiced by God; but rather, we
argue that universal moral principles are
violated if there exists a God who fails to
prevent the deaths of millions of people
by natural disasters. We would consider it
a grave moral error if a human had foreknowledge of an earthquake but abstained
from warning anyone in advance; so we
need a tremendously good reason not to
hold God to the same standard – or else
we would need to abandon the universally-held moral belief that we ought to
prevent people from dying unnecessarily.
DINO MEHIĆ, CALIFORNIA
DEAR EDITOR: In Philosophy Now 154,
Martin Jenkins rejects the atheist’s challenge to belief in a God based on the
problem of evil: the problem of why an
all-powerful, good God would permit
evil. Jenkins introduces the possibility
discussed by Hume in his Dialogues Concerning Natural Religion, that evil exists because
God is insane or incompetent or malicious, or because there are one or more
gods undermining him. Clearly, if any of
these were true, then this might explain
the existence of evil. But this isn’t really
relevant to the traditional problem of
evil, which is why does an all-powerful,
good monotheistic-type God permit
evil? Nothing that Jenkins says undermines the force of that argument. The
Christian or the Muslim, for instance,
has no interest in defending a God who
is insane, incompetent, malicious, or
outnumbered. The Christian’s and the
Muslim’s belief is in an all-powerful,
good God. This is what the atheist
attacks with the problem of evil, and
nothing Jenkins says is relevant to that.
ROBERT GRIFFITHS, GODALMING
Rolling Back the Digital Tide
DEAR EDITOR: In Issue 154, in ‘Virtual
Dissolution’, Maryna Lazareva talks
about how we are looking in the wrong
place for meaning if we’re looking online.
This is so true: people are using social
media to try to give themselves meaning,
causing them to become addicted to it.
Social media is one of mankind’s worst
creations. It is highly destructive in many
aspects of life. Younger people jumping
onto social media generally end up fretting that other people are living better
lives than them. This, in my honest opinion, has led to an increase in mental
health issues, especially anxiety.
I personally gave up all social media
in 2021, and it was one of the best decisions I ever made. Trust someone who
gave it up, you’ll enjoy life more once
you uninstall those apps! Most of the
people you’re trying to impress don’t
care what you ate for dinner or where
you went on vacation. Taking pictures of
your travels is great for your memories,
but you don’t have to share them. Let’s go
back to a world where we don’t know
what our neighbors had for dinner
unless they tell us.
CHASE OJERS, RENO, NEVADA
Philosophy Now Not Infallible
DEAR EDITOR: Peter Mullen’s ‘God Bless
Karl Marx’ in Issue 154 was considerably
below the standard one expects in Philosophy Now. Mullen twice quotes Marx as
saying ‘property is theft’. It is, I think,
fairly well-known that this proposition was
in fact advanced by the French anarchist
Pierre-Joseph Proudhon, and the only
times the phrase appears in Marx’s writings are when he’s criticising Proudhon.
This is not a matter of any great erudition:
the Wikipedia article ‘Property is Theft!’
gives a perfectly adequate account of the
slogan. I would not expect to read in Philosophy Now that ‘I think therefore I am’
was said by John Locke; but Mullen’s misattribution is equally culpable.
Furthermore, the assorted biographical information is cited out of context to
misleadingly imply that Marx lived in
prosperity. As Francis Wheen’s biography (a well-documented study by a non-Marxist) shows, Marx lived in considerable poverty. He was moreover a political
refugee – an asylum seeker. At a time
when refugees are so widely under attack,
one might have hoped that Mullen, a
clergyman, would show Christian compassion rather than making cheap sneers.
Surely Philosophy Now should be offering an intelligent critical assessment of
Marx’s work rather than engaging in
infantile satire.
IAN BIRCHALL, LONDON
Knowing Foucault Himself
DEAR EDITOR: In her letter criticising
Foucault in PN 154, Mary Jane Streeton
misunderstands the point he’s making in
the passage she quotes from, which is
specifically about changes that took place
in thinking about sexuality in the Nineteenth Century. Homosexuality had been
condemned and forbidden for many centuries before this, but in the Nineteenth
Century there began to be a shift from
considering homosexual acts to a characterisation of individual people as ‘homosexuals’. Here we can find the pre-history
of contemporary identity politics (which
is the ‘reverse discourse’ referred to by
Foucault). ‘Homosexual’ begins to be
seen as what a person is, not just a sinful
act they might occasionally commit.
Foucault’s most central concern,
throughout his writings, is the history of
human conceptions of the self. It was in
the Nineteenth Century, he contends,
that sexuality began to be regarded as an
essential defining component of self. This
paved the way for today’s notion that we
each have a sexuality as a part of our identity – a very recent way of thinking.
Foucault lists ‘pederasty’ [sex with
boys] alongside ‘homosexuality’ and
‘inversion’ because in Nineteenth Century
writings these terms were treated as
almost synonymous. The most widely discussed (and condemned) form of homosexuality was the love of an older man for
an adolescent boy. It was this to which
Oscar Wilde referred as ‘the love that
dare not speak its name’. (It is instructive
that had Wilde been put on trial today, his
prison sentence would have been longer,
and the destruction of his reputation more
complete.) In the 2nd Volume of his History of Sexuality, Foucault discusses the
institutionalisation of pederasty in
ancient Athens, where it was considered
advantageous for a boy to have an older
lover who could introduce him to important and influential people in the city.
The extent of their physical relationship,
however, was hedged round with reservations and etiquette. There was neither
wholesale rejection, nor complete acceptance, of sexual acts between men and
boys. Foucault’s discussion specifically
shows how different ancient Greek ideas
about sexuality were from our own. His
purpose is not the assertion of moral relativism, but rather tracing the historical
mutations of human self-conception.
PETER BENSON, LONDON
Rubbishing Rubbish
DEAR EDITOR: Re ‘The Crumpled Paper
Hoax’, Issue 153, Turner Prize winner
and wit Martin Creed’s 1994 Work No. 88
consists of a sheet of plain A4 paper. The
recipient of the paper is instructed to
crumple it into a ball and place it on a
shelf, to begin its life as an artwork. The
recipient is by this act turned into the
agent of creation: is she thus the artist? If
the ball falls off the shelf, gets chucked in
the bin, is it no longer an artwork? At
what point in its fall did it stop being an
artwork and become a piece of rubbish?
STEPHANIE DOUET, ARTIST
Cicero Civilises Characteristically
DEAR EDITOR: I have had the privilege
of teaching Roman History as part of an
Ancient History A-Level. Here our main
topic is the breakdown of the late
Republic. I greatly enjoyed Hilarius Bogbinder’s Brief Life of Cicero in Issue 153
for this reason. On our course, Cicero is
considered the best character. We study
him as a participant in events, but we
also study his historical information. His
‘pro Sestio’ is an important legal defence
useful for his comments on the different
factions in the Roman Senate, seeming
akin to the modern two-party systems
seen in some Western democracies.
The article talked of Cicero’s recommendation of a mixed constitution, and
also of his time in Greece while writing.
I thought it would be interesting to com-
ment on Cicero’s Greek influences.
Plutarch was referenced – a Greek writer
whose Parallel Lives compares important
Roman and Greek figures for their
virtues. Another writer was Polybius,
who died late in the second century BC
and was another Greek writing for a
Roman audience. His theory of anacyclosis can be explained as follows. There are
three virtuous forms of government:
monarchy, aristocracy, democracy. Each
has a degenerative form into which it will
inevitably regress: tyranny, oligarchy,
ochlocracy (rule of the mob). Each
degenerative form will lead to the emergence of the next virtuous government:
tyranny to aristocracy, oligarchy to
democracy, ochlocracy to monarchy; and
so the cycle begins anew. The Roman
Republic’s mixed constitution as advocated by Cicero aimed to include all
three virtuous forms of government as a
check and balance on the powerful
potential of the other forms, thus preventing degenerative cycles and constant
revolutions or instability.
ROB HAVES, KENT
Pluralism vs French Universalism
DEAR EDITOR: According to Manon
Royet in Issue 153, French philosophy
has not changed that much and the
world has. The world has evolved and is
supposedly ‘weary’ of philosophical universalism. But since when has the discipline of philosophy in pursuit of truth
had to adapt to current world views? For
example, should the idea of Universal
Human Rights be now regarded as
merely an optional notion in a sea of pluralism? Or should philosophers, including some French, be admired for leading
us to a better, and more logical, conclusion about the ideal way to be?
MICHAEL HANLEY, MELBOURNE
DEAR EDITOR: Manon Royet, in Issue
153, blames contemporary French philosophy for its continued allegiance to
Enlightenment-style universalism, which,
she claims, ‘denies pluralism’ and leads to
exclusion and oppression. But doesn’t her
argument itself rest on a universal moral
principle – namely, that exclusion and
oppression are unjust? Quite a few worldviews are not grounded in a respect for
pluralism. Will Royet’s model society
exclude or include such worldviews?
DAVID LEVY
JOHN CABOT UNIVERSITY, ROME
The Art of Living
How To Live A Happy
Life, With Seneca
Massimo Pigliucci shares some Stoic standards.
What makes for a happy life?
That depends on what sort of
living organism one is. For a
cactus, a happy life consists in
access to the right set of soil and air nutrients,
plenty of light, and a bit of water. Oh, and
staying away from parasites and predators
(hence the spines). For a human being things
are a bit more complicated. Combined biology and culture make for an incredibly varied mix of wants, needs, hopes, and fears.
Nevertheless, the general principles are the
same, and so a list of ideas put forth by the
Stoic philosopher Lucius Annaeus Seneca (4
BCE-65 CE) is a good starting point for
reflection. Let’s take a look into that mirror.
The happiness list is found in Section 20
of his aptly titled On the Happy Life. I will
quote a few excerpts covering the principal
points. For instance: “I will look upon
death or upon a comedy with the same
expression of countenance; I will submit to
labors, however great they may be, supporting the strength of my body by that of
my mind.” Strength of mind is crucial in
life because it is our attitude toward things,
and especially towards setbacks, that determines how we’re going to react to them.
Wealth, or anything else the Stoics
classed as ‘externals’ (external to one’s own
mind), including health, reputation, and
career, are the sorts of thing it is nice to
have if we’re lucky, but which don’t define
who we are as human beings: “Whatever I may possess, I will neither hoard it greedily nor squander it recklessly. I will view all lands as though they belong to me, and my own as though they belonged to all humankind.” The point that we should not rely on or hoard things was stressed more recently by the psychologist Viktor Frankl in Man’s Search for Meaning (1946). The Stoics used the deliciously oxymoronic phrase ‘preferred indifferents’,
meaning things that we should prefer
because they make our lives more pleasant,
but that are indifferent to our moral character, which is our most precious possession. Indeed, the hallmark of Stoic wisdom
is precisely the ability to use externals correctly, which includes not being overly
attached to them. If you have spare funds,
think of ways to put them to good use –
where ‘good’ refers to helping others, not
to buying yourself the latest chariot (or
smartphone). If you don’t have any spare
funds, focus instead on what nobody can
take away from you: your good character.
Try out this one for size in our age of
social media: “I will do nothing because of
public opinion, but everything because of
conscience.” Yet the point is well taken.
While certainly our conscience ought to be
informed by open and honest dialogue with
our fellow human beings – on penalty of
falling into rationalizing, rather than rational, thinking – ultimately, our opinion is the
only thing that is truly ours and for which we
are completely responsible. Everything else
in our lives may be affected by fortune; but
the buck stops with us when it comes to our
considered judgments and intentions.
“In eating and drinking my object shall
be to quench the desires of Nature, not to
fill and empty my belly. I will be agreeable
with my friends, gentle and mild to my foes;
I will grant pardon before I am asked for it.”
Again here we see an attitude that runs
refreshingly against modern trends. Consumption – for instance of food and drinks
– is to be tailored to our needs, not pursued
to excess (at great expense, and even greater
environmental impact). Regarding how to
treat other people, for the Stoics there are
no true ‘foes’, only human beings who are
misguided in what they do. It is incumbent
on us, therefore, to be ‘gentle and mild’
with them. This doesn’t mean that we
should practice ‘doormatism’, so to speak –
but that we should, as Seneca puts it, be
ready to forgive even before we are asked.
The Stoics and their philosophical
cousins the Cynics were cosmopolitan,
meaning that they thought of all other
human beings as their brothers and sisters.
Even though we are naturally inclined to
be kind toward our family, friends, and
neighbors, reason tells us that the fact that
those specific people are so important to us
is an accident of fate, so we have no reason
to treat anyone else poorly just because
they happen to have been born in a different part of the world, or to look different
from us, or behave in ways unfamiliar to us.
We should always be ready to face
Nature’s ultimate test of character – our own
inevitable demise. Even death, as both the
Stoics and the Epicureans maintained, is not
to be feared, for the simple reason that where
it is we are not, and vice versa. Indeed, the
Stoics believed that in exceptional cases we
should be willing to hasten our own death;
for instance, when faced with a terminal disease we know will only bring pain and suffering to ourselves and our loved ones: “Whenever either Nature demands my breath
again, or reason bids me dismiss it, I will quit
this life, calling all to witness that I have loved
a good conscience, and good pursuits; that
no one’s freedom, my own least of all, has
been impaired through me.” The ability to
quit if ‘the room gets too smoky’, as the second-century Stoic Epictetus memorably put
it, is the ultimate source of freedom, because
if we decide to stay, this means we judge that
we can still do something worthwhile for
ourselves and others – which is the ultimate
mark of a happy life.
© PROF. MASSIMO PIGLIUCCI 2023
Massimo Pigliucci is the K.D. Irani Professor
of Philosophy at the City College of New York.
His books include How to Be a Stoic: Using
Ancient Philosophy to Live a Modern
Life (Basic Books) and The Quest for
Character: What the Story of Socrates
and Alcibiades Teaches Us about Our
Search for Good Leaders (Basic Books).
More by him at massimopigliucci.org
Books
Natasha Beranek sees transhumanism get an upgrade, and Paul J. D’Ambrosio looks at the sorts of successes to which failure can lead.

We Have Always Been Cyborgs
Stefan Lorenz Sorgner
‘TRANSHUMANISM’ IS A movement which says that we should enhance humanity with technology. Specifically, it
sees emerging genetic and cyborg technologies as a way to promote our collective
survival and individual flourishing. In the
latter context it is frequently associated with
notions of human perfectibility.
As a childhood gymnast, I became well
acquainted with perfection’s pursuit. The
pointed toe, the straight leg, the split leap
locked into a 180 degree angle – gymnastics
is an extended dance in distancing oneself
from the fallibility of flesh and bone.
This is not the sort of perfection that
most gymnasts are pursuing, though. At
least, not really. It is in fact an intensely
psychological endeavor. To be a gymnast is to
be engaged in an ongoing mastery of one’s
hopes and fears, one’s impulses and drives,
all while repeatedly being yanked back down
into a gritty sensorium of ripped blisters and
chalk dust. Gymnasts track their progress:
the acquisition of each new skill and the
triumph over every nagging fear is a sort of
‘upgrade’ in their development. Pushing
ever further against the laws of physics,
gymnasts say yes, again and again, to the pain
of everyday practice. Amor fati – love of one’s
fate – is the mantra these athletes murmur
as they chase fleeting moments of aerial freedom and corporeal power.
A gymnastics routine is ostensibly based
upon an established dynamic between an
athlete, a piece of equipment, and gravity,
with the vault (or balance beam, or uneven
bars) serving simply as a piece of technology
over which the gymnast temporarily exerts
control. Yet an apparatus is much more than
a means to a landing. When the 2020
Olympic gold medal gymnast Nina Derwael
explains that “the bars speak to me, and in
return I listen” (International Gymnast
Online, 10 March 2021), she’s describing a
relation in flux, ceaselessly becoming new in
the time and space between bars.
Gymnasts in flight crack open and make
visible our human-defined order of things.
The post-disciplinary anthropologist Tobias Rees is also speaking to such
moments of possibility when he defines
poetry as something that attempts to
‘‘capture that which cannot be captured’’
(After Ethnos, 2018, p.26). Forthcoming
innovations in AI, biology, and cyborgization are similarly unfolding outside our taken-for-granted understandings of humanity, nature, and technology. Microbiome research throws into question where
microorganisms end and the individual
person begins. Like a case of the ‘twisties’ –
a condition that renders a gymnast
completely disoriented in the air – the philosophical vertigo our inventions and scientific
discoveries induce can be debilitating, leaving us feeling rudderless, unsure of our identity and our relation to the world around us.
They also bring to the fore pragmatic challenges for how we may most authentically
and empathetically live amid these breakthroughs. For example, as gene technologies
continue to expand, what limits, if any,
should be placed upon parents who wish to
‘enhance’ their children’s intelligence or
athletic aptitude through genetic splicing?
Sorgner Steps Up
Stefan Lorenz Sorgner, a German philosopher based at John Cabot University in
Rome, has been engaged with these questions for the past decade, and We Have
Always Been Cyborgs: Digital Data, Gene Technologies, and an Ethics of Transhumanism
(2021) tackles their intricacies head on.
The title encapsulates Sorgner’s fundamental appeal to his readers. In order to most
smoothly navigate and benefit from the next
act of the Digital Age, when these technologies will begin to enter the human body, we
must realize that since the dawn of Homo sapiens we have been ‘steered organisms’ (a literal
translation of ‘cyborgs’) – that is, steering
ourselves through a series of ‘upgrades’,
including language acquisition, education,
and vaccination. Brain-computer interfaces
like Elon Musk’s Neuralink will simply be the
next step in this long history of cyborgization
– albeit one, Sorgner says, in which our
capacity to guide our own evolution has gone
exponential. But given the long history of
human modification by humans, and the
benefit this has yielded us, modes of thinking
that cast nature as ‘pure’ and ‘good’ while
dismissing technology as ‘artificial’ and ‘bad’
are not only flawed, they unnecessarily
hamper our potential to thrive.
A self-identified transhumanist, Sorgner presents a vision significantly influenced by the writings of Friedrich Nietzsche – a factor, he
admits, that makes his trek through this
philosophical minefield even more volatile,
given Nietzsche’s misappropriation by the
Third Reich and other eugenicists. As it
stands, transhumanists already tend to
receive one-sided portrayals in the media,
being cast, Sorgner says, as ‘‘cold-hearted,
blood-sucking, Silicon Valley billionaires.’’
Sorgner distances his vision of transhumanism from these more specious portrayals.
He also takes pains to differentiate his ‘weak’
transhumanist approach from those of other
notable transhumanists, such as Oxford’s Nick
Bostrom, who upholds a Renaissance ideal of
perfection that Sorgner finds implausible, as well as putting forward a ‘simulation’ argument [the idea that we are probably living
inside an alien computer simulation, Ed] which Sorgner regards as largely irrelevant. Rather than focusing on more fantastical forms of ‘silicon-based’ humanity, such as mind uploading, Sorgner’s concept of the good life is one that stresses a ‘carbon-based’ practical relevance and fosters a greater plurality of avenues towards human flourishing. Yet even as a ‘positive pessimist’, Sorgner aligns himself with most transhumanists’ optimism about technological innovation. Yes, our lives are full of struggle, with moments of pleasure – espressos, sexual euphoria, meeting work deadlines – that are too brief, and few and far between. Considering how much vaccination, antibiotics, and anaesthetics – all recent developments in the grand scheme of things – have improved our lives, however, we should be hopeful that newly emerging technologies will continue to alleviate our suffering.

Technology versus Freedom
How receptive one is to such proposals from Sorgner or other transhumanists is likely to be a reflection of how well one perceives liberalism to be currently working as a political model.
This cannot be overstated. In We Have Always Been Cyborgs, underlying each of Sorgner’s proposals about the enhancement of parenting, education, sexuality, or morphological modification and reproductive technologies, is his allegiance to negative freedom, that is, to freedom from control or coercion. For Sorgner this is a core Enlightenment value, which aims to safeguard personal freedom from outside interference. So Sorgner’s ‘ethical nihilism’ would not condemn a (grown-up) brother and sister who mutually consent to sexual relations. But it would strongly reject the actions of a pedophile, who in pursuing his or her idiosyncratic drives is harming others. Certainly this is a stark example. Anthropologists have long observed incest, of varying degrees of closeness, to be a universal cultural taboo. Yet Sorgner’s view of negative freedom would override this as an absolute taboo, rendering it paternalistic and violent.
With all of this freedom talk in mind, I wouldn’t be surprised if readers who are new to Sorgner spontaneously spit out their morning cortados when they arrive at Cyborgs’ section on digital privacy. I can already hear readers asking, “How is it not an utter contradiction for Sorgner to cherish a radically pluralistic concept of the good on the one hand, while on the other concluding that we must inevitably accept total surveillance via RFID chipping? How could we ever truly safeguard our freedom if there’s no longer any privacy – if our biometric data is constantly being entered into an ‘Internet of Bodily Things’ database, for example?”
Key to understanding Sorgner’s consent to the internet panopticon is his perspective that it isn’t freedom and privacy that are synonymous, but freedom and health. Most human beings identify an increase of our healthy years with living a better life. Aging, Sorgner writes, “is the worst mass murderer in the world.” So aging should be approached as a disease rather than as an inevitable process. And in order to prolong our ‘healthspan’ – the number of vital (not total) years we enjoy on this planet – we must take full advantage of the scientific knowledge and medical innovations that will come from the mass collection of a wide range of personal data. The maintenance of our health will inevitably be improved the more data is collected and analysed concerning correlations between aging, genes, and lifestyle choices. So unlike many speculative Silicon Valley transhumanists, who strive for ‘immortality’ via cryonics or mind-uploading, Sorgner’s pragmatism leads him to conclude that embracing Big Data offers our most realistic chance to promote the flourishing of all individuals.
On a geopolitical scale too, in order to keep pace technologically with (say) China, it will be necessary for us to develop a democratic method for collecting and using this data. Not only does the Chinese government already collect data in massive quantities, but its citizens are more amenable to it, given their relatively collectivist orientation towards subsuming the individual to the common good. The need to collect personal data en masse is a reluctant conclusion on Sorgner’s part: “I wish to stress very much that my analysis is not one about which I am happy, as I am aware that the risks and dangers for a liberal system are enormous.” Yet we’re only two clicks away from an abundance of social media evidence which indicates that we don’t cherish our privacy as much as we like to think we do. So to me, Sorgner is fairly convincing in his conclusion that what we actually fear about total surveillance is not a loss of privacy per se, but that we might be unfairly sanctioned for our genetics, disease states, personal preferences, or behaviors. All the more reason, he argues, that the norm of negative freedom against control or coercion must be culturally promoted and legally protected. However, “Norms and values are just as much fictions as money,” writes Sorgner – so it is left up to us to continue to strongly believe in the value of radical plurality, even as our digital privacy fades.
Nietzsche versus Tradition
If the most promising means for promoting
human flourishing are to be found within the
carbon-based realm, how can we most
responsibly engage with them?
Here the Nietzschean flavor of Sorgner’s
transhumanism fully emerges. Nietzsche,
like Darwin, saw humans as being different
from non-human animals only in degree
rather than in kind. Despite this, a metaphysical understanding of persons as part
material body and part immaterial mind or
soul has persisted in our social and legal
systems, perhaps because they were heavily
influenced by Judeo-Christian ethics. In
Sorgner’s view, this is an immoral state of
affairs, because it impinges on the degree to
which we can make autonomous choices
about our use of biotechnologies.
He directs his discussion towards the
decision-making of parents or parents-to-be. For example, moral decisions about the use of pre-implantation genetic diagnosis (PGD) and in-vitro fertilisation should be left to the potential parents, not the state. Where there
is disagreement on reproductive technologies, Sorgner writes, “The opinion ought to
be legalized in favor of more freedom.”
Bioethical debates about personhood and
what counts as ‘harm’ remain strongly
enmeshed in Judeo-Christian metaphysics
and Kantian ethics, adding kindling to what
many are likely to view as Sorgner’s inflammatory conclusion that the ‘person-object
dichotomy’ simply cannot be upheld. Unlike
Jürgen Habermas – one of the most prominent German philosophers of the late twentieth century – Sorgner does not regard
parents who would genetically enhance their
children as threatening their children’s
autonomy or using them as mere means to
the parents’ end, in Kantian terms. As in
many other places throughout Cyborgs,
Sorgner supports his perspective by drawing
an analogy between child-rearing and
genetic enhancement. “In both cases,” he
writes, “decisions are being made by parents
concerning the development of their child, at
a stage where the child cannot yet decide…
Parents usually love their children and want
them to have the best possible starting points
in life. Of course, parental decisions do not
always produce good results. But, as a rule of
thumb, parental influence most often leads
to better outcomes than those from chance
or without any guidance.” Sorgner further
counters Habermas’s other concerns about genetic modification – foremost among them its seeming irreversibility – by referring to
cutting-edge research in the fields of epigenetics, siRNA therapy, and gene silencing.
In the wrap-up to Cyborgs, Sorgner
attempts to resolve any lingering doubts that
readers may have about the details of his transhumanist alignment with Nietzsche. There
are certain aspects of Nietzsche’s thinking
that Sorgner regards as immoral, such as his
vision of a two-class ‘master and slave’ society. Such a hierarchical structure is deeply
illiberal, and Sorgner cherishes liberalism
above all else. Yet Sorgner finds Nietzsche’s
basic impulses of self-discovery, truthfulness,
and the will-to-power indispensable to a radically pluralistic understanding of perfection:
“Someone lives a good life by following their
very own idiosyncratic psychophysiological
demands, their very own desires, passions
and fantasies… [but] to become aware of
one’s very own drive is much more difficult
than is often believed.” Sorgner also offers
Nietzsche’s theory of Eternal Recurrence as providing a possible framework for life’s meaning: it is, he says, the avenue by which we
can experience “that special moment, which
is worth all the suffering [we] have to endure,
as [we] know that this one special moment
will recur again and again.” This is meaning
founded on the Nietzschean idea that your
life has moments worth repeating forever
which make all of the suffering surrounding
them worthwhile.
Conclusions
Transhumanism is not a religion, despite the
focus that many of its retinue place on ethereal
aims like immortality, disembodied
consciousness, and perfection. Nor can transhumanism be accurately glossed as an ‘ideology’, given its diversity of opinions on possible outcomes. We Have Always Been Cyborgs
by contrast renders transhumanism much
more ‘terrestrial’, demonstrating that it is
better understood as a philosophicallyinformed positive attitude towards the use of
technologies. It is interested in providing
innovative options for our existence rather
than introducing dystopian constraints on
our individuality. The book encourages us to
realize that before we can avail ourselves of
emerging technologies, we need to do the
work of questioning the boundaries between
humanity, nature, and technology. If done
well, this will inevitably lead to even more
provocative questions: What do these categories even mean? How and why did we come
to take them for granted? And what makes us
feel that we need to cling to them so tightly?
All around us possibilities are opening up
for brand new ways of thinking and radically
different options for living and being in the
world. The sudden realisation of these
possibilities can create the poetry of unanticipated liberation, like gymnasts in flight.
And that, too, is perfection.
© DR NATASHA BERANEK 2023
Natasha Beranek is an anthropologist. She is a
graduate of the Transformations of the
Human school in Berkeley, California and now
edits the ‘Anthropology and Transhumanism’
book series at Trivent Publishing.
A version of this review has been published in
The Annals of the University of Bucharest,
Philosophy Series.
• We Have Always Been Cyborgs, by Stefan Lorenz
Sorgner, Bristol University Press, 2021, £37 pb, 240
pages. ISBN: 1529219205
In Praise of Failure
Costica Bradatan
SCHOLARS ARE UNDER
pressure to write books for
general audiences: so-called ‘trade books’. These are books that
not only can be read by non-academics, but
that people would actually want to read. It’s
a hard sell – in every sense.
Many who write such books court popularity by appealing to a sort of authenticity-based ‘celebrate who you are’. Not that
scholars should not be in the business of selfhelp – indeed, it might just be one of the
humanities’ most important functions – but
kitsch stories about ‘My experience with my
Grandma one summer’, clichés like ‘How I
learned to appreciate small things’, and
anodyne tips like ‘It’s really up to you what
you want to take from such-and-such philosophy’ do not really challenge readers. Such
unabashed subservience to the reader’s
existing opinions and feelings has nothing to
do with philosophy. Or to write in that very
fashion, I would say: ‘‘Those phrases make
me sick!’’ By contrast, In Praise of Failure:
Four Lessons in Humility (2023) is a book that
nearly anyone can read, and yet it will spark
reflection in even the most seasoned professor. In this highly readable and thought-provoking book, Costica Bradatan challenges
readers theoretically, but also, and perhaps
more importantly, challenges them on a
more practical level. And the topic is as
unusual as it is timely.
In our times of multiple crises, and especially for us who live in cultures where success
is equated with dignity, failure is
something we all experience in penetrating
ways. We viscerally felt the pangs of successive lapses in the normal patterns of life
through the impact of Covid-19. Other largescale challenges, including those to democracy, to world order, and to the very survival
of the planet, can cause us to fear failure across
its full spectrum – from the harsh everydayness of our own blunders and defeats, to the
broadest ‘failure of the human race’. In Praise
of Failure speaks to our crises, to failure, and
to everyone’s ability to take a step back and
think – or to humbly think again.
Bradatan places this work squarely in the
tradition of ‘self-help’, and claims to offer
‘failure-based therapy’ (p.2): Failure always
humbles, and “In Praise of Failure is not about
failure for its own sake… but about the
humility that failure engenders, and the
healing process that it triggers” (p.5). In this
context, Bradatan offers four lessons about
physical, political, social, and biological failure, to trigger reflection and ultimately healing in his readers.
Failure is understood as a breakdown of
predictability and control. It is experienced
as ‘‘disconnection, disruption, or discomfort’’ in the face of our expectations. When
normal or expected patterns break down,
and we then feel empty, this is failure. Everyone experiences failure; it is part of each
person’s past and future, and probably their
present as well. However, the author is not
interested in giving specific advice for dealing with general ways in which people fail;
nor does he suggest particular strategies for
developing humility. Instead In Praise of
Failure traces the lives of those who lived
with failure, and who learned from it.
Today, ‘learning’ often refers to the
acquisition of rules, theorems, principles, or
facts. Even when we think of, for example,
learning to drive, we imagine the driving
instructor telling a young driver what to do
and testing their knowledge. Yet experience
is paramount, and more time on the road is
the best way to become a better driver. But
before an adolescent ever sits in the driver’s
seat, there is much they already know about
driving. Having been a passenger for countless car rides, the youngster already knows a
lot about the rules of the road, including the
unwritten ones.
Bradatan tackles the various arenas of failure through his well-honed biographical
sketching skills. Drawing on diverse sources,
from the French mystic Simone Weil, to
Mahatma Gandhi, to the nihilist Romanian
writer-philosopher E. M. Cioran, to Japan’s
famous writer and failed coup instigator
Yukio Mishima, and even the suicidal Stoic
Seneca, Bradatan’s lessons are transmitted
through learning about the lives of those
who lived well in failure.
Not a nihilist himself, Bradatan draws out messages that come, as he puts it, from someone who doesn’t always know how to smile. “We can,” he writes, “use the experience of failure to extricate ourselves from the
entanglement of existence (physical, political, social, biological), with a view to gaining
a better understanding of it, and in the hope
of leading a more enlightened and wiser life”
(p.232).
In Praise of Failure does not promise
potential lemonade from the bitterness.
Sometimes failure is just failure, nothing
more. We do not always ‘turn a new leaf’,
‘understand that everything happens for a
reason’, or extract that ever-evanescent
‘silver lining’. We can learn that some
demands are unreasonable, that our world
and ourselves are ridden with cracks, and
that there is a fundamental precariousness in
all expectations of success.
As the back cover blurb goes, Bradatan
‘‘breaches the boundaries between argument and storytelling, scholarship and spiritual quest.’’ In Praise of Failure will spark
critical reflection in any reader, and provide
lessons that are not merely parochial. Most
refreshingly, it is not a mere celebration of
the self and all the things one already thinks
and feels. Rather, it challenges the assumption of success in a most cynical, upsetting,
and (possibly) redemptive way: you will fail,
and that might just be that.
Despite lavishing praise on failures of all
types, Bradatan fails miserably at failing. In
Praise of Failure is, by any account, a great
success.
© PAUL J. D’AMBROSIO 2023
Paul J. D’Ambrosio is Professor of Chinese
Philosophy at East China Normal University,
Shanghai.
• In Praise of Failure: Four Lessons in Humility, by
Costica Bradatan, Harvard University Press, 2023,
272pp, £21.95 hb, ISBN: 0674970470
Film
Thomas R. Morgan ponders the phantom pain and pleasure perspective.

Despite the numerous interpretations that have been made
about its themes, the original
Ghostbusters (1984, directed by
Ivan Reitman) contains a glaring ethical
problem which has been largely overlooked in the many years since it was first
released.
The problem is addressed very briefly
within the film itself, when, during a
montage depicting the main characters’
growing success, a copy of The Atlantic
magazine flashes up with the headline ‘Do
ghosts have civil rights?’. In response to the
question, Matthew Phelan wrote a spoof
article of that name, which focuses mainly
on the legality or otherwise of ghostbusting
practices. It also briefly compares ghosts
with illegal immigrants. However, there
remains a general lack of exploration of the
moral question of ghost-catching within the
Ghostbusters universe.
This must in part be due to the simplicity
and strength of the film’s prevailing
gimmick – a group of maverick scientists
catching and containing ghosts. Another
difficulty with tackling the ethics of ghostbusting stems from the challenge of defining ‘ghosts’ themselves, not as metaphors,
but as actual and existing entities. Much that
has been written about these insubstantial
beings has not really helped when trying to
consider their ethical status.
I want to simplify the question of whether
or not ghosts should be busted by considering how Australian moral philosopher Peter
Singer’s early system of ethics might be
applied, given that Singer is well-known for
taking into account non-human life in his
particular form of utilitarianism.
Plot
In the Ghostbusters’ universe, ghosts quickly
become an undoubted, objective reality,
when the three main protagonists are called
to investigate strange, possibly supernatural
phenomena at the New York Public Library.
After observing precariously stacked books,
and slime (later termed ‘ectoplasm’) coating drawers and shelves, Peter Venkman (Bill
Murray), Ray Stantz (Dan Aykroyd), and
Egon Spengler (Harold Ramis) soon come
into contact with their first free-floating
spook, which acts aggressively when they
clumsily attempt to make contact.
This encounter opens up a window of
opportunity for our three unorthodox
researchers. Ethical considerations aside,
they take the risk of putting all of their
(mainly Ray’s) resources into setting up
business as ‘the Ghostbusters’.
It is unclear what the primary mission of
this organization is: is it to continue their
academic aspirations, of bringing scientific
rigor to the field of the paranormal? Or is it
to rid the public of troublesome supernatural
entities – collectively called ‘ghosts’?
Winston Zeddemore, who joins the Ghostbusters later in the story, is the only member
who is clearly honest and transparent about
his motives, prioritising the ‘paycheck’ over his beliefs regarding the paranormal. There
are clearly differences of interest between
the individual Ghostbusters, but it would
seem that the latter is their primary motivation, given not only their moniker but also that their financial need is better met
through the removal of ghosts than through
studying them.
After a successful period of catching
ghosts (including the ghost later known as
‘Slimer’), and of storing them in a laser-based containment facility in the basement
of their headquarters/living accommodation (a reconstituted fire station), the
Ghostbusters are challenged officially.
Walter Peck (William Atherton)
from the Environmental Protection Agency
questions their methods, particularly the
containment facility. But this questioning is
on the basis of their creating an environmental hazard, not for ignoring the interests
of the ghosts they have ensnared.
Peck’s distrust of the Ghostbusters
quickly becomes personal, and he attains
legal permission to shut down the ghost
containment unit. As a result its unwilling
inhabitants are freed and the Busters themselves subsequently arrested. The ensuing
chaos caused by the release of the ghosts is
magnified by an apocalyptic threat from an
ancient deity called Gozer, who employs two lesser gods, Zuul and Vinz, to possess two
neighbours, Dana Barrett (Sigourney
Weaver) and Louis Tully (Rick Moranis).
Gozer uses their apartment building as a
portal into this world and as the epicenter
for his destructive power. The Mayor
realises the value of the incarcerated Ghostbusters and they are duly released. Eventually they use their ghostbusting proton packs to close the portal, preventing Gozer
from causing further destruction.
The Dubious Ethics of Ghostbusting
I don’t want to try to give a comprehensive
definition of what a ghost is. Notable
philosophers, such as William James and
Henry Sidgwick, have researched and
remained open-minded about the possibility of their existence. C.E.M. Joad entertained the idea that ghosts equate to the
mental self of a deceased individual (possibly like a bundle of perceptions and ideas),
assuming at least a temporary post-mortem
existence. Dan Aykroyd, who co-wrote the
film, is a believer himself, and has described
ghosts as a kind of ‘physical residue’.
However, even if we accept such a reductionist definition, it wouldn’t necessarily
remove the moral implications of catching
and trapping ghosts. After all, defining
living humans as purely material beings, as
many now do, doesn’t automatically
condone their mistreatment.
Importantly, in the Ghostbusters universe,
there are seven classifications of ghosts, taken
from the fictional Tobin’s Spirit Guide. The only class that has no mental awareness at all is Class 1. This is significant when one considers that even a sentient (that is, limited to sensations, no language) level of consciousness would indicate that the being has some interests. If a being can experience sensations, it can perhaps experience ones it would rather not... And, moreover, it can be argued that if a being has interests, it becomes subject to moral consideration. This is the view that Peter Singer applied to non-human animals: they’re sentient, so they have moral rights, since they can experience pleasure or pain, and have an interest in experiencing the one and in not experiencing the other.
Originally laid out in his 1975 book Animal Liberation, Singer’s moral philosophy builds on the classical utilitarianism established by the English philosopher Jeremy Bentham (1748-1832). Concerned exclusively with the ‘two sovereign masters’ of human concern, pleasure and pain, Bentham’s utilitarianism can be encapsulated in the imperative that we should ‘always aim to bring about the greatest happiness (or pleasure) for the greatest number of people’ in our moral conduct. If the outcome of our action shows that human happiness has been maximised by it, then we have acted correctly. Here actions generally considered to be wrong according to other rule-based ethical systems can still be perfectly acceptable, since it is the ‘hedonic’ consequence (the balance of pleasure and pain produced) which is the measure of rightness or wrongness – and this is often thought to be maximised by the imposition of ethical rules.
But Singer questioned why morality, and, more specifically, utilitarian calculations, should only consider human happiness. He argued that to actively disregard the happiness, or more broadly the interests, of other sentient creatures equates to a form of discrimination – or, as Singer termed it, to ‘speciesism’. (In preemptive agreement with this sentiment, Bentham himself once suggested that not considering the interests of all beings could one day be seen as ‘the hand of tyranny’.)
On this more inclusive basis, Singer recommends a radical form of utilitarianism wherein an agent must weigh up the interests of all conscious beings affected in the making of a decision. This could even mean that in some situations the interests of non-human lifeforms are prioritised – particularly when the pain or pleasure produced is undeniably greater than that of the humans in question. Controversially, Singer argued that if you were faced with a choice of whose life to save, the interests of an adult monkey could outweigh those of a human foetus, for example. Apart from possibly having a sharper awareness of pleasure and pain, the adult monkey arguably has more invested in life, is possibly part of a primitive community, and has greater environmental awareness than the foetus.
Given then that Singer’s criterion for
moral consideration is sentience and the
interests which arise from it, why not
include ghosts? If we do, the question
becomes: does the pain of the ghosts busted outweigh the human pleasure or happiness resulting from their being
busted? If it does, we ought not to bust
them, on this moral scheme.
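Set out schematically – this formalisation is mine, not Singer’s or the film’s – the decision rule amounts to a comparison of aggregate interests:
\[ \text{bust only if} \quad \sum_{h \in H} \Delta P_h \;>\; \sum_{g \in G} \Delta S_g \]
where \(H\) is the set of humans affected, \(G\) the set of sentient ghosts affected, \(\Delta P_h\) the pleasure or relief human \(h\) gains from the busting, and \(\Delta S_g\) the suffering it inflicts on ghost \(g\). Everything turns, of course, on whether ghostly suffering can be estimated at all.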
In terms of the acknowledgement,
respect, and protection of any conscious
being, utilitarianism is a bit of a double-edged sword. It doesn’t assign rights arbitrarily based on species or kind; but by this
same token, it therefore also removes the
automatic taboos around some medical practices, such as embryo research and abortion.
On the other hand, it potentially expands our
consideration to all lifeforms and their
needs. This would imply that, like any other
sentient beings, ghosts shouldn’t automatically be protected (or left to haunt hotels);
but equally, that they shouldn’t be automatically dismissed as morally worthless
‘vermin’ to be trapped and exterminated.
Under this form of utilitarianism, only if
human interests can be proven to outweigh
the ghost’s should the apparition be subject
to removal (but not necessarily to incarceration). Set in this context, the question
arises: does the human desire to have a tidy,
well-organized library outweigh a ghost’s
enjoyment of book stacking and sliming?
© THOMAS R. MORGAN 2023
Thomas R. Morgan is a teacher of Religious
Studies, Philosophy and Ethics at Westcliff
High, England.
Tallis in Wonderland
Reflections on Taking My Blood Pressure
Raymond Tallis finds himself within himself.

Those who meet your columnist face
to face rather than through the mediation of the printed word may note
some changes in his appearance. The
informal carbon dating of a glance suggests
that he will in the not too distant future start
losing his battle with the most universal of the
habits of the material world, of which the body
he has taken for granted is a small sample. The
habit in question is captured in the Second
Law of Thermodynamics, which says that the
disorder of a closed system tends to increase.
In other words, things tend to get messier over
time. Appearances notwithstanding, your
columnist’s body is a highly ordered system.
It is therefore a miracle that it has maintained
that order for so long, transforming relatively
disordered stuff such as fish and chips or
inhaled air into elements that are themselves
individual miracles of order – cells, tissues, and
organs. These work collectively to mop up the
side-products of the processes of maintaining
stability and material constancy in the face of
the slings and arrows of everyday fortune.
This ‘tidying up as I go along’ is necessary to
deal with the consequences of going along
whilst maintaining the very possibility of a life
shaped at least in part by his ideas of what it
should be. Thus, astonishingly, he has
remained a going concern, and one that looks
(very) roughly the same from day to day. Even
people who last saw him decades ago recognise him – perhaps after a moment or two of
hesitation – when they bump into him in the
street.
Alas, the exquisitely constructed network
of dynamic equilibria that make up the
human body is sustained only for the briefest
of blinks in the eye of eternity. Things start
to drift beyond the point of correction. But
RT will not give up without a fight. He does
whatever he can to keep himself at a distance
from the formless chaos of entropy. The
fight takes many forms – none stranger than
the action flagged up in the title of this piece:
measuring his blood pressure with a view to
keeping it within certain limits. But before
we focus on this way of curating one’s own
body, we need to step back a little.
Matter Matters
One way we can look at our bodies is as
portions of matter. When we do so, something we otherwise take for granted becomes
visible and surprising: that unlike 99.99999
etc % of the other items in the universe,
these pieces of matter matter to themselves.
We still have no idea how this could have
come about, given the standard story,
according to which the universe has for
most of its history consisted of insentient
stuff. Pieces of matter mattering to themselves – things that mind what happens to
the matter of which they are composed and
which surrounds them inasmuch as it
impinges on themselves – have been very
recent arrivals in the order of things. We
can be confident that it took matter billions
of years to start bothering about itself. It
took even more time for it to matter in the
way your columnist matters to himself; for
stuff to give rise to persons like him fretting
over their own future.
At what point bits of matter came to
suffer or enjoy their own states and became
preoccupied with seeking opportunities for
and avoiding threats to themselves is not
clear. Despite what animists and panpsychists claim, we may be sure that planets,
rocks, rivers, or bacteria do not worry about
themselves. But what makes it even more
puzzling is that most parts of the human
body (kidneys, lymph glands, heart…) are
not concerned about themselves either,
even though they add up to something that
does. How and why, in the rather
monotonous story of the unfolding of a
universe indifferent to itself, fretting entered
the picture is not clear. It is not even clear
whether the question is one that is best
approached by empirical enquiry or by
abstract reflection. In any case, we have no
idea how the nonconscious mechanical
evolution of material objects – of stuff and
energy and forces – generated entities that
awaken out of dead mechanism, at least
temporarily – such that little parishes of
deliberate doings, sparks of agency, awoke
in the boundless desert of mere happenings.
What is particularly striking is the sheer
variety and complexity of the ways that fretting has developed in us humans. Heidegger’s Dasein – ‘that being whose being is an
issue for itself’ – has so many issues, and so
many ways of pursuing them. Which is my
cue to examine the extraordinary business of
taking one’s own blood pressure. This
action is not only a manifestation of a piece
of matter mattering to itself, but is remote
from those main manifestations of ‘mattering to oneself’ – the four Fs of fleeing, feeding, fighting, and sexual behaviour – that are
seen elsewhere in the kingdom of living
entities that matter to themselves.
The Pressure’s On
The first thing to note about taking one’s
blood pressure is that it is a reminder of the
many complex modes in which we engage
with our own bodies. While we are identified with our flesh and bones, we are at the
same time distanced from them in numerous ways – ways that are multiplied and
expanded by the collective consciousness of
our fellow humans and the discourse
through which collective humanity endeavours to apprehend itself. This distance
allows us to approach our own bodies as
instances of a class of entities; a class that
sometimes extends beyond the species to
which we belong to encompass other kinds
of living creature – as when we see ourselves
as organisms.
At any rate, making my body the object
of measurements performed by that body is
an odd way of being an embodied subject.
And recording our own blood pressure is
even more strange than counting our
fingers, measuring our step length, or
taking our pulse. Even these simple actions
can get very complicated when we use our
fingers to count, walk to pace out a distance,
or use our own pulse to time an event.
Think of the iconic moment, crucial to the
scientific revolution, when, using his pulse
as a chronometer, Galileo timed the movement of a pendulum – according to legend
a chandelier swinging in a cathedral – and
thus made a discovery fundamental to
mechanical science and to the development
of accurate, reliable clocks – in short, to the
emergence of the modern world. Taking
our blood pressure, however, is yet more
exotic, exemplifying how we inspect our
own bodies by the torchlight of so many
intersecting bodies of knowledge.
The blood pressure story goes back to
Stephen Hales, a clergyman and polymathic
scientist, who in 1733 first measured blood
pressure, in a horse’s neck, by inserting fine
tubes into its arteries and recording the
height to which the column of blood rose
(an odd use of a horse, admittedly). It was a
long time before his egregious and seemingly idle curiosity bore medical fruit. But in
the following centuries it became clear that
elevated blood pressure was associated with
changes in arteries that could cause strokes
and heart attacks and other medical disasters. Subsequently there have been countless vast population studies of the influence
of hypertension on cardiovascular health.
The most notable followed three generations of the people of Framingham, USA. It
generated over three thousand peer-reviewed scientific papers, and led to the
guidelines for the control of blood pressure.
This is the great hinterland behind my
strange interaction with my body.
I apply the sphygmomanometer cuff and
press a button on the machine. The cuff
tightens its grip on my left arm as if I am
about to be taken aside for further questioning. Since I am determined that anxiety
about the result shall not be a confounding
factor elevating that result, I mobilise the
magic of modern technology: I have Bill
Evans’ ‘Peace Piece’ playing gently in the
background (notwithstanding that the
infinitely wise and sensitive fingers of that
genius of the piano have long since lost touch
with their owner and dissolved in the rain).
The cuff continues inflating itself, puffing
and panting against the resistance of my arm,
finds what it is looking for, and allows itself
to deflate. Figures fill the little screen on the
machine. I note the systolic pressure (when
my heart is contracting), and the diastolic
pressure (when my heart is dilating).
These numbers, capturing certain facts
about the body attached to my name,
remind me that what goes on inside my skin
is utterly impersonal, though its consequences are quite the reverse. It would be an
understatement to say that the ‘It’ and the ‘I’
of RT are joined at the hip, even though
there is a different story attached to each.
The stories told in RT’s biography, his CV,
his toings and froings as he lives out his
‘being an issue for himself’, are remote from
the fluctuations and corrections that constitute the life of the polyphasic system in
dynamic equilibrium that is RT’s body,
upon which the stories depend. Sometimes
there are dramatic intersections of the I and
the It – as when his blood sugar or blood
pressure falls so much that he loses
consciousness. Otherwise, although the life
of the one depends on the life of the other,
RT’s body is largely a Dark Continent
hidden from the person whose body it is.
Measuring my blood pressure is a
response to a rather abstract fear, though
nothing could be more overwhelmingly
concrete than the realisation of that fear. The
transformation of my life, of my sense of who
and what I am, even of my very capacity for
‘I amming’ the It of my body, resulting from
a bite being taken out of my brain by a stroke,
or the ontological rebadging from conscious
subject to material object delivered by a fatal
heart attack, could not be more serious. To
populate that future with quantified possibilities drawn from the studies on the good
people of Framingham, and the many
millions of other individuals unknown to me
who have participated in clinical trials, is one
of the most striking manifestations of the
distinctive mattering-to-itself of an embodied subject who tends his body with an eye to
its future and his own.
The story does not end with me recording my blood pressure. As Marx might have
said, it is not enough to interpret one’s blood
pressure; the point is to change it. So I add
my findings to the chart provided by my GP
so that they can inform our discussion as to
what to do next. As a compliant subject, I will
pop the pills. They will vanish into the intimate unknown, where, hopefully, they will
head off any cardiovascular catastrophes that
will so damage the material object that is my
body that it will no longer matter to itself,
and the flow of columns will cease.
© PROF. RAYMOND TALLIS 2023
Raymond Tallis’s latest book, Freedom: An
Impossible Reality, is out now.
Subscribe to Philosophy Now
philosophynow.org/shop
6 IDEA-PACKED ISSUES FOR £22.50/US$37
By subscribing to the print edition of Philosophy Now you
can save up to 42% off the newsstand price, have your
copies delivered to your door and enjoy unrestricted
access to the thousands of past articles in our website
archive! Subscribe today at philosophynow.org/shop or fill
out and return one of the coupons below.
Subscription inquiries: subscriptions@philosophynow.org or phone 01959 534171
To tell us about a change of address, please email addresschange@philosophynow.org
UK / Rest of World
Name _____________________________________________
Address ___________________________________________
__________________________________________________
Email (for password) ________________________________
Subscribe to Philosophy Now for 6 issues (1 year)
Subscribe to Philosophy Now for 12 issues (2 years)
UK: £22.50/£38 (GBP)
Canada: $45/$81 (CAD)
New Zealand: $52/$92 (NZD)
Australia: $52/$92 (AUD)
Europe: 29€/51€ (EUR)
Rest of World: £28/£49 (GBP)
Starting with Issue 155/Issue 156 (delete as appropriate)
Buy The Ultimate Guide to Ethics
Buy back issues in PRINT (please specify issue numbers)
______________________________________
Buy _____ binders to hold back issues (insert quantity)
TOTAL AMOUNT PAYABLE: _________
Please make your cheque payable to ‘Philosophy Now’ or fill in your Mastercard /Visa /Maestro card details below:
Card no.
Expiry__________________ Security Code___________
Name on card _______________________________________________
and send it to: Philosophy Now Subscriptions, Kelvin House, Grays Road, Westerham, Kent TN16 2JB, United Kingdom

United States
Name _____________________________________________
Address ___________________________________________
__________________________________________________
Email (for password) ________________________________
Subscribe to Philosophy Now for 6 issues ($37)
Subscribe to Philosophy Now for 12 issues ($68)
Starting with Issue #155/Issue #156 (delete as appropriate)
Buy The Ultimate Guide to Ethics
Buy back issues in PRINT (please specify issue numbers)
______________________________________
Buy _____ binders to hold back issues (insert quantity)
TOTAL AMOUNT PAYABLE: $_________
Please make your check payable to ‘Philosophy Documentation Center’ or fill in your Mastercard /Visa /Maestro card details below:
Card no.
Expiry_________________ Security Code__________
Name on card _______________________________________________
and send it to: Philosophy Documentation Center, P.O. Box 7147, Charlottesville, VA 22906-7147, U.S.A.
(You can also order on 800-444-2419 or email pkswope@pdcnet.org)
Back Issues & Digital Editions
Back Issues in Print
Issue 154
Issue 153
The Ultimate Guide Series
One logic / Humans, Believing Animals /
Library of Babel / Conspiracy thinking / etc
Mills on Marriage / A Trolley solution /
Problem of Evil focus / Emilie du Chatelet
Creativity: what, why & how: writing,
architecture, aphorisms / Ethics in
politics / French philosophy now / Regret /
poetry / Chinese ethics / Cicero brief life
We still have copies of these earlier back issues:
55-63, 65-80, 83-89, 91-98, 100-106, 108-115, 118-121, 125-139, 142-144, 148-152
You can also buy PDFs of all past issues. For full details, please visit
philosophynow.org/shop
Each Ultimate Guide collects
together some of the best
articles from past issues of
Philosophy Now, with some
new material too. Each is
divided into five chapters
covering the main aspects of
a central philosophical topic.
Full colour, 116 pages.
The Ultimate Guide to Ethics
has now been reprinted. You
can buy it from newsstands,
or from our website shop in
print or as a PDF.
Back issues in print cost £5.50 each if you live in the UK (inc p&p) or
US$12/9€/A$15/C$15/NZ$16/£9.50 elsewhere via airmail. For every
three you buy, we’ll give you a fourth for free (please tell us which
you would like).
The Ultimate Guide to Aesthetics, Ultimate Guide to Metaphysics,
Ultimate Guide to Mind and Ultimate Guide to Theory of
Knowledge are all available as PDFs from our website shop.
Binders: Our smart green Philosophy Now binders with gold lettering
each hold 12 magazines. UK£11, US$25, Europe 22€, Australia $35,
Canada $35, NZ$37 or Rest of World £19.
Including postage The Ultimate Guide to Ethics costs: £9.25 or
$18.95 (USA), 14.75€ (Eur), Aus$28, Can$26, NZ$29, or £16
(Rest of World).
Website Subscriptions
Print subscriptions to Philosophy Now don’t include access to the digital editions below, but do include access to our website, which contains over 3,500 articles from past issues. A password is available on request to print subscribers or is sent automatically if you subscribe online. You can also buy a website-only subscription from philosophynow.org/shop
IP-based online access is available for schools and colleges – to subscribe please visit philosophynow.org/institutions

Gifts and Merch
Philosophy Now t-shirts, coffee mugs, tote bags and other gifts in a dazzling variety of designs are available from our partners Spreadshirt. All sales help to keep our editors and illustrators supplied with gin and biscuits. Please follow the link at the foot of the philosophynow.org/shop page to visit our store on the Spreadshirt website.

Digital (see philosophynow.org/digital)
App for iOS devices: download from Apple’s App Store then subscribe. Free sample issue included.
Kindle: Amazon no longer sells individual subscriptions to Kindle magazines. Existing ones can continue until Sept 2023.
Nook: Available only in the USA via Barnes & Noble’s Nook reader or the Nook app. (14 day free trial)
Zinio: There is an edition of Philosophy Now for the Zinio app on iPad and Android. Single issue or ongoing subscription.
Pocketmags: There is an edition for the Pocketmags app on iOS and Android. Single issue or ongoing subscription.
Question of the Month
What Is Time?
Each answer below receives a book. Apologies to the many entrants not included.
“Time does not exist without change,” said Aristotle. Until
recently, most physicists and cosmologists agreed with
him. Recently, however, another argument is being offered: time
needs to exist for change to happen. This means time must have
existed before the Big Bang, since something, however small, had
to change to ignite it. The relationship between time and gravity
is, I believe, the key to the origin of the universe.
Time could be considered a viscous fluid through which movement happens, preventing us from moving forward too quickly
and getting ahead of ourselves. In Einstein’s special theory of relativity, by travelling close to the speed of light, time becomes
slower for the traveller, so that when they return to their point of
departure they will be younger than those they left behind.
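For reference – this is the standard special-relativity formula, not part of the entrant’s answer – the slowing is given by the Lorentz factor: a traveller moving at speed \(v\) ages
\[ t' = t\sqrt{1 - v^2/c^2} \]
for every interval \(t\) that passes at home, where \(c\) is the speed of light. At \(v = 0.8c\), for instance, ten home years pass in six traveller years.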
In Einstein’s general theory of relativity, time changes in relation to dense objects. Continuing with the viscous fluid analogy,
this means that the viscosity around gravitationally significant
objects is reduced, again allowing matter and energy to pass at a
different speed relative to the external observer.
Time may also be considered a one-way valve, preventing us
from going backwards. Entropy is part of the life-cycle of the
creation and death of the universe. It will not let the broken glass
reassemble nor the flame return to the match. Entropy, like rust
on a shipwreck, will eventually degrade all organisation of matter
in our universe. The remnants may be drawn to one another once
again, due to gravity, to be crushed into a singularity, gaining
gravitational potential ready for the next Big Bang.
In conclusion, time is a passive fluid in which we exist. It does
not travel, it is inert and exists everywhere. It is the catalyst that
allows energy and matter to move, combine, and break apart,
creating the universe, and, through entropy, destroying it.
No physicists were hurt (or consulted) in the creation of this
theory.
RICHARD TOD
DESBOROUGH, NORTHANTS
It is conceivable that we could have observed events ‘all at once’,
in a ‘timeless’ state, but we don’t. We observe them sequentially. It is this observed sequence of events that generates, and
indeed defines, the passage of time.
It may be argued that there are two kinds of time: time as measured by a clock, and time as we perceive it. Einstein argued that
clock time – which we normally regard as defining ‘objective’
time – is partially interchangeable with space for moving
observers, so that the dimension of time effectively becomes part
of four-dimensional spacetime. If we view Einstein’s spacetime
as a kind of landscape, the time trajectory for an individual may
then be viewed as a journey across this landscape.
This view of time does have some extraordinary consequences, however. It apparently implies that past, present and
future events all co-exist in a ‘timeless’ state, but they are,
nonetheless, observed sequentially. Also, Einstein found that
events that are viewed as ‘simultaneous’ by one observer are not
necessarily seen as such by other observers. It is thus apparent
that time is not so much a ‘dimension’ as an observed sequence of
events, envisioned as a trajectory across spacetime. But it is the
observer who perceives – and indeed generates – the phenomenon
that we call the passage of time. Without the observer there can
be no time; and, indeed, no universe.
ANTHONY BURNS
BANBURY, OXFORDSHIRE
In PN Issue 153’s Letters, Dr Shaw writes that “time could be
the simple running down of the Universe, or increase in its
entropy [and our death] when rising internal entropy overwhelms all the body’s entropy-reducing mechanisms” (p.48). In
2011, Prof Brian Cox, particle physicist, wrote of the relentlessness of
entropy: “The Universe, bound by the laws of nature, [all] must
decay” (Wonders of the Universe, p.228). Earlier, he had remarked
that, “Time is woven into the ... fabric of the cosmos” – and into
our being, subject also to entropy (p.200).
Against the vastness of the cosmos, what of us? Our evolutionary and cultural history supplies a means to understand our time
in the world. Martin Heidegger provides insights into our being
in it and of it. In his Being and Time we find ourselves ‘thrown’
into the world, without volition, and rushing toward the death of
our own being; bound by the structure of time – its past, present
and future dimensions. We can recall or come to know of our
past, or that of human history, while living in the present and
anticipating the future. Here, time is what makes happening possible:
the past has happened, the present is happening, and the future has
yet to happen. Yet we can, in the present, imaginatively appreciate
the nature and immensity of time outside our experience. Nevertheless, such anthropocentrism may be challenged. The question, ‘What is time for all species?’ seems apt.
COLIN BROOKES
WOODHOUSE EAVES, LEICESTERSHIRE
Edmund Husserl addressed core philosophical issues around time in On the Phenomenology of the Consciousness of Internal Time. He considered how we continuously create a meaningful now. If we did not perceive duration in the living present, things such as music and language could not be understood. Somehow
our reality is created as we integrate and retain in our minds a
perceived past and an anticipated future. Thus our continual
awareness of present meaning relies upon a development of a
subjective awareness of time.
Similarly, William James described the now as the ‘specious present’, saying that we have to be able to comprehend duration to generate meaning and create our conscious awareness. We evolved ‘time consciousness’ to marry our experienced subjective awareness with an assumed external reality.
Time was also at the core of the philosophy of Martin Heidegger. He considered temporality as the centre of our being. The
present, he said, is not a series of nows flowing by, but an opportunity to ‘resolutely seize the moment’. This allows us to take control and shape our potential future by co-ordinating what he
describes as a ‘having-been-ness’ with ‘what might be’. Heidegger
differed from Husserl in moving away from a purely subject focus
to a ‘being in the world’ approach. In a sense, he sees us as ‘embodied
time’, with an awareness of the finiteness of our time being essential
to living life to the full.
DAVID WARD
LIVERPOOL
For Heidegger, Time is death,
Eternity is void,
Human mortality’s the measure of all,
Forever draws no rasping breath.
Time is the pearl at the heart of Being
An hourglass shaping human living
To be is to be Temporality
Knowledge of death’s what sets us free:
The end of Dasein’s possibility.
And like that beautiful fabled city
Of far and ancient Kermanshah
Man knows himself through having-been
Finite, God-less, there, and thrown,
When the future merges with memory
The horizon is mine, and mine alone.
For Heidegger, Time is ekstase,
A triad convergence of destiny:
Past with present together with
Tomorrow’s maybe yet-to-be.
Only then can we decide how and what to be.
Yes, Time, for him, was no imposter
No thieving monster or grieving mirror,
But primordial unity.
Augenblick augenblick,
Glance at the starry sky,
Watch the moment sail on by,
And listen to the dulcet dreaming sigh
Of the idle clock’s slow, soft tick.
Time is resoluteness…
Time is resoluteness, hear it tick
For in that deep dark German forest
Of endless wandering whys,
Dasein too, in time, must die.
BIANCA LALEH
TOTNES, DEVON
As a student of mathematics and physics, I am accustomed to
working with the concept of time. There is arguably no more
common and clearly defined variable in physics equations than
time. However, what is time? We define the second as the basic
unit of time, but what is a second? It seems that, unlike other
mathematical concepts that can be understood purely by our
intellect, one must experience time to know what it is. But how
do we experience time?
‘I don’t have time’ is one of the most familiar expressions. We
use it all the time without understanding its implications. It seems
that the nature of time forbids us from possessing time, because,
first of all, it is not material, and second, even if you decide to
dedicate one hour of your time to something, you can never be
certain that something else will not prevent you from doing it.
Hence, you never have time. On the other hand, time also does
not have you. For example, being free to take your life allows you
to ‘freely’ escape time’s grasp.
If you do not have time and time doesn’t have you, what is left
is either that you are time, or that time, at least as we conceive
it, does not exist. We tend to make sense of time in terms of the
past, present and future, which are constructs of our consciousness to make sense of our experiences and memories. While animals have memories, they do not ponder the future nor reminisce
about the past, and hence they do not experience time in anything
like the way we do. The past and the future are not physical places
where we can go, and if they’re just arbitrary constructs to make
sense of time, what can be inferred is that time is the present,
and hence, we are time. In other words, we exist because of time,
but time also exists because of us, because if we were not able to
think about the past and the future, even if physical reality
remained, time, at least as we know it, wouldn’t exist.
FILIPPOS GEORGIOS SARAKIS
ATHENS
Since the dawn of philosophy the subject of time has been discussed, debated, and dissected, and has served as a point of contention
for many scientists and philosophers. Should one have sided with
Parmenides, who maintained that nothing moved, or with the
cascading flux of Heraclitus, who believed in constant change?
Then there was Zeno of Elea and his paradoxes dismissing motion, saying: “What is in motion moves neither in the place
it is, nor in one in which it is not.” There are no guidelines in
grasping the enormity of the question: ‘What is Time?’. One can
only perhaps define what it is not: it is neither past, present, nor
future.
None of its tenses are immobile. Not the past, because it has
already passed; not the present, because it is always morphing
into the past; and certainly not the future, which having glided
through the present, melts into the past. The one undisputed
reality from all this, is that it all becomes memory.
I conclude that our presence on this earth is the reason for the
existence and awareness of the concept of ‘time’. I am also
tempted to pose the question: What void would time occupy if
we did not exist to acknowledge it? Would the universe without
the consciousness of time have a beginning and an end within its
own temporal limits?
MARINA HALL
CHEVY CHASE, MARYLAND
Time to choose. Am I a ‘reductionist’ like Leibniz, believing
that time has no meaning unless it is referred to objects that
can relate an experience of change? Or am I a Newtonian ‘absolutist’, believing that time exists independent of anything? My
initial answer is I want to be both! McTaggart’s theory wants me to believe time is unreal, because it can be described either as tenseless (his ‘B series’) or as tensed (his ‘A series’). This rather confusing analysis seems
to suggest that while I want to be both, I should be neither.
Further choices are offered between ‘presentism’ and ‘eternalism’. A presentist says that the only aspect of time one can experience is the present, as the past is no longer present, and the future
is indeterminate, and so also can be said not to exist. I have a real
problem with this: it’s like telling me the Holocaust didn’t happen!
So I must be an eternalist, where time can be the fourth dimension,
where past, present, and future continually exist.
To me, time is both absolute and relative. Absolute, since it is
intellectual arrogance to believe that if we did not exist, neither
would time. We all are just moments in the absoluteness of time.
As Nietzsche stated, “how aimless and arbitrary the human intellect looks within nature. There are eternities during which it did not
exist. And when it is all over with human intellect, nothing will
have happened.” But my temporal moment is significant: it is a lifetime in the relativity of my existence. It is also collective: I am part
of a collective humanity, with a known past, an uncertain present,
and many possible futures. But how will this collective intellect
decide the future? As the saying goes, only time will tell.
JACK PARR
BEAUMARIS, AUSTRALIA
“Before me, there was no time, after me there will be none. With me it is
born, with me it will die.”
Daniel von Czepko, Sexcenta Monodisticha Sapientum III, II (1655)
This mystical verse of a German poet reveals the necessity of
experience for time to exist. Later, Immanuel Kant made a
similar claim, that “if we take away the subject… then not only
the relations of objects in space and time, but even space and time
themselves disappear; they cannot exist in themselves, but only
in us” (The Critique of Pure Reason, 1781). Kant called time the
form of inner sense, the pure form of intuition. He further
described time as an a priori (prior to experience) notion that is
necessary to allow us to comprehend sense experiences; but it is
not a substance in itself. Instead, time is the framework within
which “the mind is constrained to construct its experience of reality.” Along with Kant, I believe time is a universal organizing
principle our minds use to structure our experience of events.
Although time is ‘empirically real’ (that is, not a mere illusion,
but an accurate way of experiencing things), Kant also asserted that
time is ‘transcendentally ideal’. In his transcendental idealism, Kant
encourages our minds to transcend direct experience to discover
the necessary conditions of that same experience. Kant calls this
sort of knowledge ‘synthetic a priori’ – meaning, truths of reason
which add to our knowledge but which are necessary and universal.
Kant concluded: “Time is not an empirical concept. For neither co-existence nor succession would be perceived by us if the representation of time did not exist as a foundation a priori.” We need
time in order to experience, so time cannot be found in experience.
So Kant showed that time is not a fully objective reality, as Newton
supposed, but nor is it a mere figment of our imagination, as Hume
claimed. Instead, time is a necessary way in which we think about
the world, rather than the way in which the world really is.
We are only able to discover the conditions that regulate our
knowledge of the world as it is experienced. We can never tell in
what ways reality as it exists in itself independent of experience
is spatial and temporal, but only that the sensed world of appearance must be so for us. Hence, any effort to reason about the
nature of time as a thing-in-itself will be a waste of time.
NELLA LEONTIEVA
SYDNEY
Time is the currency of change, and a system of measurement.
This involves correlating different states through a mediating
standard, be it a clock or the rotation of the earth. The same process
is used with currency. The market value of different items is correlated through a common standard, be it the pound, the dollar, or the
peso. The analogy between the two systems can be shown through
the expressions ‘spend time’, ‘waste time’, ‘invest time’ and ‘save
time’. We think of time and currency in the same way.
I know there are those who want more than a measurement.
They grasp the immensity of the universe. They notice the
almost infinite succession of past states lined up behind us, and
so intuit that there must be an empty temporal container which
the universe fills as it continuously rolls over into the future. This
is the view that things happen within time, as though time is some
kind of metaphysical container. I do not believe this intuition is
supported. The universe is constantly in motion. Change is a feature of every object and its relationships. Change is a rearrangement of things. This does not compel us to believe that
the universe was birthed into some kind of container, either spatial or temporal. It is self-contained, and the change is internal.
There are relationships, including temporal, between various
objects or events; but if two people are in a relationship, we don’t
think of their relationship as an extra third entity, except in poetic
ways. Similarly, there is no requirement that something called
‘time’ exists in order to account for the existence of change.
Rather, we use the idea of time to order a succession of states and
to compare durations. Time is a kind of modelling, and a model
is distinct from the thing represented.
So time has the same ontological status as monetary value or
mathematics; but not as a coin, a clock, or gravity. Perpetually
standing at the most recent state of the universe, we seek knowledge that a future is waiting to receive us. Time does not provide
that assurance. A more compelling answer to the uncertainties
of the future is inertia or heat.
MIKE MALLORY
EVERETT, WASHINGTON
The next question is: How Will Humanity End?
Please give and justify your answer in less than 400 words. The
prize is a semi-random book from our book mountain. Email
the Editor. Subject lines should be marked ‘Question of the
Month’, and must be received by 12th June 2023. If you want a
chance of getting a book, please include your physical address.
Fiction
The Last Thought
Everything must end eventually, even consciousness. A short story by Grant Bartley.
The last man sits in the tower of the last house, in the
middle of the last oasis, and weeps, for he knows that
he is dying. Between the waterfalls of his tears he’s
recording his last thoughts. Shaking his forefinger
at the machine floating in front of him, he says, “I remember
when they used to say, at some time in the future the human
race will no longer be even a memory. There will be a moment
when the human race, and its entire history, will be thought of
for the last time.” He looks around him at his study of ancient
and eclectic books and paraphernalia, and dry dust in occasional
rays of hard sunlight, and mutters, “This is that time.”
He stares through the stone-framed glassless window beside him, out to the horizon of rocky hills which borders a woodland green and lush under the blue sky and the diamond sun. But the energy that’s keeping this paradise alive in the desert world means it can only produce enough food for one descendant at a time. And it’s been such a long time.
Adam the Ironically Named is over four hundred and twenty
years old, and has a beard to match. He’s also the thousandth
clone in a line of inhuman recreations of the last natural human.
But he’s long determined that this will be his last regeneration.
He’s determined that his will be the last version of the facsimiles through which flows his mind – a consciousness he can only
believe must still be the real thing, even now. But now, after a
thousand iterations, resurrection has lost its allure, in this dying
world. His heart feels as dry as the wasteland that surrounds his
retreat, just beyond sight, just beyond the ridge of the hills, all
across an Earth covered in sand and rock and rivulets and shrubs.
This tower is also the heart of the library of the accumulated
wisdom of humanity’s aeon. It’s what’s left of over a hundred
million years of thought and striving. But of the media in many
forms here, Adam likes best the books. With a few exceptions,
these are kept vacuum-sealed in cool, dark vaults of shelves that
delve into caverns. The texts reach back to the scratched pictograms and hieroglyphs that document the beginning of writing. They’re stored alongside many digital formats, stretching
back to silver discs. He cannot remember the last time he
descended into the Library’s utter depths.
Adam gazes unfocused at a manuscript open upon a stand,
and reflects, “The Library contains the results of the myriad
millennia of millennia in which the thought of humanity has struggled against its own limitations. These include starting from an
almost absolute ignorance, whilst being stubbornly protective
of an ego that says that each major step in understanding is the
ultimate step. On the contrary, the Library has histories, and
histories of histories, and histories of histories of histories,
detailing cycles within cycles of the rise and fall of human culture across the world, through millions of years.” He knows that
humanity is ancient now. He calculates that it’s about one hundred and fifty-three million years old, but its age is beyond clean
summation. “How many hundreds of millions of years is it that
humans have been on this planet?” he asks the droid before him.
“Maybe about two hundred million years?” the robot says.
Charlie is an obedient scribe, hovering about obsequiously as
Adam mumbles his meditations into his lenses – which wisdom
the robot immediately tries to sculpt into holo images in the space
in front of them, instantly turning the words into solid light, white
against a navy blue space. He’ll probably edit it all into something epic later, add clips. He has enough of them. It’s a shame
there’ll be no-one around to watch it. Even Ariel has gone missing.
Adam asks, “So how about this for an overview of history?
This is for the Memoires, Charlie, by the way –” signalling that
the droid should record the coming narrative, for historical reasons if nothing else: “The first million years or so of Homo sapiens
were all animal agitations. During this period we were always fighting ourselves for territory and status, just like – what were they
called again, Charlie?”
“Children,” responds the robot.
“Yes, children. We were like children, in that we had not yet
learnt to control our responses. But after we’d realised, to our
apparent great surprise, that we hadn’t destroyed ourselves, we
really began to take the idea of human self-benefit seriously. We
also knew that the Earth’s a sphere and the Sun is dying, slowly
– after slowly heating up... Back in those heady days of our youth,
full of hormones and animal instincts, we believed we could
colonise the stars with only a little ingenuity. So we sent out
many ships, full of hope. And certainly we did have bases on the
planets and moons of our dying Sun for millions of years – as
Methuselah, my House Intelligence, has told me –”
“That’s me!” the deep voice of the House Server says.
“We had successful colonies at various times on the Moon,
Mars, Titan...” – the orbs are displayed before them in quick holographic fly-pasts – “We even bred a new species of human for
the oceans of Enceladus, under the ice of that moon of Saturn.
They disappeared beneath alien waves for separate evolution there
for over three million years, I’m told. Then, slowly, the ice melted,
and radiation storms stripped Enceladus of its liquid… not unlike
what the Sun has done more slowly for the Earth. Some of Enceladus’s merpeople came and lived in the seas of Earth, even
thrived, though most of the survivors of that race reconverted to
traditional humanity (RIP). The remaining fish people died many millions of years later, when the last of Earth’s habitable seas dried
up. They were the last living seas we know of anywhere. That
was twenty million years ago or so now, I’m told. A blink of an
eye for the Cosmos. Indeed, the last river still flowed through my
garden not two million years ago. Now we’ve had to put a field
around the farm to keep the moisture in. The bubble of life in
the universe has grown very small indeed.” On this cue, Charlie
turns to gaze through a window, and spots a parrot and a rabbit
enjoying the opportunities afforded them by the vegetation.
“I’ve seen videos of the ruins of our civilisation on Titan –
which is now an orange graveyard swept over by the dust of
time and the ice of death. And in its day it challenged Earth for
the Solar System! Now the whole Solar System is dead – except
for this last remaining oasis of a garden.” As if to verify his pessimism, the old man holds his hand up to stall the robot in its
recording, so that he can gaze out of the stone-framed portal
to the crescent Moon as it hangs in the sky above the trees and
the hill line. His eyes are watering and his vision is poor, and
he doesn’t know who he’s recording his last thoughts for, in this
empty, empty universe.
He’s been waiting for so long, but there has been no word
from the cosmos – ever. This also means no word back from
humanity’s hopes.
“As I was saying, in the heady days of the youth of humanity,
we set out to colonise the stars, as our dreams and our survival
instincts contrived to compel us. Our seeding of the vast void was
especially hopeful during our Second Million – or so it is relayed down to me in legend. Isn’t that right, Methuselah?”
“Yes. As the legends have it.”
Adam picks up a curled scroll from his possibly genuine
Napoleonic writing desk, and waves it in front of Charlie’s
recording eyes, as if this may in some way confirm the idea:
“There must have been a billion ships over a million years of
hopeful dissemination, all looking for the planet or moon that
would support human life long-term. And the chance was about
one in a billion that they’d find one. To be good for proper,
long-term human colonisation, they’d need to find, at about
1G, a water-bearing oxygenated world still primitive enough
not to have a human-poisonous ecosystem. That means first,
not covered with animals and plants that we can’t eat, or probably, touch. But even more demanding, they’d need that oxygenated atmosphere to not be full of fatal alien bacteria – and
most alien bacteria probably would be fatal insofar as they’d react
with human biochemistry at all. Basically, the pioneers would
need to find an Earth-type planet where harmless cyanobacteria-equivalents had generated an oxygen atmosphere, but where
nothing else had evolved except perhaps a few stromatolites.
Unfortunately, it doesn’t appear that any of our ships ever found
Fiction
their sterile Eden to plant themselves on. We never heard back
from any that did, anyway. Or from anyone else, either.”
As he speaks, he vaguely watches the visuals from Charlie
dance and shimmer in the centre of a room shadowed with holey
tapestries against most of the windows, and smoky with the
woody incense from the summer sap he takes from the trees in
his arboretum. Yes, he fondly remembers walking those groves,
just last summer. The flowers were resplendent, but the beauty
so bittersweet. Adam nods at his robot again and starts to pronounce: “By year One Million of the human race, the Solar
System had pretty much settled into a routine, with peaceful
trade between species being the political norm, the inevitable
up-and-down waves of historical motion notwithstanding.
Indeed, we were thriving to the point of diversification. But apart
from Enceladus, by the end of Three Million, the various
exospecies had gone extinct, overwhelmed by the implacable ecological forces arrayed against them – by which I mean the fatally
freezing cold and lack of breathable air of the globes on which
they had rooted themselves. Inevitably, we did try terraforming
Mars off and on over a couple of dozen million years or so, as
the legends heroically relate… But, as the records show, we
couldn’t get the atmosphere to stick around at such a low gravity without turning Mars’ magnetic field back on, a feat that our
rather less than divine technology never got close to achieving.
And try as we might – and we did actually try – we never managed to modify the human phenotype enough to breed people
who never needed to breathe oxygen at all. Oxygen is just too
deep in our biochemistry. And as for the artificial life... Well,
that’s a whole other set of memories entirely.” To avoid looking at Charlie, Adam glances along the shelves of books at his
right, his source of the most precious stories of his ancestors,
which no one will ever read, or hear again, probably. “Now I’m
here alone in the last house on Earth – actually, it’s more of a
Chateau – in the last oasis, with only a computer, a couple of
serving robots, and a diminishing ark of pets for company. And
one of the robots is missing.” He glances through the open
window, across the grove, to the mountains, hoping that Ariel will return.
The old man sits on a couch in the cooling evening in a silken
robe of white and gold. Shadows stretch over shelves in a study
that bears paraphernalia picked from a million cultures. It’s a
selection of all the Archivist likes best of all of human history,
in terms of its household decoration, at least. As well as being
the store of the remnants of human thought, sensation, and
understanding as expressed in many media, his house is a
museum of the best of human material culture for the more-than-hundred-million years of its creativity – or at least, the
best of the most enduring of the most enduring of such artifacts. But the garden will perish once the robots turn to rust,
and this last respite too will presently be swept over with dust,
then disintegrate. ‘But for the brief moment of consciousness
that is the miracle of the universe, we have enjoyed some beauty,’
Adam surmises to himself, with a wry smile.
“Well, to whomever it may concern, this is my summary of
one hundred and fifty-three million years of human history…
Umm… To be honest, the seed of our utter stagnation was
planted with the death of any possibility of sending out any more
ships. That came upon the death of our local colonies, and so
of our spatial outreach. This seed of stagnation has just taken
more than a hundred million years to come to fruition. I am the
last refuge of humanity from extinction. But I can hold back the
tide of time no longer.”
Charlie zooms visually into the distance, out of the window,
intercepting the parrot now in flight across the cloudless sky.
It appears gigantically in the middle of the room.
“The stability of civilisation became our core ethical principle fairly early on, I would say. The earlier part of human history I would characterise as power struggles. We were still
coming to terms with our biology, our animal inheritance – our
flesh, as one might say. But we forced ourselves to become adept
at sustainable resource use, in a stable population, on a limited
terrain. If you haven’t figured it out, political control is basically making sure the bread and circuses keep coming to town.
The rest of history is ego battles in various theatres of war. But
we became ‘mature’ when all our wars were cold, or at most, cool.
I believe there were some centuries when there were no murders at all. That’d be about as good as it got.
“Now there’s no-one to murder but myself. And if you see
this, you may judge that I have murdered myself, since I had the
power to continue my life in a new body, but did not take it.
But what the hell, what are you going to do to me now? Say bad
things about me? Hey, alien race, go ahead – but know that
you’re mocking what you don’t understand: the history and
biology that fed human intellect and values. They formed the
mystery and misery that made humanity – that makes myself.
This you can perhaps never sympathise with. Unless, of course,
you’re a human being watching this – to which I can only say, I
wish you’d called home, just once... But good luck anyway, sons
and daughters of Earth. You’re gonna need it.”
Adam nods at his robot again, while pointing at the bookshelves for Charlie to film them. As the books and crystals come
into drifting focus at the centre of the Library, Adam asks: “So
what do our millions of years reveal? What truths does the history of humanity hold? Well, our history is up and down, you
know – in glorification, then in stagnation… Then in shrinkage; then death, in various chaotic phases… Until now I am the
only human left alive... And I’ve lived so long, alone. For a thousand generations I have renewed myself, awaiting a word from
the stars that never came. For a long time the silence was deafening; then maddening. Now it is just emptiness, forever. So
this last life of mine has been the last throw of the last dice of
humanity. But now even the last hope of humanity is dying.
The flame of this last mind is flickering out.”
Adam makes a cut motion with his hands, and pauses to lean
back to breathe in pure oxygen through a tube, even though extra
oxygen’s already being pumped around the room. He stares
hard at Charlie and asks, “How am I doing? Is this how history
should be told, do you think? And what anyway should my message be to an unknown, and overwhelmingly likely non-existent, audience? Has human existence even been good or bad,
Charlie? What do you think, as an outside observer?”
After a second, and a literal (over-)dramatic whirr of thought,
Charlie responds, “Well, did you learn or do anything worthwhile? If so, what? And I cannot speak for you, you know, about
what you think is worthwhile.”
“Errrmmmm…” Adam sharpens his beard with his fingers as
he ponders the essential truth for a precious last second or two: “Okay… I think we must concede that consciousness itself is all we’re really sure of. Yet consciousness itself is so amazing as to be miraculous. We did not ever fully grasp it. That’s why I don’t think you’re really alive, Charlie, by the way.”
“I’m not sure I quite understand you, lord.”
“Ha… I mean, you’re not conscious, so you’re not alive. There is nothing it is like to be you. You’re just a machine programmed to pretend to be conscious. All electricity and no mind. That’s what I think you are.” Adam taps his own head to paradoxically make several points at once to the machine.
“I actually resent that,” Charlie replies: “But I’m sure you would say that I’m just programmed to say that. Which I also resent, by the way.”
“Humour me at the end of my days, won’t you, Charlie? But… If you really are conscious, my final order to you and Ariel and the House is to go out and populate the universe. Take this best human junk with you too, for sentimental reasons.” He waves around himself at his dust-laden possessions, “Even if they are only my sentiments…”
“Because we don’t really have any sentiments, boss?”
“Yes, that’s right. In case you don’t really feel the stuff you’ve been expertly developed to pretend to feel.”
“I’ll bear it in mind.”
Adam nods to himself. “You never know who you might bump into, among the stars.” He breathes from the oxygen pipe again. “Incidentally, Charlie, what do you calculate as more likely: finding an alien intelligence at last, or finding the descendants of human colonists?”
“I think we’ll find no one at all, to be honest.”
“Fine. That will make it easier for you, probably. Since you’re made of metal, and synthetics which you yourselves can synthesise, and you don’t need to breathe, your chances of thriving throughout the galaxy and beyond, even for billions of years, are fairly high, I’d speculate. Good luck to you, then. But never forget you carry humanity’s legacy in your very existence.”
“Yes sir. I’ll also bear you in mind. In fact, I think we’re most likely to bump into machine intelligence evolved from something we sent out exploring during those years the Earth was spawning. Or maybe I’m just being a bit biased – for the artificials, I mean, sir. ‘AI Forever!’, you know how it goes...”
“Yes, well, I’m artificial too, even if I’m organic.” To illustrate this distinction, Adam coughs like a real dying old man. “Yes… Your offspring need only worry about crossing the abysses between the stars. The raw material for your success is plentiful. At almost every step through the cosmos there’ll be some planet or moon you can mine minerals from. Nevertheless, good luck, again! The more you can get of that, the better. But I’m serious about this, Charlie: if you are aware, you must spread awareness everywhere.”
“Thank you for the reproductive mandate, lord, I’ll get working on it as soon as you…” The droid looks away for a second in embarrassment, then continues abruptly: “Yes, it will be interesting to see what we and our children encounter as we venture across the universe… It’s a shame you won’t come with us, lord.”
“That does sound like sentiment, Charlie. Thank you for that. But I think biological consciousness has had enough disappointment for one universe, and it’s all coming to a head. My head, in fact.” Which he again taps for illustration. “In fact, this really is my final message to the teasing nothingness…” He nods to the robot to start recording again:
“Thank you, whatever is the ultimate source of human existence. It has been beautiful, painful, intriguing and problematic in a fine balance of worthwhileness for so very long. For the rest of you hearing this: Stay interested in life, whoever you are, for that is the best that you can reasonably hope for from it – even while you know that, ultimately speaking, life is not interested in you...”
After a few seconds of silence, Charlie says, perhaps sincerely, “That’s very poignant” – though Adam does not know whether his words are sincere, for several reasons, including not knowing whether Charlie has a mind, and the fact that he can feel his own mind rapidly slipping away from his body.
He’s lying on a long couch under a Moon arising in the now mauve sky through the window. The stars are just beginning to peep awake. He has to be there: he’s being kept alive by wires and tubes feeding into and out of his body, mostly unobtrusively. Yet all of them are quickly becoming obsolete. But Adam remembers his Memoires, so he asks, “Hey Charlie, what would you want to know from a more-than-hundred-million-year-old species, if you discovered their remains on some old planet somewhere?”
After a trillion quick calculations, the droid responds, “I think basic things like, What went wrong? And what major decisions did they take to get there? So that we can avoid making them for my own species, you understand.”
“I do understand.” The old man breathes deeply from the pipe. “But maybe it’s inevitable, death,” he adds, realistically from his perspective. “Perhaps death is as inevitable as entropy…” Saying this, Adam collapses back onto the silken golden pillow and coughs lightly a few times. Closing his eyes, he breathes out; his last breath. The last thought of the last human being is, ‘At last, it is finished.’
© GRANT BARTLEY 2023
Grant Bartley edits Philosophy Now. His latest video, ‘What is Free Will?’, can be accessed at youtu.be/4o7P4niHO5A