On the Use of Words in Philosophy

Copyright © 2014
Avello Publishing Journal
ISSN: 2049-498X
Issue 1 Volume 4:
The Paradox of Nietzschean Atheism
On the Use of Words in Philosophy: The Liar Paradox
Hugh Mellor, University of Cambridge, England.
It all depends what you mean.
Broadcast 5 February 1978 on BBC Radio 3.¹
'It all depends what you mean' is the popular stereotype of a philosopher's response to almost any
serious question - and a very irritating response it often is. Philosophers are regularly accused,
sometimes even by other philosophers, of attending too much to words and what they mean, when
they should be attending to the serious things that words stand for. Ask a philosopher for details of
the good life, for example, and what you are apt to get instead is a disquisition on the meaning of
the word 'good' or of the word 'life' - or even, if the philosopher's on form, of the word 'the'. And
what on earth is the use of that? other people not unnaturally ask. What do philosophers think
they're doing? Don't they understand their own language, that they spend their time trying to sort out
what these really very commonplace words mean? Or do they think they're uncovering or inventing
nuances of meaning, to which most people are oblivious? But how, for a start, could that be? - since
words mean what they're used to mean, how can words mean things their users know nothing
about? And even if they could be made to mean such things, what would that have to do with what
ordinary people actually mean when they talk normally?
1 Copyright in these broadcasts is retained by the BBC; however, Hugh Mellor has granted the Editor-in-Chief, in
person, permission to print the transcript in the Avello Publishing Journal.
In fact, people can mean things they don't mean to mean; and there can be a point, even a very
practical, non-philosophical point, to teasing out niceties of meaning. Let me take an example. A
man is dead, we say. What do you mean, 'dead'? the philosopher asks. Well, for one thing, various
things follow from the fact of a man being dead, as we all know, to many of which great exception
would be taken otherwise, not least by the man himself. His belongings can be handed over to his
more or less sorrowing relatives, his body can be dumped in a plot of earth, or turned into a neat
pile of ashes.
That these rather drastic things can be done when a man dies is part of what the word 'dead' means.
It's the part that makes it very important to be quite sure one is right in pronouncing a man dead and
thereby setting off these consequent processes. But this isn't the part of the meaning that enables
you to tell whether someone is dead. You can't tell that by looking to see if it's all right to bury or
cremate him or to hand over his property to other people. On the contrary, you can only settle those
questions by looking to see if he's dead. So what death is, i.e. what the word 'dead' means, must also
include something else, some rules for telling when the word applies, when a man really is dead.
Some of the rules we all know, of course: has he stopped breathing, has his pulse stopped, and so
on. But these rules are not conclusive, as we also know. A man may recover from having stopped
breathing, or a stopped heart. What will show that these things not only have stopped, but are going
to stay stopped?
That's a tricky question, a question for medical experts, a question, as they say, of defining clinical
death. So although most of us may know well enough the part of its meaning that has to do with the
consequences of a man being dead, there may well be scope for expert argument over the other part,
the part that fixes just how to tell in unobvious cases that he is dead. But surely in that case the
experts are the medics - who needs philosophy?
You certainly need something that medical expertise does not itself supply, though doctors may well
have it. Granted, medics are professionally expert at describing a human organism in great detail; but
that leaves the question of which bits of their description - what features of the human body - are
relevant to deciding whether a man may properly be buried in the ground or incinerated or have
his goods passed on to others - which is what follows from his being properly called 'dead'.
And that isn't just a medical question, as one can see by looking at the case of people kept going
after serious brain damage, say, in a completely inert or vegetable state. Are they alive, or are they
dead? It's a serious question, but assuming there's no possibility of their being further revived, no
more merely medical data is going to decide it. It's a philosophical question, about what the word
'dead' means, or should be made to mean. That doesn't mean it's just a trivial matter of words. It's
still the very practical question whether this man is dead, whether his body and belongings can
rightly be disposed of; only in this case the answer does depend on what you mean by 'dead'. So,
how does a philosopher set about answering it? 'Define your terms' is the usual, irritated cry of
hardheaded philosophical amateurs: 'Define clinical death: tell me what you mean by 'dead' and I'll
tell you whether this man is dead or not.' It's not that easy, unfortunately; serious philosophical
questions can hardly ever be settled by definitions, as I hope to show next week; because, of course,
it does all depend what you mean by 'definition'.
Define your terms
Broadcast 12 February 1978 on BBC Radio 3
I remarked last week that the philosopher's much derided caveat 'It all depends what you mean'
sometimes has a very practical significance. But if a question, practical or merely philosophical,
turns on what some word means, whether the word be 'dead' or 'intelligent' or 'democratic' or
'beautiful', how are such things decided? 'By definitions', is the short, stock and unsatisfactory
answer I alluded to last time. If only people would define their terms, I find it repeatedly said (or
politely implied), then philosophical quibbling, the needless obfuscation of basically plain issues,
could be put a stop to - and philosophers no doubt set to earn their living in more useful ways.
Anyone who really thinks that should try it, the next time they have a serious argument. Take
arguments about intelligence, for example about whether there are racial differences in it or whether
especially intelligent or especially unintelligent children should be educated differently from the
rest. Arguments like this are almost certain to depend at crucial points on what the parties mean by
the word 'intelligent'. Suppose they try to settle that by defining their terms. 'Ability to get an above
average score in a standard IQ test', says the hard-headed party, 'that's what I mean by 'intelligent'.'
'Life isn't a matter of passing IQ tests', says his opponent, very reasonably if not entirely to the point;
and he declines to accept the definition. Every definition, it has been well said, conceals an axiom:
in the present case that someone who is good at IQ tests is also good at the rather vague range of
activities which we use in everyday life to assess people's intelligence. That's what's in dispute
between the parties; and it's a matter of fact, not of definition. It can't be settled one way by defining
'intelligence' to mean 'what IQ tests measure'; but nor can it be settled the other way just by
rejecting that definition and coming up with a rival one. If the answer to a serious question turns on
which of two definitions is to be adopted, the disputants will simply turn the dispute into one about
which is the right definition.
The appeal of definition as a method of cleaning up and settling arguments lies in the idea that we
can define words how we like. After all, they are our words; surely, like Humpty Dumpty, we can
use them to mean what we want them to mean? The trouble with that idea is that 'we' is not 'I'. The
'we' whose usage fixes what words mean is the great mass of English speakers; and neither you, nor
I, nor even the editor of the Oxford English Dictionary, as one or two solitary members of that
mass, can decide by definition that a word like 'intelligent' or 'dead', shall mean something different
from what everyone else means by it. We can't make 'intelligent' mean stupid, or 'dead' mean alive;
and no more, just by definition, can we make 'intelligent' mean good at IQ tests or 'dead' mean
lacking in cerebral activity.
We can, within bounds that common usage for the time being sets, extend, refine, make more
precise the meaning of words like 'dead'; and as I suggested last week, there may be urgent practical
reasons for doing so. But even then it can't be done just by elaborating definitions. We can't just
arbitrarily stipulate some medical test to settle doubtful cases of life and death; nor use just any
arbitrary psychological puzzle to settle doubtful cases of intelligence. There must be evidence that
these tests at least give the right answer in the obvious cases, and that they relate to what we know
of the mechanisms of life on the one hand and of intelligence on the other. The evidence for all that
may be - it is - incomplete and debatable; and ideas of what these mechanisms are, highly
controversial. That's why there is serious argument about these matters, argument which is indeed in
part about what the words 'dead' and 'intelligent' mean. But the arduous processes of getting more
evidence, and of developing ideas that can make sense of the evidence, can't be short-circuited by
stipulation. It would be nice if they could; the dream of something for nothing, of perpetual motion,
is a perpetual dream; and in philosophy it takes the form of the fallacy that freely stipulated
definitions can settle arguments about what words mean.
Fictional documentaries
Broadcast 19 February 1978 on BBC Radio 3.
The little girl who cried 'wolf' too often when there was no wolf, and so wasn't believed when there
was, discovered too late that the abuse of words can have serious non-verbal consequences. And the
abuse needn't be wilful for that to be so. Words can be dangerously abused quite unwittingly and
from the best of motives, as the recently fashionable concept of 'fictional documentary', and the
ethics of propaganda generally, show. Suppose I have a respectable product to sell or a worthy cause
to promote. The product might be a paint stripper, the cause might be racial harmony. So, I write a
television advert for the product, and perhaps an anti-National-Front play for the cause. In each case
I realise that the merits of the product and of the cause are debatable, or at least are debated. Neither
the advert nor the play will just be preaching to the converted; each must aim to persuade viewers,
some of whom will start off unconvinced, if not positively hostile. Assume however for the sake of
argument that the product is indeed respectable, the cause genuinely worthy - anyone who doesn't
like my examples can easily invent others to suit themselves - are there limits to the techniques of
persuasion I may properly use?
Take the advert first. It shows, let's say, layers of old paint on a door being rapidly and effortlessly
removed by my patent paint stripper. Now suppose in fact it's rather hard to get that effect properly
in the studio with a real door, real old paint and my product; so what the viewer actually sees is fake
paint being removed from a fake door by something else entirely. Does that matter; given, as I say,
that my product does the real job as well as its studio substitute appears to do the fake one?
Whether it matters depends on how the doubtful viewer interprets what he sees. There's no harm
done if he thinks merely that he's being given the message by a graphic and compelling illustration;
so that, for instance, it would have made no odds to his reaction had the advert been a cartoon that
no one could confuse with the real thing. But if he's made to think he's being given evidence that
my product is as good as I claim, because he's actually seen it in action, that's another, and a much
less excusable, matter.
Similarly with a play designed to promote a worthy cause. Like the advert, it's a particular piece of
fiction which is used to convey a more general message. It may be meant to make viewers angry
about real cases of racial abuse, violence in real Borstals, real homelessness, or whatever. But a
viewer who also became angry about the fictional cases depicted in the play would be, well,
confused: like someone calling the police to the National Theatre to arrest Claudius for the murder
of Hamlet's father. And he would be confused in just the same way if he thought that the fictional
cases in the play were part of the statistics, the real-life evidence for the truth of the play's claims.
They can't be, of course. Hamlet doesn't figure in the regal homicide statistics. Fictions aren't
evidence, as courts of law rightly insist. Anyone caught presenting fiction as evidence in a law
court will be dealt with as a perjurer; and artistic merit is, rightly, no defence against a charge of
perjury. Presenting evidence on television - or on radio or in the cinema - is the prerogative of the
documentary, which like courts of law must earn that prerogative by eschewing fiction. And
television playwrights and producers using documentary or newsreel techniques to make viewers
think that their fictions are evidence for the truth of their plays are like policemen planting drugs on
pushers to get them convicted. No doubt, like such policemen, they do it from the best of motives
and to serve a good cause. They may not even do it in bad faith, because they may not realise that
since 'documentary' means evidence, which fiction can't be, the expression 'fictional documentary'
is a contradiction in terms. It's an abuse of words, like crying 'wolf'; and the objection to regularly
passing off polemical fiction as documentary is the same as the objection to crying 'wolf' - namely
that when the real thing, real evidence, real non-fictional documentary comes along, it may not be
recognised for what it is.
What's wrong with jargon?
Broadcast 26 February 1978 on BBC Radio 3.
One man's technical term is another man's jargon, as the saying is. The question is, what makes it
so? That technical work needs technical words will hardly be denied. Just when and why is their use
in place of common words jargon, and so, wrong? Common words are not sacrosanct, after all. I've
been talking about how philosophers - and others - may need to extend, refine or alter the meanings
of common words, either to cope with new discoveries or to help weed out false assumptions
underlying those meanings. But in many of the cases I've illustrated, what's wrong, or inadequate,
about a word makes very little odds to its common use. It may matter in some tricky cases what is
meant by calling a man 'dead'. But most dead people come out that way whatever that word may,
more precisely, be made to mean; and what if anything should be done about dead people is
likewise not often called into question by semantic quibbling. So usually it's as pointless as it would
doubtless be impracticable to try and persuade people to burden their use of such common words
with complicated connotations that rarely matter, even if when they do matter they matter a great
deal.
In practice what happens instead is that we all recognise what an eminent philosopher has called the
'division of linguistic labour'. Just as most of us can use cars well enough under normal conditions
without knowing how to exploit and cope with their more recherché capacities and limitations, so it
is with words. We recognise that there are the linguistic equivalents of rally drivers and car
designers and mechanics, to whom we can look for guidance on how to use common words in
uncommon situations. These experts are not necessarily, nor even usually, philosophers or compilers
of dictionaries. They may be medical men involved in settling the definition of clinical death; or
lawyers deciding whether hovercraft are ships; or the stationmaster in the old Punch cartoon laying
down that for the purposes of rail travel 'dogs is dogs and cats is dogs, but this 'ere tortoise is a
insect, so there's no charge for it.'
But if these various experts are to be able, when needed, to add to or to alter the common meanings
of words, they in turn need words with relatively uncommon meanings, in which they can specify
the additions or alterations required. To be able to say precisely how cold it is, for example, when
precision in that matter is called for, I may need to refer to air temperatures, to wind velocities, to
the intensity of the radiation from the sun... These are all, in varying degrees, technical terms;
they're needed to describe matters that may not in themselves be of general concern, but which do
combine to determine what is, namely how cold it is.
Such words as 'temperature' and 'radiation' aren't jargon at all when used where their special
meanings, extra to or just different from those of 'hot' and 'cold', matter and are meant. They, and
other technical words, become jargon only when they replace common words in common uses
where their extra or peculiar meanings don't matter and aren't meant. 'At a high temperature' doesn't
just mean what 'hot' means; and when used where only 'hot' is meant, it is jargon.
That jargon is a bad thing I've taken for granted; and some of the reasons why it is a bad thing need
no labouring by me. Jargon words are usually longer; their use is often a piece of linguistic
one-upmanship, offensively implying superior knowledge that is in fact irrelevant and is probably
absent. But the real threat of jargon is to the technical words themselves, whose meanings are apt to
be swamped by those of the common words they replace. 'At a high temperature' comes to mean no
more than 'hot'; 'under-developed', applied to countries, no more than 'poor'. The effect is to ruin the
technical word, just by making it common; and as that becomes so obvious that the word ceases
even to look technical, so new technical words are brought in and ruined in their turn: as
'developing' succeeded 'under-developed' in the euphemistic jargon of the so-called 'Third World'.
The real objection to jargon is not just that it fosters an illusion of precise meaning, but that it makes
its reality ever harder to achieve.
On being illogical
Broadcast 5 March 1978 on BBC Radio 3.
The French have a reputation for being logical which, as a distinguished logician of my
acquaintance once put it, is largely based on the fact that they number their suburbs instead of
calling them 'Hampstead'. The word 'logical' indeed must be one of the most misused words in the
English language. It's a classic example of the phenomenon I talked about last week, of a good
technical term being ruined by common usage. When someone is accused of being illogical, all
that's usually meant is that he's not being sensible; or not being as mathematically neat as his
opponent would like him to be, which is another thing again. Perhaps it isn't sensible, or
mathematically tidy, to prefer names to numbers, the 'rolling English road' to its straight Roman
counterpart, Fahrenheit to Centigrade, feet to metres, old shillings to new pence; but these
preferences are not proscribed by logic. Logic can cope just as well with miles as it can with
kilometres.
The word 'logic' is, as I say, a technical term; it means the study of reasoning, of argument in one
sense of that word. Do the conclusions of some argument really follow from its premises, its
assumptions? If they do, it's said to be 'logically valid', or just 'valid'; if they don't, it's invalid. But
whether the assumptions themselves, or the conclusions, are sensible ones, that isn't a matter of
logic (unless the argument happens to be about logic). There are valid arguments with silly
assumptions, like 'The moon's made of green cheese; so, it's green'; and there are invalid arguments
with sensible assumptions, like 'Violence is a bad thing; so, it shouldn't be shown on television'. Yet
it's almost always something about the assumptions or the conclusion of an argument that attracts a
layman's complaint of illogicality, and usually it has nothing to do with logic at all.
When logic, that is, the validity of an argument, really is in question, laymen are apt to find logic as
much of an irritant as a virtue. Often our pet assumptions lead logically to conclusions we don't like
at all; and then we're apt to complain of the fact, rather as one might resent having either to pay for,
or to return, one's favourite purchases. Suppose I admit, for example, that 'justice' means treating
people as they deserve, and that 'mercy' means treating them better than that. I still don't like the
obvious conclusion that no one can be at once both just and merciful, since if people get just what
they deserve they can't also get more than that. I take justice and mercy both to be virtues and, like
Portia in The Merchant of Venice, would prefer to be able to combine them. When logic says I can't,
the natural reaction is to disparage logic. That doesn't help, of course, any more than being rude to
the rent collector lets one off the rent, though no doubt it relieves one's feelings. But the rent still
has to be paid; and I still have to face the real, hard question, since mercy and justice are
incompatible as I conceive them, whether it's better to be unjustly merciful or to be mercilessly just.
Some will say that God at least is both just and merciful, being above mere human logic; but that's
not a good move for a believer, since all it does is provide premises for a valid argument that there
is no God.
Anyway, it wouldn't help the rest of us; and so we shuffle, we fudge, we extol the virtues of
illogicality, and generally decry the habit of pushing principles to their 'logical extremes'. What
laymen don't realise is that logic doesn't produce undesirable extremities, it only uncovers them.
Every conclusion of a valid argument is already there in its premises; and whoever doesn't like such
a conclusion still has to take it unless there's a premise he's prepared to give up. It's no use blaming
logic for the fact that cakes can't both be eaten and still had.
Real illogicality is undoubtedly a bad thing; but unlike most bad things, it's actually impossible to
push it to what I suppose might be called illogicality's 'logical extreme'. The extreme of illogicality
would be to believe an outright, obvious self-contradiction, like 'What's just is unjust'. I dare say
people have tried to believe things like that, but punning apart, it can't be done. It's logically
impossible to be that illogical - as anyone must admit who will admit at least that what's possible
can't also be impossible.
The Liar paradox
Broadcast 16 March 1978 on BBC Radio 3.
If a man says 'I'm lying', is he telling the truth or not? If he is, he is, as he says, lying; and so not
telling the truth. If he's not, then he's not lying, so he's telling the truth after all. If he is, he isn't; and
if he isn't, he is: a paradox. Paradoxes like this please some people, exasperate others, leave yet
others cold; but in any case to most people I suppose they're no more than parlour games, Christmas
puzzles. They're word games, of course; but even professional word users like novelists and poets
mostly see nothing but idle pastime in the study of paradox. Only some philosophers take them
seriously; why?
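The back-and-forth the paradox generates - if he is, he isn't; if he isn't, he is - can be caricatured in a few lines of code. This is only an illustrative sketch, on the crude assumption that 'I'm lying' can be modelled as a sentence whose truth value is the negation of whatever value it is assigned; the names `liar` and `history` are my own.

```python
# Toy model of the Liar: 'I'm lying' asserts its own falsity,
# so its truth value is the negation of whatever value we assign it.
def liar(value: bool) -> bool:
    """Evaluate the Liar sentence given an assumed truth value."""
    return not value

v = True          # suppose the man is telling the truth...
history = []
for _ in range(4):
    history.append(v)
    v = liar(v)   # ...then he is lying; then not; and so on

print(history)    # [True, False, True, False]: no stable answer
```

Repeated evaluation never settles on a fixed point: no assignment of 'true' or 'false' to the sentence is stable, which is the computational face of the paradox.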
Paradoxes like this one, the so-called 'Liar' paradox, obviously depend in some way on a play on
words. Now a paradoxical play on words may be trivial, as in The Pirates of Penzance paradox,
where young Frederick won't reach his twenty-first birthday until he's over eighty, because he was
born in a Leap Year on February the 29th. That curious discrepancy, between age and number of
birthdays, is no great matter even to a philosopher. But the Liar paradox can't be teased out so easily
or so cheaply as that. It's one thing to admit that customary arguments from birthdays to age and
vice versa aren't always valid; it's quite another to have to admit that a man who says 'I'm lying' isn't
telling the truth.
Why should that admission matter; and why should it matter especially to philosophers? Actually
the people it matters to especially are logicians, since they're concerned with the validity of
arguments, with what follows from what. That is, what conclusions must you admit to be true once
you've admitted the truth of my assumptions, the premises of my arguments? That's the logician's
business, and it's very important that the rules of a supposedly valid argument be truth-preserving,
that they never lead one from a true assumption to a false conclusion. And that's important not only
to logicians. We all rely on the truth-preserving qualities of standard arguments, even if we don't
know how they're truth-preserving; just as we rely on the life-preserving qualities of aircraft
designs, even if we don't know how they're life-preserving. But someone must work out how, in
order to keep a check on whether such-and-such design features are safe, whether such-and-such
arguments do preserve truth.
Paradoxes like the Liar matter because they show that important and seemingly safe arguments can
fail to be truth-preserving. In the case of the Liar, the argument is that if a man is doing what he
says he's doing, he's telling the truth; and that in turn is just a special case of the argument that if
anything is as someone says, then what he says is true. That argument is important because we all
rely on it all the time, whenever in fact we check up on the truth of someone's claims by looking to
see if things are as they say. Whether it be looking out of the window to check a weather report, or
looking into a pub to check the Good Beer Guide, we all use this argument a dozen times a day. And
what could be more indisputably valid - it's been said indeed that it's valid by definition - by
definition of the word 'true'.
Well, it doesn't work when what a man says he's doing is lying: if he's doing what he says when he
says he's lying he's not telling the truth at all - he's lying. In that case, the case of the Liar paradox,
the argument crashes; quite unexpectedly, it fails to preserve the truth of its premise, that the man's
doing what he says, in its conclusion, that what he says is true. So there must be something wrong
with the argument - and therefore with our whole concept of truth - after all. And as with an aircraft
crash, until we know exactly why the argument's failed, what the explanation of the Liar paradox is,
we can't patch up our concept of truth and replace the argument with one that can be safely used.
Only whereas we can always ground an aircraft until the designer's found the fault and put it right,
we can hardly ground an argument we use every time we check on the truth of what people say.
The most we can do is to steer clear of the area where we know it's apt to crash - namely where
people are making remarks about the truth of those very remarks. Logicians, on the other hand,
testing a new logical theory, will head straight for just that area; for the same reason that test-pilots
fly dangerously, to see if the new theory really is safe after all. Of course the rest of us shouldn't
follow them - unless we're playing parlour games where it doesn't matter if our arguments crash.
But we shouldn't suppose for that reason that paradoxes don't matter, and that logicians who study
them are themselves just playing parlour games.