A Beginning Course
In Modern Logic
Michael B. Kac
University of Minnesota
Contents
A Note to the Student
Introduction
Part I Getting Ready
1 Knowledge and Reasoning
2 Statements, Truth and Validity
3 Logic and Language
Part II Sentential Logic
4 Negation and Conjunction
5 Arguments Involving Negation and Conjunction
6 Conditionals, Biconditionals and Disjunction
7 Deduction in Sentential Logic
8 Indirect Validation
9 Lemmas
10 Truth Tables
11 Validity Revisited
12 Logical Relations
13 Symbolization
14 Evaluation of Natural Arguments
Part III Predicate Logic
15 Individuals and Properties
16 Quantification in Predicate Logic
17 Deduction With Quantifiers
18 Relations and Polyadic Predication
Part IV Further Topics
19 Logic and Switching Circuits
20 Sets and Set Algebra
21 Sentential Logic as Set Algebra
22 Summation and Epilogue
Appendix 1 Rules and Principles of Deductive Logic
Appendix 2 Solutions to Starred Problems
Acknowledgement
I would like to express my thanks to Rocco Altier, Jan Binder, and Jeffrey Kempenich, all of
whom have helped to make this text better than it was before their intervention; whatever
shortcomings remain are my fault, not theirs.
A Note to the Student
This course is likely to be different in a number of ways from others you have taken. One major
difference is that whereas in many courses your main job is to memorize information and then
regurgitate it at exam time, in this one the principal emphasis is on analysis and reasoning. There
are, to be sure, certain things you’ll have to learn and keep in memory for future use but the
emphasis is elsewhere. You will be frequently called upon to apply what you’ve learned to
situations you have not previously encountered and to use your own ingenuity in coping with
them. In this respect, the study of logic is a lot like mathematics. But that’s no accident: logic is a
mathematical discipline. This is so in two senses: logic is the underpinning of all mathematical
thinking, and it also has a structure which can only be studied from a mathematical point of
view.
It goes without saying that there will be times when the going will be difficult. Bear in mind,
however, that thousands of students have mastered this subject, and that this is true even of
students who, going in, didn’t think of themselves as much good at math. (I know what I’m talking
about, because I myself was one of those students.) This is one of those subjects which works on
what’s sometimes called the Lightning Principle: for a while it’s all dark and then there’s a flash
of light after which everything is clear. I can’t guarantee that you’ll have this experience, but
many people have had it who never thought they would.
Here are some suggestions that should prove helpful.
1. This course proceeds in cumulative fashion, everything building on what has come
before. For this reason IT IS ABSOLUTELY ESSENTIAL THAT YOU NOT FALL BEHIND. The loss of
even two days can be fatal.
2. Make sure you always have lots of paper on hand.
3. Always write in pencil rather than in ink. You’ll need to do a lot of erasing.
4. While it’s a good idea to take notes, primarily to assure that you’re paying attention to
what’s being said in the lectures, don’t get carried away. Your focus should always be on
understanding what’s being said, not on getting it down on paper.
5. If you’re having trouble, SEEK HELP IMMEDIATELY FROM THE INSTRUCTOR OR THE TEACHING
ASSISTANT FOR THE COURSE. (This is especially important toward the beginning.) Many
failures that could have been avoided come about from letting things go until it’s too late.
The same is true about asking questions in class. The only stupid question is the one you
DON’T ask, and you can be sure that there are others who would like to ask the same one.
6. Practice makes perfect, so you should practice a lot. To help you do so, you can find
answers in Appendix 2 for all problems marked with an asterisk. There are also a number
of problems that are worked in the text itself; these are referred to as Demonstration
Problems. The idea is to do a problem and then check your answer against the one given.
But try to resist the temptation to look at the answer until you have one of your own and
are reasonably confident that it’s correct, or until you are truly, absolutely stumped. When a
problem gives you difficulty, a good strategy is to work through the solution and then go
back to the problem later and try it again on your own.
7. Study in a group if you can. Several heads are always better than one. The best approach
is for everyone in the group to work on the same problem and then compare answers —
and not to look at the answer (if there’s one given) until you’ve compared the ones
you’ve come up with on your own.
8. Be persistent. It’s often darkest just before the dawn.
There’s one more important thing to remember. In logic, things are always done the way they
are for a reason and the reasons ultimately can be traced back to a small number of basic
principles. After a while you’ll find the same ideas coming back again and again, the only
difference being that they’re being applied in new ways to new situations. When you reach the
point where you begin to see that happen, you’re well on your way.
Introduction
At the table next to you as you are having lunch you overhear Ms. Smith and Mr. Jones having a
conversation about computers, including the following exchange:
Ms. Smith: I bought a Newtown Pippin yesterday.*
Mr. Jones: Was it expensive?
Ms. Smith: All Newtown Pippins are expensive.
Note that Ms. Smith does not answer Mr. Jones’s question with a simple ‘yes’ or ‘no’. Ms. Smith
expects Mr. Jones to make an INFERENCE from what she’s said — specifically, to infer that the
answer to his question is affirmative. If Mr. Jones succeeds in doing this (as he no doubt will
unless he’s a complete dolt) we say that he has DEDUCED the CONCLUSION that Ms. Smith has
purchased an expensive computer from the PREMISES (a) that Ms. Smith has purchased a
Newtown Pippin computer, and (b) that all Newtown Pippin computers are expensive. The
conclusion is said to FOLLOW LOGICALLY FROM the premises, or, equivalently, the premises are
said to LOGICALLY ENTAIL (or to IMPLY) the conclusion. Mr. Jones’s inference consists in
recognizing that this is so.
We make inferences constantly in daily life, often without being aware that we are doing so.
(Chances are, for example, that you recognized without having to give it any prolonged thought
that in our little dialogue Ms. Smith was implying that the answer to Mr. Jones’s question is
‘yes’. If so, you were making an inference, possibly without knowing it.) In other situations,
often rather specialized, we may be called upon to make inferences that are more difficult and
require considerable thought. Solving mathematical and scientific problems, for example,
typically requires a highly developed inferential capability.
The process of making inferences is NOT a mechanical one. While certain parts of the process
can be carried out according to a prescribed method, there are many cases in which you have to
use your own imagination in deciding how to proceed. Indeed, part of the purpose of this course
is to develop your imagination in the required way.
* The Newtown Pippin is a variety of apple — the fruit, not the computer. To my knowledge,
there is no actual make of computer so named.
Not all the inferences we make are correct. Furthermore, there are those who will try to dupe
us into making incorrect ones. A ploy much favored by such people is illustrated by the
following example. A television commercial tells us that all sorts of successful people drive
Whizbangs. The implication is that if you’re successful then you drive a Whizbang, and it’s the
advertiser’s hope that you will think ‘If I drive a Whizbang, it means I’m successful’. But the
inference you have made — if you’re gullible enough to have made it (and advertisers depend on
there being people who are that gullible) — is not a correct one. If, for example, you have just
lost all your money in the stock market you’re about as unsuccessful as it’s possible to be but
you won’t get your money back by going out and buying a Whizbang.
If the maker of the commercial wanted to play fair, then the slogan should be ‘If you drive a
Whizbang then you’re successful’. If you then infer ‘Aha, if I go out and buy a Whizbang then
I’ll be successful too’ you would be correct in your inference. Unfortunately, the advertiser is
almost certainly lying to you: there are no doubt many unsuccessful people driving Whizbangs.
And since advertisers are not permitted to lie in this way, they try to capitalize on your lack of
sophistication in making inferences instead.
In the fourth century B.C. the Greek philosopher Aristotle turned his formidable intellect to
the task of elucidating the underlying principles of correct inference, thereby founding the
subject that, following the practice of the ancient Greeks, we call LOGIC. Why should anyone
study this subject? I can think of five main reasons:
1. As the Whizbang example shows, if you aren’t careful unscrupulous people can mislead
you by tempting you to reason incorrectly. The study of logic is a good corrective to the
tendency to do so.
2. Many intellectual disciplines require the ability to reason logically — mathematics, the
natural and social sciences and philosophy among them. The better you are at such
reasoning, the easier it will be to master those disciplines.
3. We live in a technological universe which would not be possible without logic — not
only because scientists and engineers engage in logical reasoning to do their jobs but also
because logic is built into the technology itself. For example, complex electronic circuitry
of the kind used in computers incorporates the principles of logic into its very design and
computer programming is partly an exercise in logic.
4. Doing logic is good mental exercise, helping to develop precision and clarity of thought
as well as the ability to deal with abstractions and to undertake complex mental tasks.
5. The subject has an intrinsic beauty. Like music or architecture, it’s an art form.
Actually, I would personally give one more. It’s incumbent on an educated person to have
some knowledge of and appreciation for the major developments of human intellectual history.
Logic is one of these — perhaps the most important one. If you don’t know anything about it
then you’ve closed your mind off to any real understanding of what many people regard as the
most important single attribute of the human species: the ability to reason. If you’re willing to
grant that open minds are better than closed ones, then no further justification is needed.
New terms
inference
deduction
premise
conclusion
Part I
Getting Ready
1
Knowledge and Reasoning
We have three main ways of acquiring knowledge. One is by experience. For example, if you
want to know what steamed duck’s feet taste like, there is no sure way to find this out except to
eat some steamed duck’s feet.
Another way of acquiring knowledge is by reasoning. Mathematical knowledge is obtained
this way — when you try to solve a mathematical problem you do so by a process that goes on
entirely inside your own head. Suppose, for example, that your job is to solve for x in the
algebraic equation x + 3 = 5. You do it by applying a reasoning process which begins with the
observation that you can get x by itself on the left-hand side by subtracting 3 from both sides.
The next step is to actually do the subtraction, giving you x = 2.
The third and perhaps most common way to obtain knowledge is to combine the first two:
you learn certain things from experience but, once that knowledge is in hand, you make
inferences from it to learn other things. Here is a simple example. Suppose that you already
know that if steamed duck’s feet taste like swamp water then you won’t like them. Never having
tried them, however, you don’t know what they taste like. So you eat some and discover that they
do indeed taste like swamp water.* And from this you can infer that you don’t like them and,
armed with this new knowledge, you avoid them forever after.
This is an example of a kind of inferential process that is carried out quite easily — indeed,
people often make inferences like the one just described without even being aware that they’re
doing so. But sometimes the process becomes more subtle and more complex. Here is an
example.
Before us are three caskets one of which (but only one) contains a treasure. The caskets are
numbered I, II and III and on the lid of each is an inscription:
* At least that’s what they tasted like the one (and only) time I ever tried them.
I.
Lucky the one that chooseth me,
For here there doth a treasure be.
II.
Somewhere there is a treasure hid
But it is not beneath my lid.
III.
Somewhere beneath the shining sun
Is treasure found — but not in Casket Number One.
We are given one additional piece of information: at most one of the inscriptions is true. Is there
enough information here to enable us to find the treasure without actually opening the various
caskets and looking inside? Let’s see.
Before we undertake the task, let’s be sure that we understand exactly what we have been
told in advance. In particular, the statement that at most one of the inscriptions is true must be
clearly understood: it means that NO MORE THAN ONE IS TRUE, and IT IS ALSO POSSIBLE THAT ALL
THREE ARE FALSE. But the second of these possibilities is ruled out by the fact that inscriptions I
and III contradict each other: one of the two must be true.
Although it isn’t explicitly stated, we also know one further fact: there are only three
possibilities as to where the treasure can be: it’s in Casket I, Casket II or Casket III. Let’s then
consider each of these in turn.
Possibility 1. If the treasure is in Casket I then the inscription on its lid is true, and so is
the one on the lid of Casket II. But we have been told in advance that no more than one
inscription is true, so this possibility is eliminated.
Possibility 2. If the treasure is in Casket II then the inscriptions on both this casket and
Casket I are false. This is consistent with everything we’ve been told.
Possibility 3. If the treasure is in Casket III then the inscriptions on Casket II and Casket III are
both true. This again is inconsistent with what we have been told in advance, so this
possibility is eliminated.
Since Possibility 2 is consistent with all the given information, and is the ONLY possibility
that is consistent with it, we can accordingly infer that Casket II contains the treasure.
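For readers who happen to know a little programming, the same case analysis can be run mechanically. Here is a minimal Python sketch (our own illustration; the names inscriptions and viable are invented for it), assuming the treasure is in exactly one casket and keeping only the locations for which at most one inscription comes out true:

    # Truth value of each inscription, supposing the treasure is in casket c (1, 2 or 3).
    def inscriptions(c):
        i1 = (c == 1)        # I:   the treasure is in Casket I
        i2 = (c != 2)        # II:  the treasure is not in Casket II
        i3 = (c != 1)        # III: there is a treasure, but it is not in Casket I
        return [i1, i2, i3]

    # Keep only the locations consistent with "at most one inscription is true".
    viable = [c for c in (1, 2, 3) if sum(inscriptions(c)) <= 1]
    print(viable)            # [2]

Only Casket II survives the test, which is exactly the result of the case analysis above.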
Notice that consideration of each possibility requires making an inference. Furthermore, it
involves a special kind of reasoning called CONDITIONAL or HYPOTHETICAL reasoning. Reasoning
of this kind consists of saying ‘Suppose thus-and-such. If this is so then here are some other
things which must also be true.’ The key word here is suppose. We’re not saying that we know
for a fact that thus-and-such is indeed so. What we’re doing is saying that IF it’s so then so are
certain other things. IF the treasure is in Casket I, then more than one inscription is true; IF the
treasure is in Casket II then only the inscription on the lid of Casket III is true; and so on. Having
investigated the consequences of the various possible suppositions — or HYPOTHESES, or
ASSUMPTIONS, as suppositions are also often called — we then find that only the consequences of
the second hypothesis are consistent with everything else we know. Therefore we are compelled
to accept that as the only viable alternative.
Another very important word has appeared several times in this example and our discussion
of it: consistent. We say above that the consequences of the hypothesis that the treasure is in
Casket II are consistent with what we have been told in advance; similarly, the consequences of
the other two hypotheses are inconsistent with what we have been told. This then gives us a
general way of characterizing hypothetical reasoning: make an assumption and see if its
consequences are consistent with what you’ve been told previously. If not, then that assumption
is wrong. If the consequences are consistent with the previously given information, then the
assumption has a chance of being right. If it turns out to be the ONLY assumption consistent with
the previously given information and if that information is itself correct, then you must accept
the truth of the assumption. If more than one assumption turns out to be consistent with the
previously given information, and if that information is correct, then you will have to do
something more to decide between the remaining alternatives, though you may have narrowed
the number of available possibilities.
In the preceding paragraph we used the qualifying phrase if the (previously given)
information is correct. This is very important. Strictly speaking, we have no GUARANTEE that
there is a treasure in any of the caskets, let alone in Casket II. We could have been lied to about
that. Or we might have been deceived as to how many of the inscriptions are true. Perhaps the
treasure is in fact in Casket I, in which case two inscriptions are correct. In other words, we’re
back to hypothetical reasoning again: IF it’s truly the case that one of the caskets contains a
treasure and that at most one of the inscriptions is true, then we can find the treasure by the line
of reasoning that we’ve adopted. But since we have no assurance that these assumptions are true
the best we can do is to show that only one of the available possibilities is consistent with them.
This is in fact a very common situation. There are many times when we don’t know for sure
whether what we’ve been told is true. If we believe what we’ve been told, this may be nothing
more than an act of faith. Reasoning can’t justify that faith — all it can do is help us determine
what other things are consistent with what we’ve agreed, for whatever reason, to accept as true.
On the other hand, we can also learn that our faith was misplaced. Suppose we open Casket II
and find it empty. Then we know that we have been deceived.
The two ideas we have just discussed — hypothetical reasoning and consistency — are at the
heart of what we will be studying in this course. The goal is to see what things are (and aren’t)
consistent with prior assumptions; and sometimes it can be shown that there is something wrong
with the assumptions themselves because they are inconsistent with each other. Reasoning of this
general kind plays a role in many different situations and many different fields of study. But our
goal here is not to deal with the specifics of any one such — rather, it is to explore a kind of
intellectual process that plays a role in all of them regardless of the particular subject matter with
which they deal.
New Terms
conditional (hypothetical) reasoning
hypothesis
consistency
Problems
Here are four mathematical problems which can be solved even by people without much
mathematical background. Try to solve at least the first one. An answer for each problem is
given in the back of the book but try to avoid looking at it until you have made a serious effort
to solve the problem. The idea is not to test your problem-solving ability but to get your mind
working in the right way for this course.
*1. Imagine that you have assembled, by the bank of a river, a wolf, a goat and a cabbage and
that you are to ferry them across the river in a boat so small that you can only take one at a
time. The difficulty is this: if you leave the wolf and the goat unattended, the wolf will eat
the goat; and if you leave the goat and the cabbage unattended, the goat will eat the
cabbage. How can you get all three to the other side of the river in such a way as to never
leave the wolf and the goat alone together, nor the goat and the cabbage, while still ferrying
only one across at a time? (You may make any number of trips back and forth.)
*2. Suppose that you are given two containers, one with a capacity of 5 liters and another with
a capacity of 3 liters. There are no markings on either to indicate the point at which it
contains a specified amount less than its capacity. Show how, using just these containers,
you can measure out exactly 4 liters of water. (You can empty and refill a container any number
of times, and you can pour the contents of one into the other.)
*3. A certain square is such that the number of centimeters equal to its perimeter is the same as
the number of square centimeters equal to its area. What is the length of one side of this
square? (Assume that you are dealing exclusively with whole numbers.)
*4. [CHALLENGE] A certain rectangle which is not a square is such that the number of
centimeters equal to its perimeter is the same as the number of square centimeters equal to
its area. What are the respective lengths of the base and the height of this rectangle?
(Again, assume that you are dealing with whole numbers.)
2
Statements, Truth and Validity
A STATEMENT (also known as a PROPOSITION,* ASSERTION or DECLARATIVE SENTENCE) is a
linguistic expression which can be true or false. Some examples:
All Newtown Pippin computers are expensive.
When demand exceeds supply, prices tend to rise.
The angles of a triangle sum to a straight angle.
Some expressions which are not statements:
Are all Newtown Pippin computers expensive?
Run away!
the angles of a triangle
Consider now the following group of statements:
(1)
a. All Newtown Pippin computers are expensive.
b. The computer just purchased by Ms. Smith is a Newtown Pippin.
c. The computer just purchased by Ms. Smith is expensive.
* Some authors use this term to refer not to an actual statement but to its content. We will
normally use the term statement here.
The third statement bears a certain relationship to the first two — namely that (1a-b) jointly
entail (1c). Put in a different way, IT IS IMPOSSIBLE FOR THE STATEMENTS (1a-b) TO BOTH BE TRUE
AND FOR (1c) TO BE FALSE.
The statements in (1) together form what in logic is called an ARGUMENT.* The statements
other than the last one are called the PREMISES of the argument (from Latin praemissum, meaning
‘that which is put before’), and the last statement the CONCLUSION. An argument is said to be
VALID if, but only if, the conclusion is entailed by the premises. The argument in (1) is thus one
example of a valid argument.
Although it’s probably obvious why it is impossible for the premises of this argument to be
true and the conclusion false, let us take a moment to consider just why this is so. First, look at
the diagram in Fig. 2.1:
Fig. 2.1
Diagrams of this kind are called VENN DIAGRAMS (after the nineteenth century British logician
John Venn). The two overlapping circles represent sets: the left one one represents the set of
Newtown Pippin computers, the right one the set of things that are expensive. The area of
overlap (the region numbered 2 in the diagram) represents the set of things which are BOTH
Newtown Pippin computers and expensive. The other parts of the diagram correspond to those
things which are Newtown Pippin computers and are not expensive (region 1) and those things
which are expensive but are not Newtown Pippin computers (region 3). Consider now the
* This word is used in ordinary conversation in a number of different ways. For example,
sometimes it is used to mean ‘dispute’, as in ‘Sandy and I had an argument yesterday’. When we
use the term in this course, it will always be in the sense just explained. This will be true of a
number of other terms as well, which have special meanings in the context of the study of logic.
statement (1a). What this statement says is that the set of non-expensive Newtown Pippin
computers has no members — is ‘empty’ or ‘null’, in the language of set theory. (We will have
more to say about this idea in Chapter 20.) We will indicate that a set is empty by shading the
relevant portion of the diagram, as shown below.
Fig. 2.2
Now consider the statement (1b). Since the object purchased by Ms. Smith is a Newtown Pippin
computer, it must be somewhere in the left-hand circle; and since nothing is located in the shaded
area, it must be in the area of overlap, as shown in Fig. 2.3.
Fig. 2.3
But since everything in the area of overlap is in the set of expensive things, that means that the
computer purchased by Ms. Smith is one of these things. Hence (1c) must be true if (1a) and (1b)
are.
Here is a second way of demonstrating the validity of the argument. Suppose that we
consider (1b), the second premise, first. The associated diagram is as follows:
Fig. 2.4a
Here the branching (forked) arrow indicates that the computer is somewhere in the left-hand
circle, though we do not know in which of the two subregions: the statement (1b) speaks only to
the computer’s manufacturer, not its price. But now consider (1a) and recall that according to
this statement the extreme left area is empty. That means that the computer could not be in that
region of the diagram, which we indicate by placing an ‘X’ over the left fork of the arrow —
indicating that the associated possibility has been eliminated — as shown in Fig. 2.4b below.
Fig. 2.4b
Notice that while Fig. 2.4b looks a little different from Fig. 2.3, the two diagrams convey the
same information: that the region in which the computer purchased by Ms. Smith is located,
according to statements (1a-b), is the region of overlap.
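Another way to convince yourself that no counterexample exists is by brute force. The Python sketch below (our own illustration; the names pippin, expensive and smith are invented for it) tries every way of labelling the objects in a tiny three-object universe and looks for a situation in which both premises of (1) hold while the conclusion fails. A finite search of this sort only illustrates the definition of validity, of course, but for this form no universe of any size yields a counterexample.

    from itertools import product

    universe = [0, 1, 2]
    smith = 0                      # the computer purchased by Ms. Smith

    counterexample_found = False
    # each object independently is or is not a Pippin, and is or is not expensive
    for labels in product([(p, e) for p in (False, True) for e in (False, True)],
                          repeat=len(universe)):
        pippin    = {x: labels[x][0] for x in universe}
        expensive = {x: labels[x][1] for x in universe}
        premise1   = all(expensive[x] for x in universe if pippin[x])   # (1a)
        premise2   = pippin[smith]                                      # (1b)
        conclusion = expensive[smith]                                   # (1c)
        if premise1 and premise2 and not conclusion:
            counterexample_found = True
    print(counterexample_found)    # False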
Here now is another example of a valid argument:
(2)
a. No viral disease responds to antibiotics.
b. Gonorrhea responds to antibiotics.
c. Gonorrhea is not a viral disease.
To see that (2) is valid, we again resort to a Venn diagram:
Fig. 2.5
Alternatively, we could have begun with the second premise, which tells us that gonorrhea is
somewhere in the right-hand circle, either in the area of overlap or to the extreme right. The first
premise then eliminates the first of these possibilities since it tells us that the area of overlap is empty.
Consider now an example of an INVALID argument:
(3)
a. All Newtown Pippin computers are expensive.
b. The computer purchased by Ms. Smith is expensive.
c. The computer purchased by Ms. Smith is a Newtown Pippin.
The invalidity of this argument can be seen by consulting the diagram below.
Fig. 2.6
As in argument (1), the first premise tells us that the region consisting of non-expensive
Newtown Pippin computers is empty. The second premise, statement (3b), tells us that the
computer purchased by Ms. Smith is expensive, and therefore somewhere in the right-hand
circle. Hence, it’s possible that this computer is in the area of overlap (as indicated by arrow A),
and is thus a Newtown Pippin; but there is nothing in statement (3b) to assure us that this is the
ONLY possibility. For all we know, Ms. Smith has purchased an expensive computer from another
manufacturer (as indicated by the right tine of the fork, which is not X-ed out). So it’s possible
for the conclusion to be false even though the premises are true, which makes the argument
invalid.
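The same point can be made with a concrete counterexample. In the short Python sketch below (a made-up two-object situation of our own, in which the subset test <= plays the role of ‘all ... are ...’), both premises of (3) come out true while the conclusion comes out false, and that is all it takes to show that the form is invalid.

    pippin    = {"some_pippin"}                       # the Newtown Pippin computers
    expensive = {"some_pippin", "smiths_computer"}    # the expensive things

    premise1   = pippin <= expensive                  # all Newtown Pippins are expensive
    premise2   = "smiths_computer" in expensive       # Ms. Smith's computer is expensive
    conclusion = "smiths_computer" in pippin          # Ms. Smith's computer is a Newtown Pippin
    print(premise1, premise2, conclusion)             # True True False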
Let us now consider another example:
(4)
a. All national capitals are seaports.
b. Kansas City is a national capital.
c. Kansas City is a seaport.
Is this a valid argument? Yes it is. This might at first seem like an odd statement since both the
premises and the conclusion of this argument are false. Nonetheless, given the way we have
defined the term valid, (4) fits the definition. In case this is not immediately obvious, let’s
consider a little more closely just what the definition does — and does not — say. It says that in
a valid argument it’s impossible for the premises to all be true and the conclusion false. Notice
now that there is nothing in this statement which requires that any of the statements in the
argument — premises or conclusion — actually BE true. Rather, it says only that IF the premises
are all true then the conclusion must be as well. To put the matter in a slightly different way: in
an imagined alternative universe in which (4a-b) were true, (4c) would also have to be true.
What makes the argument valid is not that it tells us the truth about the world as it actually is, but
only that it tells us that were the world to be as depicted in (4a-b) then it would also have to be as
depicted in (4c). In other words we’re engaged here in hypothetical reasoning — just as we were
when we considered the various possibilities in the puzzle of the three caskets.
Another way of looking at the situation is to resort, as we have done previously, to diagrams.
Diagrammatically, if (4a) is presumed true then we have the situation shown below:
Fig. 2.7
and if (4b) is presumed true then we have the situation shown in the next diagram.
Fig. 2.8
But then (4c) would also have to be true as well since Fig. 2.8 shows Kansas City located in the
right circle, representing the set of seaports. The argument is accordingly valid for exactly the
same reason as (1) — which, it will be noted, it closely resembles.
Here’s another way of making the point. First, let’s consider a consequence of the definition
of validity, to wit: IF AN ARGUMENT IS VALID AND ITS CONCLUSION FALSE, THEN AT LEAST ONE OF ITS
PREMISES IS ALSO FALSE. In fact, a little thought will reveal that this is really just another way of
saying that it’s impossible, in a valid argument, to have premises which are all true and a
conclusion which is false — a restatement, in other words, of the original definition of validity.
So the definition of validity really tells us two things:
1. A valid argument is one in which the truth of the premises guarantees the truth of the conclusion.
2. A valid argument is one in which the falsity of the conclusion guarantees the falsity of at least one of the premises.
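That the two formulations really do say the same thing can be checked by running through every combination of truth values for two premises and a conclusion, as in the short Python sketch below (our own illustration; implies is a helper defined just for it).

    from itertools import product

    def implies(a, b):
        return (not a) or b       # "if a then b"

    # Formulation 1 and formulation 2 agree in every one of the eight cases.
    print(all(implies(p1 and p2, c) == implies(not c, not (p1 and p2))
              for p1, p2, c in product([True, False], repeat=3)))    # True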
We can now use statement 2 as a way of showing the validity of (4) by reasoning as follows.
Since (4c) is false we can draw the following diagram:
Fig. 2.9
Both forks are X-ed, indicating that Kansas City is not anywhere in the right-hand circle. There
are now two possibilities, as shown below:
Fig. 2.10
If Kansas City is not anywhere in the right-hand circle then it is in the non-overlap part of the
left-hand one OR in NEITHER of the two circles (as indicated by the leftmost forking branch, which
points, as it were, to ‘thin air’). If the former, then it’s not a seaport, which is inconsistent with
(4a); and if the latter, then it isn’t a national capital, which is inconsistent with (4b). In other
words, the falsity of (4c), the conclusion of the argument, tells us that one of the premises is false
also — so the argument is valid.*
Many beginning students of logic nonetheless find the point just made about validity
puzzling — even troubling. Nor is this feeling hard to understand. For even if you’re now
persuaded that (4) is valid, ACCORDING TO THE DEFINITION OF VALIDITY THAT WE HAVE GIVEN, that
leads only to another question, which is: why define validity this way? Doesn’t all of this simply
mean that there is something wrong with the definition? For surely if falsehood is bad and
validity good then how can three bad things (like the false statements of which (4) is composed)
* In this argument, both of the premises are false but that isn’t necessary. If an argument is valid
and its conclusion false, that tells us that the premises are not all true — in other words, at least
one of the premises is false — but it’s perfectly possible for only one of them to be false.
Constructing an example of such an argument is left as an exercise for you to try on your own.
add up to one good one — a valid argument? Let’s see what we can do to try to clarify what’s
going on.
First of all, there’s no question but that a person who concluded that (4c) is true on the basis
of a prior belief in the truth of (4a-b) would be making a serious mistake. But the error involved
isn’t one of REASONING — rather, it’s a lack of knowledge about certain facts pertaining to
geography. So the claim that (4) is a valid argument shouldn’t be taken as a claim to the effect
that there’s nothing whatsoever wrong with it — there’s just nothing LOGICALLY wrong with it.
The real issue is whether the premises would, IF TRUE, give enough information to justify
acceptance of the conclusion.
Now let’s come back to the definition of validity and consider what it’s supposed to
accomplish. Our interest is in justified inferences from premises. What makes such an inference
justified? Answer: THAT IN MAKING THE INFERENCE WE HAVE FOLLOWED A PROCEDURE WHICH, IF
APPLIED TO TRUE PREMISES, IS GUARANTEED TO YIELD A TRUE CONCLUSION: case in point, argument
(1). So the definition is in effect telling us that justified inferences ARE STRICTLY A MATTER OF
PROCEDURE. Since the procedure which gets us from the premises (4a-b) to the conclusion (4c) is
the same as the one which gets us from (1a-b) to (1c), at least insofar as procedural issues
are concerned there is nothing wrong with (4) even though there might be plenty of other things
wrong with it.
Here is an analogy that may help in understanding this point. A housing inspector is
interested in determining whether a certain house is in compliance with the building code. This
requires an evaluation of such things as its construction, the current state of the materials, the
wiring, the plumbing and so on. A house which satisfies the requirements of the code in all
respects could nonetheless be deficient in others. It might, for example, be painted a garishly
unattractive color or decorated and furnished in appallingly bad taste. It might therefore be a
house that you wouldn’t want to live in, but it would at least be SAFE to live in. A valid argument
is rather like a house that is properly constructed. You may not like either the premises of the
argument or the conclusion, just as you might dislike the exterior color or the furnishings of a
house; but if the argument is valid, then it’s at least constructed in such a way as to assure that
the truth of the premises guarantees the truth of the conclusion just as a house that is up to code
will at least not fall down around your ears. Like the building inspector, who doesn’t care about
the furnishings or the décor but is interested only in the physical details, we’re interested in
whether an argument has been put together in the appropriate way regardless of whether or not
we agree with any of the statements of which it consists.
But we can go farther: for it turns out that if we were to disallow arguments containing false
statements, we would severely limit our ability to make justified inferences. Indeed, there’s a
powerful method of reasoning which depends on the possibility of a valid argument having false
premises. The general strategy is as follows. To show that a statement is false, assume for the
sake of argument that it’s true and show that this assumption entails a falsehood. Hence the
assumption that the original statement is true becomes untenable. Here’s an example. Two
students studying for a zoology exam have a difference of opinion as to whether whales are fish.
One student, Pat, maintains that they are and the other, Sandy, seeks to persuade Pat otherwise
by the following argument. Let’s suppose, says Sandy, that whales are fish, as you claim. Now,
fish have gills, right? (Pat agrees.) So if whales are fish, then they have gills, right? Again Pat
agrees but suddenly remembers that the textbook says that whales do NOT have gills and is forced
to concede that Sandy is right — whales are not fish after all. Here’s an outline of what’s just
happened.
At issue: Are whales fish?
Sandy’s opening move: Suppose that whales are fish.
Further premise (accepted by Pat): If whales are fish, then they have gills.
Valid argument:
If whales are fish, then they have gills. (Premise —
agreed to by both Pat and Sandy)
Whales are fish. (Premise — assumed for sake of
argument)
Whales have gills. (Conclusion — recognized by Pat to be
false)
Since the conclusion of the argument is false, one of the premises must be false. The second
premise is agreed by both parties to be true so the first must be false. (You may find it helpful to
cast your mind back to the strategy used in the puzzle of the three caskets in Chapter 1.)
Later on we will see a variant of this strategy — called REDUCTIO AD ABSURDUM (Latin for
‘reduction to absurdity’) — which makes deliberate use of valid arguments with false premises
and false conclusions and which is one of the most common logical techniques used in
mathematical proof.
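The bookkeeping behind Sandy’s argument can also be laid out mechanically. In the sketch below (our own illustration, with F standing for ‘whales are fish’ and G for ‘whales have gills’), we keep only the combinations of truth values that respect both the agreed premise and the textbook’s report that whales lack gills; the lone survivor has F false.

    candidates = [(F, G)
                  for F in (True, False)
                  for G in (True, False)
                  if ((not F) or G)      # agreed premise: if whales are fish, they have gills
                  and not G]             # from the textbook: whales do not have gills
    print(candidates)                    # [(False, False)]: 'whales are fish' must be false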
In the case of (4), we had a valid argument in which false premises entail a false conclusion.
But it is also possible for false premises to entail a true conclusion, as in
(5)
a. All national capitals are on the Missouri River.
b. Kansas City is a national capital.
c. Kansas City is on the Missouri River.
Here, as in (4), both premises are false but the conclusion is true; furthermore, the conclusion can
be obtained from the premises by exactly the same procedure employed in regard to (1) and (4),
so this argument too is valid.
Another point of some importance is that in a valid argument with a false conclusion, not all
the premises need be false. Thus, compare
(6)
a. All cities on the Missouri River are national capitals.
b. Kansas City is on the Missouri River.
c. Kansas City is a national capital.
Here both the first premise and the conclusion are false, but the second premise is true.
An important point to bear in mind is that if valid reasoning is a matter of proper procedure,
then improper procedure will lead to invalidity EVEN IF THE PREMISES AND CONCLUSION OF AN
ARGUMENT ARE ALL TRUE. Thus, the following argument is invalid even though it contains only
true statements:
(7)
a. All state capitals are seats of government.
b. St. Paul, Minnesota is a seat of government.
c. St. Paul, Minnesota is a state capital.
This argument is the diametric opposite of (4): the geography is impeccable, but a person who
drew the conclusion (7c) on the basis of the premises (7a-b) would be committing a severe error
of reasoning. (Showing why this is so is left to you — see problem 10 at the end of this chapter.)
Let’s go back now to argument (1), which is valid, and note that it has the general structure
shown below.
a. All __A__ are __B__ .
b. __C__ is a(n) __A__ .
c. __C__ is __B__ .
Each blank is filled in by a grammatically appropriate word or phrase, and the idea is that if two
blanks are labelled with the same letter then they must be filled by the SAME word or
phrase. Thus, if we fill the blanks labelled A by Newtown Pippin computers, the ones labelled B
by expensive and the ones labelled C by the computer purchased by Ms. Smith then we obtain
argument (1). But we can show, by use of a Venn diagram, that ANY argument which conforms to
this general scheme is valid, no matter what choice is made for the fillers of the various blanks
(subject only to the requirement that they be grammatically appropriate). The diagram looks like
this:
Fig. 2.11
Now simply fill in for A, B and C whatever was filled into the corresponding blanks and you’ll
see that the result is always that statement (c) is true if (a) and (b) are.
Now, just for practice, we are going to consider some further arguments of a more
complicated kind. Here is another example.
(8)
a. All poodles are dogs.
b. All dogs are canines.
c. All poodles are canines.
To show that this is a valid argument we will again resort to Venn diagrams — except that in this
case we must use more complex ones in which three overlapping circles are involved. The reason
is that whereas in the examples considered up to now we had to worry about only two sets, in
this example we have to deal with three: the set of poodles, the set of dogs and the set of canines.
The first premise concerns the relationship between the first and the second of these, the second
premise the relationship between the second and the third, and the conclusion the relationship
between the first and the third. So our initial setup looks like this:
Fig. 2.12
The sets represented by the various numbered regions are as follows:
1. canines that are neither poodles nor dogs
2. canines that are poodles but not dogs
3. canines that are both poodles and dogs
4. canines that are dogs but not poodles
5. poodles that are neither canines nor dogs
6. poodles that are dogs but not canines
7. dogs that are neither poodles nor canines
To validate the argument we follow steps much like the ones that we followed before. Thus,
from (8a) we obtain
Fig. 2.13
and from (8b) we obtain
Fig. 2.14
The one region of the circle representing the set of poodles that is left unshaded is region 3, so all
poodles are in this area. But region 3 happens to also be in the circle representing the set of
canines. Hence there are no noncanine poodles — that is, all poodles are canines.
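The shading procedure itself can be mimicked in a few lines of Python (a sketch of our own, with P, D and C marking membership in the sets of poodles, dogs and canines). Each of the seven interior regions of the three-circle diagram is encoded by which of the circles it lies in; the premises shade certain regions, and we then ask whether any unshaded region could hold a noncanine poodle.

    # the 7 interior regions, encoded as (in P?, in D?, in C?)
    regions = {(P, D, C) for P in (0, 1) for D in (0, 1) for C in (0, 1)} - {(0, 0, 0)}

    shaded = {r for r in regions
              if (r[0] and not r[1])     # "All poodles are dogs" empties P-but-not-D regions
              or (r[1] and not r[2])}    # "All dogs are canines" empties D-but-not-C regions

    # the conclusion could fail only if some unshaded region held a poodle
    # that is not a canine; no such region remains
    print(any(r[0] and not r[2] for r in regions - shaded))    # False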
For Future Reference: To evaluate an argument in which two sets are involved — such as
(1) — a 2-circle diagram is sufficient. To evaluate an argument like (8), in which three sets
are involved, a 3-circle diagram is necessary.
Here is a valid argument that requires yet another extension of our diagrammatic method.
(9)
a. Some dogs are not poodles.
b. All dogs are canines.
c. Some canines are not poodles.
Notice that the first premise of this argument contains a word we have not encountered before:
some. For reasons that will become apparent in a moment, it’s better to defer consideration of
premises containing this word until premises containing the word all have been considered.
Hence, we begin by considering the second premise and obtain
Fig. 2.15
The first premise of (9) tells us that a certain region of the diagram in Fig. 2.15 is nonempty —
has at least one occupant. Whenever we wish to indicate that a region is nonempty we do so by
marking it with a plus sign:
Fig. 2.16
Note that the region marked is one whose occupants are dogs which are not poodles — so the
argument is valid.
Now, why did we consider the second premise first? Consider what would happen if we had
begun with the first premise. According to (9a), in the absence of the information provided by
(9b), we must consider three possibilities, as shown in the next figure:
Fig. 2.17 (Possibility 1, Possibility 2, Possibility 3)
But the second premise tells us that there are no noncanine dogs, which rules out both the second
and the third possibilities. Now, there is nothing wrong in principle with proceeding in this way,
but it clearly makes things more complicated. So, a word to the wise: IF AN ARGUMENT CONTAINS
TWO PREMISES ONE OF WHICH INVOLVES all WHILE THE OTHER INVOLVES some, CONSIDER THE ONE
CONTAINING all FIRST.
Here now is an example of an invalid argument involving both all and some.
(10) a. Some dogs are poodles.
b. All dogs are canines.
c. All poodles are canines.
Again we take the second premise first, since it contains all.
Fig. 2.18
From the first premise, we obtain
Fig. 2.19
According to the conclusion, we should have the following situation:
Fig. 2.20
But this is not what Fig. 2.19 shows. Although part of the relevant area is shaded in Fig. 2.19,
part of it remains unshaded. Fig. 2.19 is thus insufficient to rule out the possibility of noncanine
poodles.
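The region encoding used a moment ago for argument (8) handles the ‘+’ markings as well. In the self-contained sketch below (again our own illustration), the premise involving all shades regions first, just as recommended above, and the premise involving some then forces an occupant into one of the surviving regions. For (9) every surviving dog-but-not-poodle region lies inside the canine circle, so the conclusion follows; for (10) an unshaded poodle-but-not-canine region is still available, so the conclusion is not guaranteed.

    regions = {(P, D, C) for P in (0, 1) for D in (0, 1) for C in (0, 1)} - {(0, 0, 0)}
    live = {r for r in regions if not (r[1] and not r[2])}   # shade by "All dogs are canines"

    # (9): the occupant demanded by "Some dogs are not poodles" must sit in a live
    # region with D and not P, and every such region has C, so (9c) follows
    print(all(r[2] for r in live if r[1] and not r[0]))       # True

    # (10): "Some dogs are poodles" puts an occupant somewhere, but a live region
    # of noncanine poodles still exists, so (10c) is not guaranteed
    print(any(r[0] and not r[2] for r in live))               # True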
New Terms
validity
argument
statement
Venn diagram
Problems
1. Construct a valid argument on the model of (1) in which the first premise is false and the
second true. Then construct one in which the second premise is false and the first true.
*2. Draw Venn diagrams consistent with the following statements:
a. All those who bark are dogs.
b. Only dogs bark.
How do these diagrams differ?
*3. Is the following argument valid? Explain why or why not, using Venn diagrams to support
your answer.
a. Only dogs sing soprano.
b. Fido sings soprano.
c. Fido is a dog.
*4. Is the following argument valid? Explain why or why not, using Venn diagrams to support
your answer.
a. Only dogs sing soprano.
b. Fido is a dog.
c. Fido sings soprano.
5. Is the following argument valid? Explain why or why not, using Venn diagrams to support
your answer.
a. No cat barks.
b. Puff does not bark.
c. Puff is a cat.
6. Is the following argument valid? Explain why or why not, using Venn diagrams to support
your answer.
a. No dog sings soprano.
b. Fido is a dog.
c. Fido does not sing soprano.
7. The following argument is valid. Explain why, using Venn diagrams to support your answer.
a. Every politician is a crook.
b. Leo is not a crook.
c. Leo is not a politician.
8. The following argument is invalid. Explain why, using Venn diagrams to support your
answer.
a. Every politician is a crook.
b. Leo is not a politician.
c. Leo is not a crook.
*9. Here are two arguments:
Argument 1
a. Everything that is human and has a heartbeat is a person.
b. The fetus is human and has a heartbeat.
c. The fetus is a person.
Argument 2
a. Only something viable outside the womb is a person.
b. The fetus is not viable outside the womb.
c. The fetus is not a person.
There are four possibilities:
1. Both arguments are valid.
2. Both arguments are invalid.
3. Argument 1 is valid but not Argument 2.
4. Argument 2 is valid but not Argument 1.
Which of the foregoing is correct, and why?
10. Show the invalidity of argument (7) from this chapter. In doing so you might find it helpful
to consider the following equally invalid argument:
a. All state capitals are seats of government. (true)
b. Minneapolis, Minnesota is a seat of government. (true*)
c. Minneapolis, Minnesota is a state capital. (false)

* Minneapolis is the seat of Hennepin County.

*11. Is the following argument valid or invalid? Explain your answer.
a. All dogs are canine.
b. Some dogs are canine.

*12. Determine for each of the following whether it is valid or invalid, supporting your answer
by the use of Venn diagrams.
I. a. No dogs are felines.
b. Some mammals are dogs.
c. Some mammals are not felines.
II.
a. All dogs are canines.
b. Some mammals are dogs.
c. Some mammals are canines.
III.
a. All dogs are canines.
b. Some mammals are not dogs.
c. Some mammals are not canines.
13.
Determine for each of the following whether it is valid or invalid, supporting your answer
by the use of Venn diagrams.
I.
a. No dog is feline.
b. All poodles are dogs.
c. No poodle is feline.
II.
a. All dogs are canines.
b. Some mammals are not canines.
c. Some mammals are not dogs.
3
Logic and Language
Reasoning has both a private and a public side. Insofar as it is a mental process, it is private,
something that only the person engaging in it has access to. And although we have no way of
knowing exactly what is going on inside the head of another person, we do have a device at our
disposal by which one person can become aware of at least some aspects of the mental states of
another. This device is called language.
The importance of language for logic is that language is the vehicle by which we convey to
others the elements of our reasoning. An argument consists of statements, and statements are
linguistic objects. For this reason the study of logic can’t be separated from consideration of
certain facts about language.
Languages like English, French and Chinese are what are called NATURAL languages. They
are the ones that we use in the usual conduct of our daily lives and that we learn in the course of
the normal socialization process. But there are ‘non-natural’ languages as well — such as the
ones that we use to program computers. The practice has also developed among logicians of
using a special language that is particularly adapted to their purposes. Learning logic consists in
part of learning how to use this language, so we shall say a few words about it early on.
Let’s begin with an obvious question. Why make up a whole new language just for the
purpose of studying logic? What’s wrong with continuing to use a natural language like English?
The answer is that there’s nothing wrong IN PRINCIPLE with using natural language but for
many of the logician’s purposes there are a lot of difficulties IN PRACTICE with doing so. Let’s
consider just a couple of them.
As we saw from the preceding chapter, what makes an argument valid is a matter of
procedure. Depending on what language the argument is constructed in, the exact way in which
you implement this procedure will have to be tailored to the particular language you choose. For
one thing the vocabulary is different, but so is the grammatical structure. At a certain level,
however, the same principles apply regardless of what language you’re dealing with. So the
choice was made to create a new language, a sort of Esperanto, that would be common to all
logicians regardless of the natural language that they speak natively and which makes it possible
to give just one formulation of the procedures.
To see the second difficulty, consider first the following simple argument, with only one
premise.
(1)
a. Pat and Sandy left early.
b. Pat left early.
It’s easy to see that this argument is valid. Now consider another, closely similar, case:
(2)
a. Pat and Sandy or Lou left early.
b. Pat left early.
Is this a valid argument? It turns out that there’s no way to tell. The reason is that the premise,
(2a), is AMBIGUOUS — can be interpreted in more than one way. One possible way of interpreting
this statement is as saying that two people left early: Pat and another person who is either Sandy
or Lou, though we don’t know which. If we assume this interpretation, then if (2a) is true so is
(2b) and the argument is valid. But this is not the only way to interpret (2a), which could just as
well be understood as saying that there are two possibilities as to who left early: Pat and Sandy
on the one hand, Lou on the other. If the sentence is interpreted this way, then the argument is
not valid, for the premise gives us no way of knowing which of the two possibilities actually
occurred.
To a degree we can get around this difficulty — in writing, at any rate — by care in the use
of punctuation. So, for example, we could perhaps use commas to indicate which of the two
senses of (2a) we intend:
Pat, and Sandy or Lou, left early. (first interpretation)
Pat and Sandy, or Lou, left early. (second interpretation)
(I say ‘perhaps’ here because the punctuation intended to force the second interpretation seems
rather unnatural.) Unfortunately, there are cases where this option is not open to us. So consider
the argument
(3)
a. The older men and women left early.
b. The women left early.
Here again we have an ambiguous premise. One possible interpretation is that the older men and
ALL of the women left early, in which case (3b) is true. But there is another interpretation on
which it is the older men and also the older women who left early, in which case (3b) need not be
true since the premise tells us nothing about any of the other women. But there is no obvious
way to indicate the intended interpretation of (3a) by using punctuation.
Now, it needs to be pointed out that this situation presents us only with a complication, not
with an insuperable difficulty. For we could take examples like (2) and (3) as telling us that
arguments aren’t valid or invalid in any absolute sense, but valid or invalid relative to specific
interpretations assumed for the statements of which they consist. So, for example, we could say
that (3) is valid relative to the interpretation of (3a) according to which all the women left but not
relative to the other interpretation. But it would be simpler to just set things up so that we never
get ambiguity in the first place, and the special language used by logicians is set up to enable us
to do just that.
The device that we will use for avoiding ambiguities of the kind just illustrated is familiar
from another context, namely ordinary arithmetic — where problems of the same kind arise. For
example, suppose you are asked to compute the value of
3 + 5 × 9
Notice that there are two ways in which you could proceed: first add 3 and 5 and then multiply
the result by 9 — as shown in the diagram below.
Fig. 3.1
Suppose on the other hand that your first step is to multiply 5 by 9 and then to add 3 to the
result: in this case, your answer is different:
Fig. 3.2
Given the original expression, how are you to know in which order to carry out the addition and
multiplication operations? The answer is that you don’t know — the expression as given doesn’t
tell you. To pin it down precisely, you must use parentheses: if you write
(3 + 5) × 9
then this indicates that the operations are to be performed in the sequence shown in Fig. 3.1. If
you want the other sequence of operations, you must write
3 + (5 × 9)
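Carried out numerically (a quick check, using Python as a calculator), the two bracketings really do give different results; and the same device of parentheses settles the two readings of sentence (2a), here tried out on one made-up situation in which Pat stayed home but Lou left:

    print((3 + 5) * 9)     # 72: the order of operations in Fig. 3.1
    print(3 + (5 * 9))     # 48: the order of operations in Fig. 3.2

    pat_left, sandy_left, lou_left = False, False, True
    print(pat_left and (sandy_left or lou_left))    # False: 'Pat, and Sandy or Lou, left early'
    print((pat_left and sandy_left) or lou_left)    # True:  'Pat and Sandy, or Lou, left early'

Notice that on the second reading the sentence can be true even though Pat did not leave, which is exactly why argument (2) fails on that interpretation.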
There is yet another reason for the adoption of an artificial language, and it has to do with the
notion of validity. Recall that whether an argument is valid or invalid depends not on the truth of
either the premises or of the conclusion but on whether the premises (true or not) entail the
conclusion (true or not). Another way of saying the same thing is that the CONTENT of the
statements of which an argument is composed is irrelevant — we care only about the FORM of the
various statements, and of the argument as a whole. Here is a simple illustration. Both of the
following arguments are valid:
(4) a. Pat left early.
b. Lou left early.
c. Pat left early and Lou left early.
(5) a. Roses are red.
b. Violets are blue.
c. Roses are red and violets are blue.
In fact, you can always construct a valid argument by taking any two statements you want as
premises and then stringing them together with the word and to form the conclusion. Here is a
somewhat different way of saying the same thing. Let S and T be statements. Then a valid
argument can always be formed by arranging them as shown below.
a. S
b. T
c. S and T
In our artificially constructed language we will strip away all the nonessentials pertaining to the
content of statements, which means that in general we will not have any way of knowing
whether they are true or not. But that doesn’t matter as long as we have a way of knowing
when arguments in which these statements appear are valid or invalid.
The study of logic via the medium of artificially constructed languages is commonly termed
SYMBOLIC LOGIC — though, for reasons which will become clear later on, a better term would be
SCHEMATIC LOGIC (or, perhaps, ABSTRACT LOGIC). The particular languages that we will be
introducing here are slight variants of the ones created for the same purposes by Alfred North
Whitehead and Bertrand Russell in their celebrated work Principia Mathematica (1910-1913). It
is perhaps also worth noting that there is now a computer language called Prolog which has wide
currency in the artificial intelligence research community and which very closely resembles the
second of the two languages that we will introduce in this course.
New Terms
natural language
ambiguity
symbolic logic
Problems
1. Suppose that I wish to prove to you that my attorney is also a dry cleaner. I reason as follows:
All attorneys press suits and everyone who presses suits is a dry cleaner. Therefore my
attorney is a dry cleaner. What (if anything) is wrong with this argument?
*2.
The following English sentences are ambiguous. Describe the ambiguity in each case.
a. I believe you and Pat broke the window.
b. The woman who knows Pat believes Lou saw Sandy.
3. Here are two statements (slightly rewritten) that actually appeared in the news media:
a. The daughter of Queen Elizabeth and her horse finished third in the competition.
b. A convicted bank robber was recently sentenced to twenty years in Iowa.
Both of these sentences are ambiguous. Describe the ambiguity in each case.
4. The sentence All the people didn’t leave is ambiguous. Give paraphrases which indicate the
two meanings.
5. Give paraphrases that will show the two meanings of the sentence The logic final won’t be
difficult because the students are so brilliant.
6. Give paraphrases that show the two meanings of each of the following ambiguous sentences:
a. Mary is a friend of John and my sister.
b. Mary is a friend of John’s boss.
c. University regulations permit faculty only in the bar.
Part II
Sentential Logic
4
Negation and Conjunction
The validity or invalidity of the arguments presented in Chapter 2 always depends on words
like all, some and no. These words are called QUANTIFIERS and the associated principles of logical
inference make up a branch of the subject called (appropriately enough) QUANTIFICATIONAL
LOGIC. By contrast, the arguments (4) and (5) from Chapter 3 are examples of a type of argument
in which the conclusion consists of the two premises connected by the word and. The validity of
this argument doesn’t depend on anything inside the two sentences (such as the presence of a
particular quantifier), but only on the two sentences TAKEN AS WHOLES being combined in a
certain way to form the conclusion. The branch of logic that deals with such arguments is called
SENTENTIAL LOGIC (sometimes PROPOSITIONAL LOGIC or STATEMENT LOGIC) and which we shall
refer to by the abbreviation SL. In this chapter we begin introducing the special language which
will be used for SL and to which we shall give the name LSL — short for ‘language for
sentential logic’. We will return to quantificational logic in Part III.
There are two kinds of symbols in LSL: letters of the alphabet, such as S and T — called
LITERALS — and additional symbols, which we will introduce gradually, called SIGNS. Literals
represent statements whose specific content and internal structure we choose to ignore, while
combining literals with signs enables us to create larger statements out of smaller ones.*
Statements in this language are commonly referred to as FORMULAS. A formula consisting of just
a literal by itself is said to be SIMPLE; all other formulas are COMPOUND.
The truth or falsity of a statement is called its TRUTH VALUE. If a statement is true, it is said to
have a truth value of 1, and if false to have a truth value of 0.
Pick a formula at random, say S. Then we can create a new (compound) formula in our
language by prefixing the symbol ¬ to it to form ¬ S. The symbol ¬ is called the NEGATION SIGN
* In principle it is necessary to make provision for an infinite number of literals. We do this by allowing not only each letter of the alphabet to count as a literal but also any sequence of repeated letters. Thus not only is S a literal of LSL, so are SS, SSS, SSSS and so on.
or NEGATION OPERATOR and the formula ¬ S is called a NEGATION of S. The following rule tells us
how to interpret the negation sign:
a. If S is true (has a truth value of 1) then ¬ S is false (has a truth value of 0).
b. If S is false (has a truth value of 0) then ¬ S is true (has a truth value of 1).
English-speaking logicians often read the negation sign, for convenience, as the English word
not and ¬ S as ‘Not S’. Sometimes they will use the more prolix ‘It is not the case that S’.
Notice that our rule for forming negations says that we can do so by prefixing ¬ to any
formula whatsoever. Thus ¬¬S is also a formula/statement, as are ¬¬¬S, ¬¬¬¬S and so on. Nor
does the choice of literal matter: so ¬T, ¬¬T, ¬¬¬T and so on are also all statements in LSL.
Formulas containing large numbers of negation signs can be simplified by means of the
following rule: successive occurrences of ¬ ‘cancel out’ — for example, ¬¬S and S have the
same truth value. For suppose S is true; then ¬S is false and ¬¬S is accordingly true. Similarly, if
S is false, then ¬S is true and ¬¬S is false. So we can always replace ¬¬S by S (or vice-versa)
knowing that we have not changed anything crucial. We henceforth refer to this as the
CANCELLATION RULE (or CR).
Now pick two formulas, say S and T. Then we can create a new formula by connecting S and
T via the sign Ù: S Ù T. This compound formula is called a CONJUNCTION of S with T and is
interpreted according to the following rule:
a. If S is false, S Ù T is false;
b. if T is false, then S Ù T is false;
c. otherwise, S Ù T is true.
In other words, the conjunction of two statements is true if, BUT ONLY IF, both of the conjoined
statements (or CONJUNCTS, as they are commonly called) are true. Or, to put the same thing a
different way, a conjunction is false if, but only if, at least one of the conjuncts is false. That is,
to say that S Ù T is false is to say that:
a. S alone is false; or
b. T alone is false; or
c. both are false.
It accordingly follows that the conjunction of a false statement with any other statement is
always false.
The sign Ù is called, not surprisingly, the CONJUNCTION SIGN (also, because of its shape, the
INVERTED WEDGE). This sign is often read, again for convenience, as and by English-speaking
logicians. The reason is that in English when two statements are combined by and, as in
(1)
Ms. Smith owns a Newtown Pippin computer and Mr. Jones owns one too.
the entire statement is true if, and only if, both of the combined statements are true.
Deciding what truth value to associate with a literal is completely arbitrary. Once you have
made a decision, however, then certain other things follow. Suppose, for example, that we have
decided to associate the value 1 with the literal S and 0 with the literal T. Then the compound
formula S Ù ¬ T has the truth value 1. This is so because our rule for interpreting the negation
sign tells us that if T is false then ¬ T is true — which means that S Ù ¬ T is a conjunction of true
statements — while our rule for interpreting the conjunction sign tells us that a conjunction of
true statements is itself true. Diagrammatically:
Fig. 4.1
As in the examples involving arithmetic from Chapter 3, the diagram shows the order in which
we carry out the computations, except that now we’re computing the truth values of compound
statements based on the previously assumed truth values of their literals. The first step is to
compute the value of ¬ T, which is 1 since T has a truth value of 0. We then compute the value
of the entire statement, which is also 1 since two true statements are linked by the conjunction
sign.
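If you prefer to see this computation carried out mechanically, the following minimal Python sketch mirrors Fig. 4.1, using 1 for true and 0 for false; the helper names neg and conj are ad hoc choices, not official notation.

    # 1 stands for true and 0 for false, as in the chapter.
    def neg(p):
        return 1 - p                 # negation flips 1 to 0 and 0 to 1

    def conj(p, q):
        return 1 if p == 1 and q == 1 else 0    # true only when both conjuncts are true

    S, T = 1, 0                      # the truth values assumed in the text
    print(conj(S, neg(T)))           # prints 1, the value computed in Fig. 4.1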
In the example just given, there is no problem about determining the order in which the
computations are supposed to be performed. But now consider a formula like ¬ S Ù T. There are
two possibilities: first evaluate ¬ S and then compute the value for the entire formula or first
compute the value of S Ù T and then determine the effect of applying the negation sign to it.
Which order we choose can make a difference. Suppose, for example, that S and T are both false.
The results of the two different orders of evaluation are shown below.
Fig. 4.2
Fig. 4.3
Note that the final truth values are different in the two cases.
We resolve this ambiguity by adoption of the following convention: if the negation sign
directly precedes a literal then it applies ONLY to that literal. If we mean a formula to be so
interpreted that a negation sign applies to a larger ‘chunk’ of material, then we enclose the chunk
in parentheses, as in ¬ (S Ù T). Hence, leaving our example formula without parentheses tells us
that we are to evaluate in the manner shown in Fig. 4.2; placing parentheses around the
conjunction tells us that we are to evaluate the conjunction first and then the negation thereof —
as shown in Fig. 4.3. In a more complex case, like ¬ (S Ù T) Ù U, ¬ still applies only to S Ù T —
in other words, ¬ always applies to the SHORTEST applicable part of a formula.
The portion of a formula to which a given sign is intended to apply is called the SCOPE of the
sign. In the formula ¬ S Ù T the scope of the negation sign is the literal S, and the scope of the
conjunction sign consists of ¬ S on the left and T on the right. In the parenthesized version of the
formula, ¬(S Ù T), the scope of the conjunction sign consists of T on the right and S on the left,
and that of the negation sign consists of the compound formula S Ù T.
In some cases, parenthesization does not matter and can accordingly be omitted. For
example, no matter how we parenthesize the formula S Ù T Ù U, we will always get the same
results. This is so because according to our rule for interpreting conjunctions, this formula is true
if all three literals are true, but only if this is so. It thus doesn’t matter whether we start by
computing the value of S Ù T and then that of the entire formula or whether we first compute the
value of T Ù U and then that of the entire formula. This fact about conjunction is called the LAW
OF ASSOCIATIVITY (or the ASSOCIATIVE LAW) for conjunction.
Warning. Although there are cases where parenthesization does not matter, it is clear that there are many cases in which it does. Consequently, YOU MAY OMIT PARENTHESES ONLY IN CASES WHERE YOU HAVE BEEN EXPLICITLY TOLD THAT IT DOES NOT MATTER — AS IN THE CASE ABOVE. And if you are ever in doubt as to whether it matters or not, ALWAYS ASSUME THAT IT DOES. You can’t go wrong by putting parentheses in, but you CAN go wrong by leaving them out.
Although the truth value of a compound formula depends on the truth values assumed for its
literals, we noted earlier that truth values are assigned to literals in a completely arbitrary
fashion. That is, we may choose whatever truth value we wish for a given literal, subject only to
the restriction that if there are multiple occurrences of that literal in a formula then the literal has
the SAME truth value in ALL occurrences. So, for example, in the formula ¬ (S Ù T) Ù ¬S, we are
free to select 0 or 1 as the truth value for S, although having selected one or the other we are
required to assume that this value applies in both places where the literal occurs. On the other
hand, the choice we make for T is completely independent of the choice we make for S: we
could, for example, assume S to be true and T false or vice-versa. There are, in fact, four
possibilities:
both S and T are false
S is true and T is false
S is false and T is true
both S and T are true
What we do NOT allow is the assumption, say, that T is false and S is true in its first occurrence
in the formula but not its second. So, for example, if we assume that S is true and T is false then
the computation of the truth value for the entire formula has to look like this:
We can now describe some simple procedures for constructing valid arguments involving
negations and conjunctions. Example: given S Ù T as a premise we can validly infer the
conclusion S. For if S Ù T is true then neither S nor T is false; hence S must be true. In the next
chapter we will look in more detail at arguments in LSL in which negation and conjunction are
involved.
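As a supplement to the discussion above, here is a small Python sketch that runs through all four combinations of truth values for S and T and computes the value of ¬ (S Ù T) Ù ¬S in each case; the formula is simply written out with Python’s own not and and.

    from itertools import product

    # Evaluate the formula for every combination of truth values of S and T.
    for S, T in product([False, True], repeat=2):
        value = (not (S and T)) and (not S)
        print(S, T, value)
    # With S true and T false (the assumption in the text) the formula comes out false.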
New Terms
literal
sign
formula
simple
compound
truth value
negation
conjunction
conjunct
scope
associativity
Problems
*1.
Suppose that S and T are false and U is true. Compute the truth values for ¬(S Ù T) Ù U
and for ¬S Ù T Ù U. Show the steps in the computations via diagrams like the ones used in
the chapter.
2. Suppose that S is false and T and U are both true. Compute the truth values for each of the
following via diagrams like the ones used in the chapter.
a. ¬(S Ù T) Ù U
b. ¬S Ù T Ù U
c. S Ù T Ù ¬ U
d. ¬ S Ù ¬ T Ù ¬U
e. ¬ (S Ù T Ù U)
*3.
For each of the formulas in the previous problem, find a combination of truth values for S,
T and U which makes the formula as a whole come out with the opposite truth value of the
one it has given the values assumed previously.
4. Find a combination of truth values for S, T and U such that DIFFERENT truth values are
obtained for S Ù ¬T Ù U and for S Ù ¬(T Ù U). Then find a combination of truth values such
that the two formulas have the SAME truth value. Explain your answer, making use of
diagrams to show the steps in the various computations involved.
5. In the early years of the twentieth century, a group of Polish mathematicians developed an
alternative symbolism for negation and conjunction which enables parentheses to be
dispensed with. In this ‘Polish notation’ as it is commonly called, negations are formed the
same way as in LSL but conjunctions are formed in a different way. Instead of writing S Ù T,
in Polish notation we would write ÙST. The understanding is that the negation sign always
takes THE SMALLEST COMPLETE FORMULA TO ITS IMMEDIATE RIGHT as its scope while the
conjunction sign takes the FIRST TWO SUCH FORMULAS TO ITS RIGHT as its scope. The
differences that are represented in LSL by parenthesization are represented in Polish notation
by the positions in which signs appear. Thus ¬ÙST corresponds to ¬(S Ù T) while Ù¬ST
corresponds to ¬S Ù T.
Here now are some examples of how to translate between LSL and Polish notation. Suppose
first that we are given the LSL formula
S Ù (¬T Ù U)
The best way to do the translation is in a series of steps beginning with the original and with
some intermediate stages which mix the two systems. The original is the conjunction of S
with ¬T Ù U so we start by pulling out the first conjunction sign:
ÙS[¬T Ù U]
where the brackets indicate the part of the formula that’s still untranslated. So now we go to
work on the untranslated part, which is the conjunction of ¬T and U; consequently we again
pull out the conjunction sign to obtain
ÙSÙ¬TU
Now let’s consider the process in the opposite direction. Since the Polish formula begins with
a conjunction sign, that tells us that the formula as a whole is a conjunction; the task then is
to determine what the conjuncts are. The first one is clearly S, so the second one is the
remainder of the formula. We thus obtain
S Ù [Ù¬TU]
The bracketed part is again a conjunction, the conjuncts being ¬T on the one hand and U on
the other. This then gives us
S Ù ¬T Ù U
which conforms fully to the rules for constructing compound statements in LSL.
Suppose on the other hand that we are given the LSL formula S Ù ¬(T Ù U). As before we
begin by pulling out the first conjunction sign:
ÙS[¬(T Ù U)]
Since the scope of the negation sign is the entire conjunction T Ù U, we translate the
conjunction and obtain
ÙS¬ÙTU
To translate the other way, we first note that the formula as a whole is the conjunction of S
with the remainder:
S Ù [¬ÙTU]
The bracketed part of the resulting formula is the negation of the conjunction of T and U, so
we obtain
S Ù ¬(T Ù U)
On the basis of this information, do the following:
A. Translate the following LSL formulas into Polish notation:
a. S Ù ¬T Ù U
b. S Ù ¬(T Ù U)
c. ¬(S Ù ¬T)
B. Now translate the following formulas in Polish notation into LSL:
a. ÙS¬T
b. Ù¬ÙSTU
c. Ù¬SÙTU
d. ¬ÙSÙTU
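For those who enjoy programming, here is a rough Python sketch of the Polish-to-LSL translation just described. It uses ASCII stand-ins of my own choosing ('-' for the negation sign, '&' for the conjunction sign, capital letters for literals), so it is an illustration of the idea rather than a required part of the problem.

    def polish_to_infix(formula):
        def parse(i):
            # Return the translation of the smallest complete formula starting at
            # position i, together with the position of the next unread symbol.
            sym = formula[i]
            if sym == '-':                       # negation takes one formula to its right
                inner, j = parse(i + 1)
                if len(inner) > 1 and not inner.startswith('('):
                    inner = '(' + inner + ')'
                return '-' + inner, j
            if sym == '&':                       # conjunction takes two formulas to its right
                left, j = parse(i + 1)
                right, k = parse(j)
                return '(' + left + ' & ' + right + ')', k
            return sym, i + 1                    # a literal translates as itself

        result, consumed = parse(0)
        assert consumed == len(formula), "symbols left over"
        return result

    print(polish_to_infix('&S&-TU'))    # (S & (-T & U))
    print(polish_to_infix('&S-&TU'))    # (S & -(T & U))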
5
Arguments Involving Negation and Conjunction
We’ve already noted that given any two statements whatsoever, the argument consisting of these
statements as premises and their conjunction as the conclusion is valid. We can express this fact
in symbolic terms by means of what is called a VALID ARGUMENT SCHEMA (plural SCHEMATA):
(1)
S
T____
SÙT
(Valid)
In the schema, the symbol representing the conclusion appears below the line; thus, anything
appearing above the line stands for a premise. The word Valid which appears in parentheses after
the conclusion tells us that it is legitimate to infer the conclusion from the premise(s). We may
think of the LSL literals S and T as representing arbitrarily selected statements (recall that we
can assign truth values to literals in an arbitrary fashion) and of the schema as telling us that in
the event that we should assign the value 1 to each of the premises then (because of the rule for
interpreting the conjunction sign) we are required to do so for the conclusion as well. Here now
is another example:
(2)
SÙT
S
(Valid)
The validity of (1-2) is obvious enough not to require extensive comment. Now consider two
more examples that are a little less obvious:
(3)
¬(S Ù ¬T)
S__________
T
(Valid)

(4)
S__________
¬(¬S Ù ¬T)
(Valid)
To see the validity of (3), consider first what conditions must be met in order for the first
premise to be true. Since the statement is the negation of a conjunction then the conjunction itself
must be false, which in turn means that S or ¬T must be false. But the second premise tells us
that it isn’t S, so it must be ¬T. But if ¬T is false, then T is true. We can lay out the steps in this
reasoning process as follows:
1. Assume that the premises are true.
2. Since ¬(S Ù ¬T) is one of the premises, it’s true. Hence, S Ù ¬T is false.
3. If S Ù ¬T is false there are three possibilities:
a. S is false and ¬T true;
b. S is true and ¬T false;
c. both S and ¬T are false.
4. But S is a premise itself, so it’s true, eliminating both possibility (a) and
possibility (c). We are thus left only with (b).
5. If ¬T is false then T is true.
Recall now from Chapter 2 that if an argument is valid and its conclusion false, then one of
its premises must be false. We could thus also show the validity of (3) by working in the
opposite direction — that is, by showing that if the conclusion is false, one of the premises must
be as well. Here’s what happens if we try it in this way:
1. Assume the conclusion is false.
2. ¬T is thus true.
3. If S (one of the premises) is true then S Ù ¬T is true, so ¬(S Ù ¬T) is false. But this is
a premise.
4. If ¬(S Ù ¬T) is true then S Ù ¬T is false. But since ¬T is true (see 2 above), S is false.
But S is a premise.
What this line of reasoning shows us is that if the conclusion is false then if either of the
premises is true the other is false. In other words, it’s impossible for the premises to both be true
if the conclusion is false, and the schema is accordingly valid.
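The reasoning above can also be confirmed by brute force: simply run through every assignment of truth values and check that none makes the premises true and the conclusion false. Here is a minimal Python sketch of such a check for schema (3); the helper name valid is an arbitrary choice.

    from itertools import product

    def valid(premises, conclusion):
        # A schema is valid when no assignment makes every premise true and the
        # conclusion false.
        return all(conclusion(S, T)
                   for S, T in product([False, True], repeat=2)
                   if all(p(S, T) for p in premises))

    premise_1 = lambda S, T: not (S and not T)    # the first premise of (3)
    premise_2 = lambda S, T: S                    # the second premise of (3)
    conclusion = lambda S, T: T                   # the conclusion of (3)

    print(valid([premise_1, premise_2], conclusion))    # True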
Which of the two approaches to choose — working from the premises to the conclusion or
from the negation of the conclusion to the negation of one of the premises — is entirely a matter
of personal preference; do whatever is easiest given the problem to be solved.
Now let’s look at (4). Here are two ways to proceed:
1. Assume that S is true.
2. Therefore, ¬S is false.
3. Hence, ¬S Ù ¬T is false and its negation is accordingly true. But ¬(¬S Ù ¬T) is the conclusion.
Alternatively, we could do the following:
1. Assume that ¬(¬S Ù ¬T) is false.
2. Hence ¬S Ù ¬T is true.
3. But then ¬S is true and S is accordingly false. But S is the premise.
Before continuing we pause for a moment to point out that it follows from the definition of
validity that if a schema with a single premise, like (2) or (4), is valid, then so is the schema
obtained by taking the negation of the conclusion of the original as premise and the negation of
the premise of the original as conclusion. Thus, corresponding to (2) and (4) we also have:
(5)
a.
¬S______
¬(S Ù T)
(Valid)

b.
¬¬(¬S Ù ¬T)
¬S
(Valid)
Here now are two examples of argument schemata which are NOT valid.
(6)
S____
SÙT
(Invalid)

(7)
¬(S Ù T)
¬S
(Invalid)
The invalidity of (6) is due to the fact that the truth of one of the conjoined statements in a
conjunction isn’t enough to make the whole conjunction true: it is necessary that BOTH be true.
But T doesn’t appear as a premise in the schema, so its truth cannot be concluded from the
information given. In the case of (7) it suffices to observe that if a conjunction is false then one
of the conjoined statements is false, but it need not necessarily be the first one.
In some cases where an argument contains a single premise it is possible to validly infer
either from the other — that is, the two sentences are MUTUALLY ENTAILING. For example, from
¬¬S we may infer S and vice-versa. In such cases we say that the premise and the conclusion are
LOGICALLY EQUIVALENT (or just EQUIVALENT). We will distinguish such cases by using a double
line to separate the premise and the conclusion:
(8)
¬¬S
S
(Valid)
(This is just a restatement of the Cancellation Rule (CR) from the preceding chapter.) Here is
another example, called the LAW OF COMMUTATIVITY or COMMUTATIVE LAW for conjunction:
(9)
SÙT
TÙS
(Valid)
This schema tells us that the truth value of a conjunction doesn’t depend on the order of the
formulas conjoined — which should be clear enough when it is considered that a conjunction is
true if the conjoined formulas are both true and false otherwise, regardless of the order in which
they occur.
Note that we can also state the Associative Law for conjunction as an equivalence schema:
(10) (S Ù T) Ù U
S Ù (T Ù U)
Demonstration Problems
1. Show that the schema
¬(¬S Ù ¬T)
¬S_______
T
(Valid)
is valid.
Solution. If the premises are assumed to be true then ¬S Ù ¬T is false. Hence ¬S is false, ¬T
is false or both are false. But ¬S is a premise, hence true; consequently ¬T is false. But then
T is true.
Alternate solution. Suppose that T is false. Then ¬T is true. But then it is impossible for
both of the premises to be true. Suppose that the first is true. Then ¬S Ù ¬T is false, which
means that ¬S (the second premise) is false. Suppose that the second premise is true. Then
the conjunction ¬S Ù ¬T is true and its negation — the first premise — is false.
Note. In principle you can show that a schema is valid in either of these two ways. In this
particular case, the first way is a little easier but sometimes the second way is easier.
2. Show that the schema
¬(S Ù T)_
¬S Ù ¬T
is INVALID.
Solution. Suppose that the premise is true. Then S Ù T is false. This in turn means that one of
three things is true:
a. S is true and T is false;
b. S is false and T is true;
c. both S and T are false.
It is thus certainly possible that (c) is true, in which case the conclusion is true; but there are
two other possibilities as well, neither of which is ruled out by the premise. Hence it’s
possible for the premise to be true and the conclusion false.
For Future Reference: As the foregoing shows, the negation of a conjunction of statements
does NOT entail the negations of BOTH conjuncts. It entails only that AT LEAST ONE of the
conjuncts is false.
3. Show that the schema
¬(S Ù T)______
¬(¬¬S Ù ¬¬T)
is valid.
Solution. By CR ¬¬S is equivalent to S and ¬¬T is equivalent to T, so the conclusion is
equivalent to the premise. Hence the premise entails the conclusion.
Remark. Notice that we have actually shown something stronger than what was asked for,
namely that the premise and the conclusion in the schema are equivalent. It follows from the
equivalence that the premise entails the conclusion, and that the argument is accordingly
valid — which was what was to be shown.
You might have noticed that in the solution to the last demonstration problem we made an
implicit assumption: that if CR applies to the formulas S and ¬¬S it applies to ANY pair of
formulas that can be similarly related — such as T and ¬¬T. Similarly, we assumed in the first
problem that if the rule for interpreting the conjunction sign operates as it does in the formula S
Ù T it will operate the same way in ANY formula in which it appears — that is, no matter what the
actual conjuncts are, the conjunction as a whole is true if both conjuncts are true but not if either
or both of them is false. In so doing, we have made use of an important principle called the RULE
OF UNIFORM SUBSTITUTION (henceforth called US), which we now state explicitly:
In a valid argument schema, any formula can be substituted for a given literal without
changing the validity of the schema provided that the new formula replaces ALL OCCURRENCES
OF THAT LITERAL.
So, for example, if we apply US in CR, substituting T for S, then we obtain the equivalence
schema
¬¬T
T
Here is an example. We remarked earlier that the schema (4), here repeated as (11), is valid:
(11) S__________
¬(¬S Ù ¬T)
Hence, according to US, so is
(12) R_________
¬(¬R Ù ¬T)
since R replaces all the occurrences of S and no other literals. But the formula by which we
replace the occurrences of a literal need not itself be a literal: it can be a compound formula.
Consequently, the following schema is also valid:
(13) R Ù U
¬(¬(R Ù U) Ù ¬T)
The idea behind the rule is that since arguments are valid or invalid because of their form rather
than because of the content of the statements of which they’re composed, no change in an
argument which preserves its form will affect its validity (or lack thereof). This means among
other things that if we already know that a schema is valid — say (11) — we can show a
different schema, like (12) or (13), to be valid as well by showing that they’re of the same form
as the original, meaning that either is obtainable from the original by uniform substitution.
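To make the idea of uniform substitution concrete, here is a small Python sketch that performs it on a toy representation of formulas; the representation (strings for literals, tagged tuples for negations and conjunctions) is my own convenience, not anything official.

    # A literal is a string; ('not', f) is a negation; ('and', f, g) is a conjunction.
    def substitute(formula, literal, replacement):
        # Replace EVERY occurrence of the literal, as US requires.
        if isinstance(formula, str):
            return replacement if formula == literal else formula
        return (formula[0],) + tuple(substitute(part, literal, replacement)
                                     for part in formula[1:])

    # The conclusion of schema (11), ¬(¬S Ù ¬T):
    conclusion_11 = ('not', ('and', ('not', 'S'), ('not', 'T')))

    # Substituting the compound formula R Ù U for S gives the conclusion of (13):
    print(substitute(conclusion_11, 'S', ('and', 'R', 'U')))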
Note that US is qualified: it requires that the formula which replaces a literal must replace
ALL occurrences of the literal. It should be fairly obvious why this restriction is imposed; for
example, if we were to replace only the first occurrence of S in (11) by R we would obtain
R_________
¬(¬S Ù ¬T)
which is invalid. To understand why, recall our earlier rule which says that the assignment of a
truth value to a literal is independent of the assignment of a truth value to any other. Suppose,
then, that we assign the value 1 to R and the value 0 to both S and T. Then ¬S Ù ¬T is true and
the entire conclusion, which is the negation of this formula, is false. It is thus possible for the
premise in this schema to be true and the conclusion false — so the schema is invalid.
It is also extremely important to note that, as stated, US allows us to replace literals with
formulas of any kind (including compound formulas) but DOES NOT SANCTION REPLACING
COMPOUND FORMULAS BY LITERALS. To see why such substitutions are not permitted, consider the
schema
(14) S Ù T
S
(Valid)
Suppose now that we replace the premise, which only occurs once, with R. This gives us
(15) R
S
(INvalid)
Since the assignment of truth values to literals is arbitrary, R could be true and S false in (15), so
here is a case where substituting a literal for a compound formula in a valid schema fails to yield
another valid one. So, bear in mind: US DOES NOT ALLOW THE SUBSTITUTION OF LITERALS FOR
COMPOUND FORMULAS!
Another important point about US is that IT WORKS ONLY FOR VALID SCHEMATA. That is, if you
start with a valid schema, any schema obtained from it by US is also valid; but if you start with
an invalid schema, there is no guarantee that a schema obtained from it by US is invalid. For
example, substituting S Ù T for R in the invalid schema (15) yields the valid schema (14).
Consequently, YOU CANNOT SHOW THE INVALIDITY OF A SCHEMA BY SHOWING THAT IT’S OBTAINABLE
BY US FROM AN INVALID SCHEMA.
Here now is a second substitution rule, called the RULE OF SUBSTITUTION OF EQUIVALENTS (or
SE as we shall call it henceforth):
A given formula may be replaced by an equivalent one without affecting its truth value, or
that of any larger formula containing it.
This means that any formula in an argument schema can be replaced by an equivalent one (or a
part of the formula can be replaced by something equivalent to it) without altering the validity
(or invalidity) of the schema. Consider, for example, the following schema:
(16) S
T
¬ ¬ (S Ù T) (Valid)
Recall the schema (1), here repeated as:
(17) S
T____
SÙT
According to SE, we may replace any formula with an equivalent one without affecting its truth
value — and hence, without affecting the validity of the schema. Since ¬¬(S Ù T) is equivalent
to S Ù T by CR, we can accordingly replace the conclusion of (17) by that of (16). Since the truth
value of the statement is not affected by the substitution, neither is the validity of the argument.
To better understand what’s going on here, consider the following Venn diagram, where P
represents the set of states of affairs in which the premises of an argument are all true and C the
set of states of affairs in which the conclusion is true:
What it means to say that an argument is valid is that there is no state of affairs in which the
premises are all true and the conclusion false — in other words, that region 1 of the diagram (the part of P lying outside C) is
empty. Now, suppose that a statement in the argument (a premise or the conclusion — it doesn’t
matter) is replaced by an equivalent statement. Since the replacement does not affect the truth
value of the statement replaced, the set of states of affairs in which the various statements
comprising the new argument are true is exactly the same as the one in which the statements
comprising the original are true — so the resulting Venn diagram will look exactly the same.
Hence, the argument obtained by means of the substitution is valid if the original is valid, and
invalid if the original is invalid.
But there is more to the story, namely that you can replace any SUBSTATEMENT of a larger
statement by an equivalent one without affecting the validity of the overall argument. For
example, the following is a valid argument schema:
(18) S
T
¬¬(T Ù S)
Notice that this schema is almost identical to (16), the only difference being that the statements T
and S in the conclusion occur in the opposite order. But since T Ù S is equivalent to S Ù T, (18)
is valid if (16) is.
What this amounts to saying is that if you replace one substatement in a larger statement by
an equivalent one, the result of the substitution is equivalent to the original. (For example,
¬¬(T Ù S), the conclusion of schema (18) is equivalent to ¬¬(S Ù T), the conclusion of (16).) To
understand why this is so it helps to consider more closely what it means to say that two
statements are equivalent.
When we defined equivalence earlier, we defined it as mutual entailment by two statements.
If you think about it a little, you should realize that mutually entailing statements MUST HAVE THE
SAME TRUTH VALUE, and sentences which must have the same truth value are mutually entailing.
(Be sure you understand this point before going on!) Now, consider again the conclusions of (16)
and (18):
(19) a.
¬¬(S Ù T)
b.
¬¬(T Ù S)
Since S Ù T and T Ù S are equivalent, then they must have the same truth value; but then so must
¬(S Ù T) and ¬(T Ù S), and also the statements (19a-b) taken as wholes. But we can generalize
from this example. Every statement in LSL (so far, anyway) is either a literal, a negation or a
conjunction. If you replace a literal by a statement equivalent to it, the resulting statement must
have the same truth value as the original; similarly, if you replace a negated statement by an equivalent one, then the resulting negation must have the same truth value as the
original, and if you replace a conjunct of a conjunction by an equivalent statement, then the
resulting conjunction must be equivalent to the original one. No matter how complex the
example becomes, repeated application of these three principles enables you to show that if a
given substatement of a larger statement is replaced by an equivalent one, the result of the
substitution is equivalent to the original. To illustrate, let’s consider the statement
(20) ¬(S Ù T Ù ¬¬U)
and suppose that we substitute U for ¬¬U, to yield
(21) ¬(S Ù T Ù U)
Since U is equivalent to ¬¬U, and replacing one conjunct of a conjunction by an equivalent
statement yields a conjunction equivalent to the original, T Ù U is equivalent to T Ù ¬¬U,
and, by the same token S Ù T Ù U is equivalent to S Ù T Ù ¬¬U; but then, since replacing the
negated statement in a negation by an equivalent one yields a negation equivalent to the original,
(21) is equivalent to (20).
Demonstration Problems
4. Show that
S
T __
TÙS
is valid.
Solution. According to Schema (1),
S
T __
SÙT
is valid. By the Commutative Law for conjunction, S Ù T is equivalent to T Ù S, hence by
SE we can substitute the latter for the conclusion of Schema 1.
5. Show that
S Ù (T Ù U)
U
is valid.
Solution. According to the Associative Law for conjunction the premise in our schema is
equivalent to (S Ù T) Ù U. Hence we can substitute for the premise to obtain
(S Ù T) Ù U
U
By the Commutative Law (S Ù T) Ù U is equivalent to U Ù (S Ù T) so we can substitute
again to obtain
U Ù (S Ù T)
U
which is derivable via schema (2) by substitution of U for S and (S Ù T) for T.
New Terms
schema
equivalence
commutativity
uniform substitution
Problems
1. Show that each of the following is a valid argument schema.
*a.
S
T_________
¬(S Ù ¬ T)
b.
S Ù T______
¬(¬S Ù ¬T)
c.
¬S
¬ T________
¬(S Ù T)
d.
¬S Ù ¬T
¬(S Ù T)
*e.
¬ T____________
¬(¬(S Ù ¬T) Ù S)
*f.
S Ù (T Ù U)
T
g.
S
T
¬¬(T Ù¬¬S)
2. Show that each of the following schemata is INVALID.
*a.
¬(S Ù ¬T)
¬S
T
b.
¬(¬S Ù ¬T)
S
c.
¬(S Ù T)
¬S
6
Conditionals, Biconditionals and Disjunction
To this point we have been looking at statements which may be thought of as corresponding to
ones in English involving the words not and and. We’re now going to look at some other kinds
of sentences, involving the words if and or. For example:
(1)
a. If Alice lives in Minneapolis, then she lives in Minnesota.
b. Alice lives in Minneapolis or she lives in St. Paul.
As we shall soon see, the meanings of if ... then and or as they are used in such statements can
actually be explained in terms of negation and conjunction. After we’ve seen how this is done,
we’ll consider another method for showing the validity of sentential schemata and in so doing
we’ll lay the groundwork for a very powerful method of doing the same for schemata involving
quantifiers. This will be made possible in part by an analysis of the quantifiers all, some and no
which also relates their meanings to the meanings of and and not. This in turn will make it
possible for us to capitalize on similarities between quantificational and sentential logic that
would otherwise not be apparent.
We begin by introducing a new sign, which we define by means of the following equivalence
schema:
(2)
¬ (S Ù ¬T)
S®T
(Valid)
That is, we use this schema to define S ® T as an abbreviation for ¬ (S Ù ¬T). This abbreviated
form is called a CONDITIONAL and is commonly read ‘If S then T’.
Many students find this explanation of the meaning of if ... then puzzling (to say the least!),
so here is a detailed explanation of what’s involved. We’re going to start with a sentence which
doesn’t even contain the words if or then, namely
(3)
Every equilateral triangle is isosceles.
But this sentence can be paraphrased as an equivalent one in which if ... then DOES appear, namely:
(4)
Every triangle is such that if it is equilateral, then it is isosceles.
This in turn can be equivalently re-rendered as:
(5)
No triangle is such that it is equilateral and it is not isosceles.
Finally, we paraphrase (5) as
(6)
Every triangle is such that it is not the case both that it is equilateral and it is not
isosceles.
Now compare the portions of (4) and (6) that follow ‘Every triangle is such that’. Since (4) and (6) are equivalent, and differ only in those portions, the portions themselves are equivalent. Consider now how we might render that part of (6) in LSL. If we let E stand for ‘It is equilateral’ and I for ‘it
is isosceles’, then (6) can be translated into LSL as
(7)
¬(E Ù ¬I)
which, according to schema (2) is equivalent to E ® I (LSL’s version of ‘If E, then I’). So there
is method in the madness after all!
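If it helps, here is a tiny Python sketch that computes the truth table which the definition in schema (2) assigns to the conditional, by evaluating the unabbreviated formula it abbreviates.

    from itertools import product

    for S, T in product([False, True], repeat=2):
        conditional = not (S and not T)    # the unabbreviated form of S ® T
        print(S, T, conditional)
    # The conditional comes out false only on the row where S is true and T is false.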
In a conditional, the statement to the left of the arrow (S in our example above) is called the
ANTECEDENT (sometimes the PROTASIS) and the statement to the right (T in the example) the
CONSEQUENT (sometimes the APODOSIS). Inverting the antecedent and the consequent of a
conditional produces what is called the CONVERSE of the conditional: thus T ® S is the converse
of S ® T.
It is of the utmost importance to understand that THE TRUTH OF A CONDITIONAL DOES NOT ASSURE THE TRUTH OF ITS CONVERSE. This can be seen by comparing statements like
(8)
a. If you’re in Minneapolis then you’re in Minnesota. (true)
b. If you’re in Minnesota then you’re in Minneapolis. (false)
However, under some conditions, both a conditional and its converse are true. For example, it
can be proven that if a triangle is equilateral (has all three sides equal) then it is equiangular (that
is, all its angles are equal as well); but the converse can also be proven. This can be expressed as
a conjunction of conditionals: ‘If a triangle is equilateral then it is equiangular and if it’s
equiangular then it’s equilateral’. (Actually, most mathematicians would abbreviate the second
conditional to the phrase ‘and conversely’.)
The conjunction of a conditional and its converse is often abbreviated in accordance with the
following equivalence schema:
(9)
(S ® T) Ù (T ® S)
S«T
(Valid)
The statement S « T is called a BICONDITIONAL, and S and T the COMPONENTS of the biconditional.
Note also the following:
(10) S « T
S®T
(Valid)
(11) S ® T
S«T
(Invalid)
To see why the first schema is valid recall first that by (9), S « T is equivalent to the more
complex formula (S ® T) Ù (T ® S). Then, since the schema
SÙT
S
is valid we can obtain another valid schema by uniform substitution. Substitution of S ® T for S
and T ® S for T gives us
(S ® T) Ù (T ® S)
S®T
We then need merely use SE to substitute for the premise using schema (9) and we obtain
schema (10). It’s left to you to show that schema (11) is invalid (see problem 8 for this chapter).
Consider now the formula ¬(¬S Ù ¬T). This is, in effect, denying that S and T are both false
— hence asserting that at least one of them is true. We will abbreviate such formulas according
to the equivalence schema
(12) ¬(¬S Ù ¬T)
SÚT
(Valid)
S Ú T is termed the DISJUNCTION (also called the ALTERNATION) of S and T and is commonly read
‘S or T’. (S and T are accordingly called the DISJUNCTS of the larger statement.) Some students
initially find this a bit odd since English sentences of the form S or T are commonly understood
as asserting that S is true or T is true, but not both. However, it’s easy to construct examples in
which or is used in essentially the same way as our new sign for disjunction. So consider the
sentence
You are a U.S. citizen if you were born in the United States or your parents are U.S.
citizens.
Now, it’s clear that there is no intent in this sentence to deny citizenship to persons who were
BOTH born in the U.S. AND have parents who are citizens. In certain contexts — mathematical
writing, for example — the word or is always understood in this way unless some special
indication is given to the contrary, say via the use of the qualifying expression but not both. The
sign Ú represents or understood in this ‘inclusive’ way. Hence: A DISJUNCTION IS TRUE IF, AND
ONLY IF, AT LEAST ONE OF ITS DISJUNCTS IS TRUE. (Contrast this situation to that which obtains in the
case of a conjunction of statements, which is true if and only if BOTH of the conjuncts are true.)
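Here, similarly, is a tiny Python sketch confirming that the unabbreviated formula behind the disjunction sign behaves exactly like inclusive or on every assignment.

    from itertools import product

    for S, T in product([False, True], repeat=2):
        unabbreviated = not (not S and not T)      # the formula that S Ú T abbreviates
        print(S, T, unabbreviated == (S or T))     # True on all four rows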
Demonstration Problems
1. Show that the schema
S®T
S ___
T
is valid.
Solution. By definition, S ® T is equivalent to ¬(S Ù ¬T). Hence we can substitute for the
first premise to obtain
¬(S Ù ¬T)
S ______
T
This in turn is schema (3) from the previous chapter, which we validated there.
2. Show that the schema
S®T
¬T___
¬S
is valid.
Solution. We begin, as we did before, by substituting for the first premise.
¬(S Ù ¬T)
¬T
¬S
According to the first premise, S Ù ¬T is false. But according to the second premise, ¬T is
true. Hence S must be false, in which case ¬S is true.
3. Show that the schema
S __
SÚT
is valid.
Solution. By definition, S Ú T is equivalent to ¬(¬S Ù ¬T). Substitution in the schema given
yields schema (4) from the previous chapter.
Alternate solution. Suppose that the conclusion is false. Since S Ú T is equivalent by
definition to ¬(¬S Ù ¬T), ¬S Ù ¬T is true. Hence ¬S is true, so S is false.
In regard to the relationship of conjunction and disjunction, if we had wanted to we could have
proceeded in exactly the opposite way, that is, first introducing the disjunction sign and then
defining conjunction in terms of negation and disjunction. In other words, the following
equivalence holds:
(13) ¬(¬S Ú ¬T)
SÙT
(Valid)
The formula ¬S Ú ¬T says that of the two formulas S and T, at least one is false. To negate this
formula is accordingly to say that neither is false, that is, that both are true. But by the same
token, a conjunction is true if and only if neither of the conjoined formulas is false, so the
entailment works in the opposite direction as well.
We will henceforth refer to the schemata (12-13) together as the ‘and-or Rule’ or just AOR.
In the previous two chapters we formed compound formulas with only two signs: one for
negation and one for conjunction. In this chapter we introduced three more. It is important to
understand, however, that these signs do not add any ‘expressive power’ to our language — that
is, ANYTHING THAT CAN BE EXPRESSED WITH THEM CAN ALSO BE EXPRESSED WITHOUT THEM. So
having them is not necessary; however, it is very convenient, as we shall see.
Demonstration Problems
4. Rewrite each of the following formulas so that it contains no signs other than ¬ and Ù.
a. ¬(S Ú T)
b. ¬ S ® T
c. ¬(S « T)
Solutions.
a. ¬S Ù ¬T
b. ¬(¬S Ù ¬T)
c. ¬(¬(S Ù ¬T) Ù ¬(T Ù ¬S))
Discussion. In each case the way to proceed — at least until you gain some facility — is completely mechanical. This means that you apply the equivalence schemata by which we define the signs Ú, ® and «, making use of SE and US. For example, in the case of (a) we
can proceed as follows:
¬(S Ú T)
¬¬(¬S Ù ¬T)    Def. of Ú
¬S Ù ¬T    CR
(Strictly speaking, we don’t need the CR step — the definition of Ú alone suffices to
get us a statement containing only ¬ and Ù, but CR makes it possible to simplify the
answer.)
For Future Reference. ¬(S Ù T ) is equivalent to ¬ S Ú ¬ T. (We’ll prove this later.)
Here are two ways to do (b). The first way is to apply US to the definition of ®, substituting
¬S for S to obtain
¬ (¬S Ù ¬T)
¬S ® T
The second way is like the way in which we did (a), except that there is only one step:
¬S ® T
¬(¬S Ù ¬T)    Def. of ®
This way of proceeding also involves US, but implicitly rather than explicitly: strictly
speaking, it isn’t the definition of ® that we apply in carrying out the step, but the schema
shown just above, obtained by uniform substitution of ¬S for S in the definition.
For (c) we need to go through two steps:
¬(S « T)
¬((S ® T) Ù (T ® S))    Def. of «
¬(¬(S Ù ¬T) Ù ¬(T Ù ¬S))    Def. of ®
New Terms
conditional
antecedent
consequent
converse
biconditional
disjunction
Problems
1. Rewrite each of the following formulas as an equivalent one which contains no signs other
than ¬ and Ù.
*a. (S « T) ® (S ® T)
b. ((¬S Ú T) Ù S) ® T
c. (S ® ¬T) ® ¬ (S « T)
d. ¬(S ® T) « ¬(¬S Ú T)
2. For each of the following pairs of formulas show that the members of the pair are equivalent.
*(i)
a. ¬(S « T)
b. ¬(S ® T) Ú ¬(T ® S)
(ii)
a. S ® ¬T
b. ¬(S Ù T)
(iii)
a. ¬S ® T
b. S Ú T
(iv)
a. ¬S « T
b. (S Ú T) Ù ¬(S Ù T)
(v)
a. S ® T
b. ¬T ® ¬S
Note. Two conditionals related in this way are called CONTRAPOSITIVES of each other.
The equivalence is commonly called the LAW OF CONTRAPOSITION (CP).
3. Using the new signs introduced in this chapter, rewrite each of the following formulas so as
to make it shorter. Try to make each one as short as you can.
*a.
¬(¬(S Ù ¬T) Ù ¬(¬S Ù T))
b.
¬((¬S Ú T) Ù ¬T Ù ¬S)
c.
¬(¬S Ú T) Ú ¬(S Ú ¬T)
*4. We remarked at the end of the chapter that anything which can be expressed in our
expanded five-sign system can also be expressed in the two-sign system introduced in
Chapter 4. Believe it or not, it turns out to be possible to express anything that can be
expressed in the two-sign system in a ONE-sign system. There are two such systems; this
problem involves one of them and the next problem involves the other.
The one sign in this system is defined by the following equivalence schema:
¬(S Ù T)
S|T
(Valid)
¬S is written S|S and S Ù T is written (S|T)|(S|T). The sign | is commonly referred to as the
‘Sheffer stroke’, after the mathematician who first proposed the system, and is read as nand
(a contraction of not and). The compound formula S|T is called the ALTERNATIVE DENIAL OF
S and T.
Rewrite each of the following in the five-sign system as a formula in which no more than
one sign appears:
a. S|(T|T)
b. (S|S)|T
c. ((S|(T|T))|((S|S)|T))|((S|(T|T))|((S|S)|T))
Rewrite each of the following using no signs other than the Sheffer stroke:
d. ¬S Ú ¬T
e. S Ú T
f. ¬(S « T)
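If you want to check the encodings stated above mechanically, before or after doing the translations, the following Python sketch does so; nand is simply an ad hoc name for the Sheffer stroke.

    from itertools import product

    def nand(p, q):
        return not (p and q)    # S|T is the negation of the conjunction of S and T

    for S, T in product([False, True], repeat=2):
        assert nand(S, S) == (not S)                        # S|S behaves as ¬S
        assert nand(nand(S, T), nand(S, T)) == (S and T)    # (S|T)|(S|T) behaves as S Ù T
    print("the encodings check out on all four assignments")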
5.
Note: Don’t attempt this problem until you have worked through problem 4.
The CONJUNCTIVE DENIAL of S and T, written S/T and read ‘Neither S nor T’ (or just ‘S nor
T’), is defined by the equivalence schema
¬S Ù ¬T
S/T
(Valid)
¬S is written S/S and S Ú T is written (S/T)/(S/T).
Rewrite each of the following in the five-sign system. The first two can be written as
formulas in which only one sign appears; the third can be written as a formula in which only
two signs appear.
a.
((S/S)/T)/((S/S)/T)
b.
(S/S)/(T/T)
c.
((S/S)/(T/T))/((S/S)/(T/T))
Rewrite each of the following so that no sign appears other than /:
d.
S Ù ¬T
e.
S Ú ¬T
f.
S«T
6. Find a combination of truth values for S and T such that S ® T is true and T ® S false.
7. Rewrite each of the following as an equivalent formula using no signs other than ¬ and ®.
a.
SÙT
b.
SÚT
*8. Explain why schema (11) is invalid.
7
Deduction in Sentential Logic
In the study of sentential logic we confine our attention to arguments whose validity (or
invalidity) is due solely to the deployment of the signs ¬, Ù, Ú, ® and «. In Chapter 5 we
looked at a few simple valid argument schemata for sentential logic and at the beginning of
Chapter 6 we promised a method for showing the validity of schemata applicable both to
sentential and quantificational logic. We are now ready to begin delivering on this promise.
One part of the method is implicit in the preceding chapter, which shows us how to convert
long and forbidding compound formulas using only the negation and conjunction signs into
shorter ones taking the form of conditionals, biconditionals or disjunctions. Sometimes, however,
it turns out to be helpful to do just the reverse: to ‘unpack’ abbreviated formulas into ones which
do not involve anything except negation and conjunction.
The remainder of the method depends on the following fundamental principle of deductive
reasoning:
If S entails T and T entails U then S entails
U.
To see why this must be so, take note first that if S is true and S entails T then T is true; but then
if T entails U, U is true. Hence it’s impossible, if S entails T and T entails U, for S to be true and
U false.
The foregoing principle, called the PRINCIPLE OF TRANSITIVITY OF ENTAILMENT, means that we
can chain valid arguments together using the conclusions of earlier ones as premises for later
ones. If all of the arguments in the chain are valid, then the premises of the first argument of the
chain and the conclusion of the last one form a valid argument. Let’s look now at how this
principle applies in a particular case. We begin by revisiting two of the valid argument schemata
introduced in Chapter 4:
(1)
SÙT
S
(Valid)
(2)
SÙT
TÙS
(Valid)
Now suppose that we encounter the schema
(3)
SÙT
T
It’s not hard to determine, just by eyeballing, that this is valid. In the interest of making a point,
however, let’s take a different approach. Specifically, let’s show that the validity of (3) is a
consequence of the validity of (1) and (2) given the Principle of Transitivity of Entailment. To do
so in a case like this is overkill, but in a good cause since it makes for a simple illustration of a
technique that is definitely NOT overkill if the problem is complicated enough. Suppose that we
begin by applying schema (2) to the premise of (3). Then we have
(4)
SÙT
TÙS
(Valid)
But (1) tells us that
(5)
TÙS
T
(Valid)
Hence, by transitivity of entailment, (3) is valid as well.
Let’s now adopt a slightly different — and more economical — way of representing a chain
of reasoning like the one just shown:
(6)
a. S Ù T    Premise
b. T Ù S    (2), a
c. T    (1), b
In (6a) we set down the premise of the original argument and label it as such. We then derive
(6b) from it by applying schema (2); the label ‘(2), a’ to the right tells us that we applied schema
(2) to line (6a) to obtain line (6b). Then, in (6c) we indicate that we invoked schema (1) to obtain
this line from line (6b).
A sequence of steps like that shown in (6) by which we move in steps from a premise (or set
of premises) to a conclusion is called a DEDUCTION. To the right of each formula we say what
justifies writing the formula at that point in the deduction: that the formula is a premise, or that it
is entailed by one or more preceding lines in the deduction according to some previously
validated schema. The goal is to come up with an approach that can be applied to extremely
complex arguments in such a way as to show that the conclusion follows from the premises by
means of a deduction which breaks the task down into a series of small and manageable steps.
For the time being we will require that every line which is not a premise be justified by one of a
group of schemata to be given below. Some of these are ‘one-way’ schemata like (1) while others
are equivalence schemata like (2).
The one-way schemata, of which we will assume seven, are called RULES OF INFERENCE. Each
rule has a traditional name to which we will give a two-letter abbreviation shown in parentheses;
some of them have been seen before. Each rule is given both a schematic form and a verbal
statement.
Ù-Elimination (AE)
SÙT
S
The conjunction of two statements entails the first conjunct.
Ù-Introduction (AI)
S
T____
SÙT
If two statements are true then so is their conjunction.*
Modus Ponens (MP)†
S®T
S_____
T
If a conditional and its antecedent are both true then so is the consequent of the
conditional.
* This is schema (1) from a few pages ago.
† Latin for ‘method of affirming’ and so named because the second premise affirms the truth of the antecedent of the first.
Modus Tollens (MT)*
S®T
¬ T___
¬S
If a conditional is true and its consequent is false then the antecedent of the conditional
is false.
Ú-Introduction (OI)
S____
SÚT
A statement entails the disjunction of itself and any other statement.
Hypothetical Syllogism (HS)†
S®T
T®U
S®U
If the consequent of one conditional is the antecedent of another then the two
conditionals jointly entail a third conditional whose antecedent is the antecedent of the
former and whose consequent is the consequent of the latter.
Disjunctive Syllogism (DS)
SÚT
¬ S__
T
A disjunction and the negation of its first disjunct jointly entail the second disjunct.
For Future Reference: In a schema with multiple premises, the premises may occur in any
order without affecting validity.
* Latin for ‘method of denying’ and so named because the second premise denies the truth of the consequent of the first.
† A syllogism is an argument with two premises.
Each of these rules has the virtue of being simple — none involves more than two premises,
for example — and thus of being easily checked for correctness. (Indeed, we’ve already
validated all of them, either via discussion in the text or as problems.) We’ll see later, however,
that everything we can do with these seven rules can be done with just two: AI and AE.
The advantage to this way of showing validity becomes apparent when we consider more
complicated schemata. Suppose, for example, that we’re given the following:
(7)
¬ (S Ù ¬ T)
¬ (¬S Ù ¬U)
¬(T Ù ¬V)
¬(U Ù ¬W)
¬V________
W
(Valid)
Trying to sort this one out by the earlier method is sure to be difficult and confusing, and the
likelihood of making a mistake somewhere along the line is quite high. The deductive method,
on the other hand, makes things significantly easier.
Before proceeding further, it’s helpful to do some abbreviating, so here is a simplified
version making use of the additional signs introduced in Chapter 6.
(8)
S®T
SÚU
T®V
U®W
¬V________
W
(Valid)
Since each abbreviated statement is by definition equivalent to its counterpart in (7), if we can
show that (8) is valid then we’ve shown that (7) is valid also.
The following deduction establishes the validity of (8).
(9)
a. S ® T    Premise
b. S Ú U    Premise
c. T ® V    Premise
d. U ® W    Premise
e. ¬V    Premise
f. ¬ T    MT c,e
g. ¬ S    MT a,f
h. U    DS b,g
i. W    MP d,h
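As a cross-check on the deduction, here is a brute-force Python sketch that runs through all thirty-two assignments to S, T, U, V and W and confirms that none of them makes the premises of (8) true and the conclusion false; the helper implies is my own shorthand for the defined conditional.

    from itertools import product

    def implies(p, q):
        return not (p and not q)    # the conditional, as defined in Chapter 6

    counterexamples = [(S, T, U, V, W)
                       for S, T, U, V, W in product([False, True], repeat=5)
                       if implies(S, T) and (S or U) and implies(T, V)
                       and implies(U, W) and not V     # all five premises true
                       and not W]                      # ... and the conclusion false
    print(counterexamples)    # prints [], so (8) has no counterexample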
Implicit in the strategy of first converting the premises of (7) into the ones of (8) is a further
principle which simply says that we can perform SE whenever we please. For example, if the
formula ¬ ¬ S occurs in some line of a deduction, we may replace it in a subsequent line with S.
Here is a list of some of the most useful equivalences (some of which we have seen before):
Commutative Laws (CL)
SÙT
TÙS
SÚT
TÚS
Associative Laws (AL)
(S Ù T) Ù U
S Ù (T Ù U)
(S ÚT) ÚU
S Ú(T ÚU)
Cancellation Rule (CR)
¬¬S
S
To these we may also add the definitions given earlier of the supplemental signs ®, « and Ú.
Here now is an example of the exploitation of equivalences in a deduction. The schema
(10) ¬(S Ù ¬T)
S________
SÙT
can be shown to be valid by means of the following deduction:
(11)
a. ¬(S Ù ¬T)    Premise
b. S ® T    SE, def. of ®, a
c. S    Premise
d. T    MP b,c
e. S Ù T    AI c,d
In line (11b) we converted the formula in (11a) into an equivalent one, thereby enabling us to apply MP to obtain line (11d).* In (11b) we indicate to the right that we have applied the Rule of Substitution, identify the equivalence involved (the definition of ®) and, as in all other cases,
show the line to which we applied the rule. Here is another example. We can show the validity of
the argument
(12) ¬S
T__
¬(T ® S)
via the deduction
(13) a. ¬ S            Premise
     b. T              Premise
     c. ¬ S Ù T        AI a,b
     d. T Ù ¬S         SE, CL, c
     e. ¬¬(T Ù ¬S)     SE, CR, d
     f. ¬(T ® S)       SE, def. of ®, e
Notice that in going from (13e) to (13f) we made a substitution INSIDE a formula, replacing the subformula ¬(T Ù ¬S) of ¬¬(T Ù ¬S) by a conditional.
* Whenever we make use of an equivalence schema, we will always write 'SE' to the right to emphasize that the line thus obtained is equivalent to the line it was obtained from.

Here now is a further convention that will save a little bit of effort. Thus far, we have always indicated to the right of each line in a deduction the line(s) from which we obtained it. In any case where we obtain a given line either from the immediately preceding line or from the immediately preceding two lines, we will allow ourselves to omit the pointer(s) to the earlier line(s). Thus the deduction (13) could be re-rendered as
(14) a. ¬ P            Premise
     b. Q              Premise
     c. ¬ P Ù Q        AI
     d. Q Ù ¬P         SE, CL
     e. ¬¬(Q Ù ¬P)     SE, CR
     f. ¬(Q ® P)       SE, def. of ®
Complete command of the technique of validation by deduction requires close attention to
some points of procedure. A deduction consists of a sequence of lines each of which consists in
turn of three parts: a LINE INDEX (or just INDEX), consisting of a small letter of the alphabet with a
period after it; a FORMULA; and an ANNOTATION, indicating either that the line is a premise or that
it has been obtained by some specified rule of inference. Example (last line of (14) above):
Index    Formula         Annotation
f.       ¬(Q ® P)        SE, def. of ®
The formula is said to OCCUPY the line: for example, ¬(Q ® P) occupies line f of the deduction
(14). Whenever we say that a given formula occupies a line, we mean that the formula in
question is the LARGEST ONE WHICH APPEARS IN THE LINE. So, for example, although Q ® P appears
in the above line, it does not occupy it, since it is not the longest formula which appears in the
line. Take a moment to be sure that you’re clear about this, because it’s going to be important in
the following discussion.
Suppose that we’re trying to validate the schema
S Ù ¬(T ® U)
S Ù ¬U
Here is a deduction which does the job:
a. S Ù ¬(T ® U)       Premise
b. S Ù ¬¬(T Ù ¬U)     SE, Def. of ®
c. S Ù (T Ù ¬U)       SE, CR
d. S                  AE
e. (T Ù ¬U) Ù S       SE, CL c
f. T Ù ¬U             AE
g. ¬U Ù T             SE, CL
h. ¬U                 AE
i. S Ù ¬U             AI, d, h
Now, there are four places in this deduction where SE is invoked: lines b, c, e and g. In lines e
and g, the relevant equivalence (CL in both cases) is applied to the formula which occupies a
preceding line: S Ù (T Ù ¬U) in the first case, T Ù ¬U in the second. But in lines b and c, the
relevant equivalences (the definition of ® in the first case, CR in the second) are applied to
SMALLER FORMULAS WITHIN THE FORMULA OCCUPYING A PRECEDING LINE: ¬(T ® U) in the case
of line (b), and ¬¬(T Ù ¬U) in the case of line (c). The larger formula occupying the line,
however, is equivalent to the one occupying the earlier line from which it was obtained since SE
says that a given formula may be replaced by an equivalent one without affecting its truth value,
OR THAT OF ANY LARGER FORMULA CONTAINING IT. This means that for any formula which occupies
a given line of a deduction, we can obtain a formula equivalent to it either by operating on the
entire formula by means of an equivalence schema or by operating on some smaller formula
contained by the larger one by means of such a schema. So, for example, the formula which
occupies line (b) in the deduction is equivalent to the one which occupies line (a), even though only part of the formula occupying line (a) is altered in obtaining line (b).
Rules of inference, by contrast, do not work this way! When a rule of inference is applied to
the occupant of a given line, IT MUST APPLY TO THE ENTIRE FORMULA WHICH OCCUPIES THE LINE.
Failure to observe this requirement is called a NON-OCCUPANCY ERROR and can have disastrous
consequences, as the following examples will show. First consider the invalid schema
¬(S → T)
S_______
T
(Suppose S is true and T is false; then both premises are true, and the conclusion false.) Now
suppose that someone ‘validates’ the schema by means of the following ‘deduction’:
a. ¬(S → T)    Premise
b. S           Premise
c. T           MP
Granted, there is a conditional, namely S → T, in line (a) whose antecedent occupies line (b); but
S → T is not the occupant of line (a) (¬(S → T) is), so MP is not applicable to the two lines.
(That is, the occupant of line (a) is not, as required by MP, a conditional — it’s the NEGATION of
a conditional.)
Here is a second example. The following schema is likewise invalid:
¬(S Ù ¬T) Ù U
SÙU
(Suppose that S is false and U is true; then the premise is true but the conclusion is false. The
truth value of T is immaterial to the example.) Now look at the following attempt at a validating
deduction:
a. ¬(S Ù ¬T) Ù U       Premise
b. ¬(S Ù ¬T)           AE
c. S                   AE
d. U Ù ¬(S Ù ¬T)       CL, a
e. U                   AE
f. S Ù U               AI, c, e
Where is the mistake? Answer: on line (c). Yes, the formula occupying line (b) does contain a
conjunction (namely S Ù ¬T) but this conjunction does not occupy the line: the occupant of the
line is the NEGATION of this conjunction.
Here’s one more example, just for good measure. The schema
S Ù ¬(S ® T)
SÙT
is invalid since the premise is true if S is true and T false, but the conclusion is obviously false
under these conditions. Now look at the following attempt at a deductive validation:
a. S Ù ¬(S ® T)    Premise
b. S               AE
c. S Ù T           MP
The (faulty) reasoning here says ‘Since I have S ® T in line (a) and the antecedent of the
conditional in line (b), then I can replace the conditional in line (a) (leaving the rest of the line
unchanged) by its consequent, justifying the move by MP.’ Wrong! The conditional S ® T
doesn’t occupy line (a), and hence isn’t available to MP.
Mistakes of the kind just illustrated, non-occupancy errors, deserve special emphasis because they occur with some frequency in student work. Beware!
We now come to two further equivalences that are of enormous utility, known as DE
MORGAN’S LAWS (after the British mathematician Augustus De Morgan) and referred to from here
on as DM:
¬SÚ¬T
¬(S Ù T)
(Valid)
¬SÙ¬T
¬(S Ú T)
(Valid)
Here is a deduction which establishes the first equivalence (noted earlier):
(15) a. ¬ S Ú ¬ T         Premise
     b. ¬¬(¬ S Ú ¬ T)     SE, CR
     c. ¬(S Ù T)          SE, AOR*
Notice that in getting from the premise to the conclusion, we used only SE — no ‘one-way’ rules
of inference were involved. Two statements are always equivalent if it is possible to deduce
one from the other using equivalence schemata only, and for the time being this is the only
technique of establishing equivalence that we will permit. (There are others, but we will
introduce them later.)
* See Chapter 6.
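Both laws can also be confirmed by brute force over the four truth-value assignments to S and T, as in this small illustrative sketch (a supplement to the text, with invented helper names):

    from itertools import product

    NOT = lambda s: 1 - s
    AND = lambda s, t: min(s, t)
    OR  = lambda s, t: max(s, t)
    agree = lambda f, g: all(f(s, t) == g(s, t) for s, t in product((0, 1), repeat=2))

    print(agree(lambda s, t: OR(NOT(s), NOT(t)), lambda s, t: NOT(AND(s, t))))  # first law: True
    print(agree(lambda s, t: AND(NOT(s), NOT(t)), lambda s, t: NOT(OR(s, t))))  # second law: True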
Use of SE in a deduction is subject to a restriction that needs to be stated explicitly, namely
that IT MUST ALWAYS BE CARRIED OUT IN A WAY WHICH RESPECTS PARENTHESES. That is, it is possible
to substitute INSIDE parentheses, but not ACROSS them, and formulas cannot be arbitrarily
reparenthesized. Here is an example to show why this restriction is necessary. Consider the
schema
(16) ¬ S Ú (¬T Ù U)
U
(Invalid)
To see that this schema is invalid, suppose that S and U are both false. (The value of T doesn’t
matter.) Since ¬S is true, the premise of (16) is also true, but the conclusion is false. But now
suppose that someone comes up with the following ‘deduction’ in support of the claim that (16)
is valid:
(17) a. ¬ S Ú (¬T Ù U)     Premise
     b. ¬(S Ù T) Ù U       SE, DM
     c. U Ù ¬(S Ù T)       SE, CL
     d. U                  AE
Something has clearly gone wrong, but what? The error can be located in line (b). DM cannot be
applied here because the parenthesization has not been respected. Since ¬S and ¬T in line (a) are
separated by a left parenthesis, the application of DM to line (a) is not allowed — nor is the
redisposition of the parentheses shown in (17b).
Another point about SE which needs to be emphasized is that SUBSTITUTION IS POSSIBLE ONLY
WHEN THE SUBSTITUTED FORMULA IS EQUIVALENT TO THE ONE FOR WHICH IT IS SUBSTITUTED; it is not
sufficient that it merely be entailed by the one for which it is substituted. Here is an example
of the sort of trouble that can arise if this restriction is not scrupulously adhered to:
(18) a. ¬S           Premise
     b. T            Premise
     c. ¬(S Ú T)     SE, OI, a [ERROR]
This deduction cannot be correct, since if T is true by assumption then so is S Ú T — in which
case line (c) is false. The problem here is that in substituting S Ú T for S in line (a) we substituted
a formula which, while entailed by S, is not equivalent to it. Suppose, on the other hand, that we
had applied OI to line (a) so as to obtain
¬S Ú T.
This is permissible, since although it does not constitute an allowable instance of SE, it DOES
constitute an allowable application of OI to the occupant of line (a) of the deduction taken as a
whole.
Here, then, are the things to remember:
Rules of inference, like MP or OI, can apply only TO THE ENTIRE OCCUPANT OF A LINE OF A
DEDUCTION.
SE is applicable only WHEN THE SUBSTITUTED FORMULA IS EQUIVALENT TO THE ONE FOR WHICH IT IS SUBSTITUTED.
For the time being we will restrict ourselves to just the rules of inference and equivalence
schemata given so far. In other words, a deduction in LSL is (for now, at least) defined as a
sequence of formulas each of which satisfies one of the following conditions:
(i) the formula is a premise;
(ii) the formula is equivalent to some earlier line in the deduction according to CL, AL, CR, DM or the definition of Ú, ® or «;
(iii) the formula is obtainable from one or more earlier lines by virtue of AE, AI, OI, MP, MT, DS or HS.
Later we will make the definition a little less restrictive in that we will allow lines to be justified
in some other ways and we will also permit certain operations that must now be carried out in
several steps to be ‘compressed’ into a single step. For the moment, however, we will be quite
strict in what is permitted in the interest of making sure that the logical leaps involved in a
deduction are as small as we can reasonably make them.
Demonstration Problems
1. Show by means of a deduction that the following schema is valid:
(S Ù T) ® C
S
T
C
Solution. The key to solving this problem is to find a way to apply MP. Here is a deduction based on this strategy.
a. (S Ù T) ® C     Premise
b. S               Premise
c. T               Premise
d. S Ù T           AI
e. C               MP, a, d
2. Show by means of a deduction that the following schema is valid:
(S Ú T) ® (C Ù D)
S
C
Solution. Here we want first to find a way to apply MP so as to obtain C Ù D, after which
AE will get us the conclusion. Here is a deduction that does so.
a. (S Ú T) ® (C Ù D)     Premise
b. S                     Premise
c. S Ú T                 OI
d. C Ù D                 MP, a, c
e. C                     AE
Discussion. It is often helpful before embarking on a problem like this one to think in outline
about what sort of approach is going to be needed rather than to simply try things
haphazardly. In both of these problems, for example, we have a premise in the form of a
conditional and the conclusion consists either of the consequent of the conditional or of part
of the consequent. In the case of problem 2 it helps in fact to think ‘backwards’: if there is a
way to obtain C Ù D then C can be obtained by AE; and since C Ù D is the consequent of a
conditional, finding a way to apply MP is the natural way to proceed. In the next two
problems, we will focus on this element of strategy — of how to get a sense, before diving
into the problem, of the outlines of an approach to solving it.
3. Validate the following schema:
S®T
SÚU
¬U
T
Solution. Notice that the first premise, S ® T, is a conditional and that its antecedent, S,
appears as part of the second premise. One possible strategy then is to try to deduce S, which
will then enable us to apply MP to obtain T. Here is a deduction which does just that:
a. S ® T     Premise
b. S Ú U     Premise
c. U Ú S     SE, CL
d. ¬U        Premise
e. S         DS
f. T         MP, a,e
(Success!)
4. Validate the following schema:
(S Ù T) Ú U
¬S
U
Solution. Here we have an argument whose first premise is a disjunction whose second
disjunct is U. If we can deduce the negation of the first disjunct, then we can apply DS to
obtain the second. We therefore focus on deducing ¬(S Ù T).
a. (S Ù T) Ú U     Premise
b. ¬S              Premise
c. ¬S Ú ¬T         OI
d. ¬(S Ù T)        SE, DM
e. U               DS, a,d
(Success!)
Discussion. A natural question to ask is ‘How did you know to apply OI to line (b)? And
how did you know to make ¬T the second disjunct in line (c) rather than something else?’ To
a degree this is purely a matter of insight, imagination and ‘feel’ of a kind that comes only
with practice. Here again, it’s helpful to try working BACKWARDS from the goal, along the
following lines. ‘I’m trying to deduce ¬(S Ù T). Is that equivalent to something I can deduce
from something I already have?’ To this the answer is ‘Yes’ because ¬(S Ù T) is, by DM,
equivalent to ¬S Ú ¬T. But ¬S is a premise and OI is one of our rules of inference.
For future reference. Doing problems of this kind involves a delicate balance between two
points of view. On the one hand, you need a bird’s-eye view of the entire problem so as to
develop an outline of the strategy to be used in solving it. But you also need a worm’s-eye
view to make sure that each individual step is correctly carried out.
New Terms
transitivity of entailment
deduction
rule of inference
Problems
1. The following argument schemata are valid. Explain why. You may do so either via a
deduction of the conclusion from the premises or by a verbal explanation of the sort used in
Chapter 5.
*(i) ¬(¬ S Ù ¬ T)
¬ S________
T
(Valid)
(ii) ¬(S Ù ¬ T)
¬ T______
¬S
(Valid)
2. For each of the following schemata, give a deduction to show the validity of the schema.
Each non-premise line in the deduction is to be justified by one of the following: CR, CL,
DM, definitions of Ú, ® and «, AE, AI, MP, MT, OI, DS, HS.
*a.
¬(S Ù ¬T)
S__________
T
*b.
S__________
¬(¬S Ù ¬T)
c. ¬ S
¬T
________
¬ (S Ú T)
d. S
T
____
¬(S ® ¬T)
3. Validate each of the following by means of a deduction. Each non-premise line in the
deduction is to be justified (except where noted) by one of the following: CR, CL, DM,
definitions of Ú, ® and «, AE, AI, MP, MT, OI, DS, HS.
*a. S ® T
¬SÚT
b. S Ù T
T®U
S®U
c. ¬ S Ù ¬ T
¬(S Ú T)
(second part of DM). Note: use of DM is not allowed in this part of the problem.
*4.
Imagine that another student in this course comes to you with the following worry. In
playing around with the rules of inference the student has apparently found a way to prove
that you can deduce any statement from any other statement using these principles. For
consider the following deduction:
a. S                  Premise
b. ¬ ¬ S              SE, CR
c. ¬ ¬ (S Ú T)        OI
d. ¬ (¬ S Ù ¬ T)      SE, DM
e. ¬ ¬ T              AE
f. T                  SE, CR
But by US, you can replace S and T with any statements you want — hence every schema
with the first statement as its premise and the second as its conclusion is valid.
Consequently there is something wrong with the principles.
What is wrong with this argument?
8
Indirect Validation
All of the examples that we have given so far involve showing that an argument is valid by
means of a deduction which begins with the premises of the argument and ends with the
conclusion. Such validations are called DIRECT. Sometimes, however, it proves difficult to deduce
a conclusion from premises in a ‘straight line’ fashion and in such cases a different approach
may make things easier. One such method consists of beginning not with the premises of the
argument but with the NEGATION OF THE CONCLUSION. If from this we can deduce the negation of
one of the premises then the argument is valid. This method is based on the fact, noted in
Chapter 2, that in a valid argument the falsity of the conclusion entails the falsity of at least one
of the premises. We will illustrate it by using it to establish the validity of the schema (1) via the
deduction (2):
(1)
¬(S Ù T)
¬S Ú ¬T
(2)  a. ¬(¬S Ú ¬T)        Premise
     b. ¬¬(¬¬S Ù ¬¬T)     SE, def. of Ú
     c. S Ù T             SE, CR
     d. ¬¬( S Ù T )       SE, CR
The first line of (2) negates the conclusion of (1). Note that the conclusion of this deduction is
the negation of the premise of (1). What we have done, in other words, is show the validity of (1)
not by working with it directly but by validating ANOTHER schema, namely
¬(¬S Ú ¬T)
¬¬( S Ù T )
which is valid only if (1) is. The technique involved is one form of a strategy called INDIRECT
validation. This method involves negating the conclusion of the argument and deducing the
negation of one of the premises from it. In our example there is only one premise, but a multi-
premise argument can also be validated in this way. Suppose, for example, that we’re given the
schema
(3)
SÙT
U
S®U
Here is validation of the schema by means of a deduction in which the negation of a premise is
obtained from the negation of the conclusion:
(4)  a. ¬(S ® U)       Premise
     b. ¬¬(S Ù ¬U)     SE, Def. of ®
     c. S Ù ¬U         SE, CR
     d. ¬U Ù S         SE, CL
     e. ¬U             AE
The method just illustrated (which we will henceforth refer to as Method 1) is sometimes the
quickest way to validate a schema but it tends to not be very widely applicable. (Although it is
sometimes applicable in cases where there are multiple premises, as just shown, it works best
when there is only one premise.) We now turn to another form of indirect deduction, called
REDUCTIO AD ABSURDUM (‘Method 2’), which has very broad applicability — and which, in fact,
we will use over and over again in the remainder of the course.* To understand the idea underlying this method, it helps first to have a grasp of some concepts that we will be using in another connection later on; since we will need to introduce them somewhere anyway, we choose to do it here.
A statement which can in principle be either true or false is called a CONTINGENCY. An
example is the statement Rio de Janeiro is the capital of Brazil, which happens to be false but
which could, in principle, be true — indeed, once WAS true. (It ceased to be true in 1960, when
the Brazilian capital was removed to the then newly constructed city of Brasilia.)
* Since ‘Reductio ad Absurdum’ is something of a mouthful, you can call this Method 2 if you wish.

Noncontingencies are of two types. One type consists of statements which cannot be true, even in principle. As an example, consider the compound formula S Ù ¬ S of LSL. What it means to say that this statement cannot be true even in principle — or, equivalently, that it’s impossible for it to be true — is that no matter what truth value is assumed for S, this formula
comes out false. Such statements are called CONTRADICTIONS. On the other hand, the formula S
Ú ¬ S comes out true no matter what truth value is assumed for S. Such statements, which cannot
be false even in principle, are called TAUTOLOGIES. On the other hand, the statement S Ù T of LSL
is a contingency if S and T are presumed to be themselves contingencies. The reason is that if S
and T are contingencies, then it’s possible for S and T to be both true, but also possible for either
or both of them to be false; in the former eventuality their conjunction is true, and in the latter it is false.
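If you like, you can verify the behavior of the two formulas just discussed by evaluating them under both truth values of S (a tiny illustrative sketch, with 1 for true and 0 for false):

    for s in (0, 1):
        print(s, min(s, 1 - s), max(s, 1 - s))
    # Output: 0 0 1  and  1 0 1.  The conjunction S-and-not-S is 0 in both rows;
    # the disjunction S-or-not-S is 1 in both rows.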
For Future Reference: Unless some explicit indication is given to the contrary, literals in
LSL are always taken to represent contingencies.
Suppose now that some statement S is a contradiction and that T entails S. THEN T IS ITSELF A
CONTRADICTION! This is so because it’s impossible for a true statement to entail a false one, hence
impossible for a statement entailing a contradiction to be true.
Note. The foregoing paragraph is EXTREMELY IMPORTANT. If you do not understand it, you
will be lost for the rest of the course, so don’t proceed from here until you DO understand it.
If you can’t understand it on your own, get help.
This in turn provides us with another strategy for showing that an argument is valid: conjoin
its premise (or the conjunction of its premises, if there’s more than one) with the negation of the
conclusion and show that the resulting statement is a contradiction by deducing a contradiction
from it. If this can be done then it’s been shown that it’s impossible for both the conjunction of
the premises and the negation of the conclusion to be true — which in turn means that we’ve
shown that it is impossible for the premises to all be true and the conclusion false. But that’s just
what it means for an argument to be valid.
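Stated computationally, the strategy amounts to checking that the conjunction of the premises with the negated conclusion comes out false under every assignment of truth values. Here is a small illustrative sketch of that check for Modus Ponens, anticipating the deduction given below; the code and the helper names are a supplement, not the text's method:

    from itertools import product

    def is_contradiction(formula, n):
        # True if the formula comes out false under every assignment of truth values.
        return all(formula(*vals) == 0 for vals in product((0, 1), repeat=n))

    # Modus Ponens recast for reductio: ((S -> T) and S) and not-T should be a contradiction.
    IF = lambda s, t: max(1 - s, t)                 # S -> T
    test = lambda s, t: min(IF(s, t), s, 1 - t)     # premises conjoined with negated conclusion
    print(is_contradiction(test, 2))                # True, so the schema is valid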
We illustrate by showing the validity of the rule of Modus Ponens. Here first is a restatement
of the rule:
(5)
S®T
S_____
T
(Valid)
and here is an easy deduction which shows its validity by Reductio ad Absurdum, using no rules
of inference other than AE and AI, and no equivalence schema other than the definition of ®:
(6)  a. ((S ® T) Ù S) Ù ¬ T     Premise (conjunction of conjunction of premises with negation of conclusion)
     b. (S ® T) Ù S             AE
     c. S ® T                   AE
     d. ¬(S Ù ¬T)               SE, def. of ®
     e. S                       AE, b (see Remark, below)
     f. ¬T                      AE, a
     g. S Ù ¬ T                 AI
     h. ¬(S Ù ¬T) Ù S Ù ¬ T     AI, d,g, Contradiction
Remark. Actually, we’ve compressed two steps in obtaining (e) from (b) since strictly
speaking we need to commute to get S into position at the beginning before we can apply
AE. In the interest of shortening deductions, we shall from this point on allow ourselves the
liberty of omitting applications of CL required to support moves such as this one.
Validating MP by a deduction which uses only AE, AI and the definition of ® actually
serves a purpose beyond that of illustrating reductio ad absurdum/Method 2. When we first
introduced the various rules of inference that we’ve been using we capitalized on the fact that
they’re simple enough to be easily understood and their correctness therefore reasonably clear
from an intuitive standpoint. It turns out, however, that for most of them — and MP is one such
— we do not in fact have to rely on our intuition to justify them as long as we’re willing to
accept AE, AI and SE: we can validate all the remaining rules of inference deductively. And this
in turn means that the only rules of inference we have to accept on faith are AE and AI — which is helpful, since these are the ones that most people find to be the easiest to take on faith. The two
demonstration problems below continue the project we’ve just started.
Demonstration Problems
1. Using no rules other than AE, AI, SE and MP (the last of which has already been validated
deductively), validate MT deductively.
Solution. First a restatement of the rule:
S®T
¬T___
¬S
(Valid)
The next step is to construct the premise for the reductio, by conjoining the premises of the
original schema with the negation of the conclusion. This gives us
(S ® T) Ù ¬ T Ù ¬¬S
Here now is a deduction which enables us to derive a contradictory conclusion from this
premise:
a. (S ® T) Ù ¬ T Ù ¬¬S     Premise
b. S ® T                   AE
c. ¬¬S                     AE, a
d. S                       SE, CR
e. T                       MP, b,d
f. ¬ T                     AE, a
g. T Ù ¬ T                 AI, Contradiction
Remark. Notice that MT is not used at any point in this deduction. MP is, but we have
already validated it deductively so it’s not cheating to use it here. Nor is it cheating to use
CR, since it was justified in a way which makes no use of MT.
2.
Using no rules other than AE, AI and SE validate OI deductively.
Solution. As before we begin with a restatement of the rule:
S
SÚT
The premise for the reductio is accordingly
S Ù ¬(S Ú T)
Here then is the required deduction:
a. S Ù ¬(S Ú T)     Premise
b. S Ù ¬S Ù ¬T      SE, DM
c. S Ù ¬S           AE, Contradiction
Remark. Recall that DM can be proven without the use of OI.
Alternate solution. OI can also be validated by Method 1, as follows:
a. ¬(S Ú T)     Premise
b. ¬S Ù ¬T      SE, DM
c. ¬S           AE
We can save some labor if we agree to the following shortcut: we will consider a contradiction to
have been established as soon as we derive a line in the deduction which is the negation of some
previous line. Thus, we could stop the two indirect deductions in the problems at lines (f) and (e)
respectively. Indirect deductions of the reductio type can be further shortened by another
expedient: if the conclusion of the schema to be validated is in the form ¬ S then we’ll allow the
premise of the reductio to take the form K Ù S, where K stands for the conjunction of the
premises from the original schema. With these shortcuts in hand we can sometimes validate a
schema more quickly by indirect than by direct means, as the following example will show. The
schema
¬S Ù ¬ T
¬(S Ù T) (Valid)
can be directly validated by the following four-line deduction:
a. ¬S Ù ¬ T
b. ¬ S
c. ¬S Ú ¬T
d. ¬(S Ù T)
Premise
AE
OI, b
SE, DM
However, to validate it indirectly, by a reductio making use of the approved shortcuts, requires
only three lines:
a. ¬S Ù ¬ T Ù S Ù T     Premise
b. ¬S                   AE
c. S                    AE, Contradiction
WARNING! GO ON FROM THIS POINT ONLY WHEN YOU’RE SURE YOU UNDERSTAND REDUCTIO AD ABSURDUM AND ALL OF THE SUPPORTING DISCUSSION AND EXAMPLES — INCLUDING THE DEMONSTRATION PROBLEMS. SEEK HELP IF YOU’RE HAVING TROUBLE. THE CONSEQUENCES OF NOT HEEDING THIS ADVICE COULD BE FATAL.
Against the background provided by our discussion of indirect deduction, let’s now go back
to the problem of validating equivalence schemata. So far we have one — and only one —
method for doing so: by deducing one statement in the schema from the other by a series of steps
in which only other equivalence schemata are involved. We now introduce two additional
methods for establishing equivalences both of which depend critically on indirect deduction.
The first method consists of proving the equivalence of two statements by separately
validating the one-way schemata in which the two statements occur as premise and conclusion
and vice-versa. For example, one way to prove the equivalence of
(7)  a. (S Ú T) ® ¬S
     b. ¬S
is to validate the two one-way schemata
(S Ú T) ® ¬S
¬S

¬S
(S Ú T) ® ¬S
The second can be validated directly, as follows:
a. ¬S                     Premise
b. ¬S Ú ¬(S Ú T)          OI
c. ¬(S Ú T) Ú ¬S          SE, CL
d. ¬((S Ú T) Ù S)         SE, DM
e. ¬((S Ú T) Ù ¬¬S)       SE, CR
f. (S Ú T) ® ¬S           SE, Def. of ®
But the job can be done much more quickly by an indirect deduction using Method 1:
a. ¬((S Ú T) ® ¬S)     Premise
b. (S Ú T) Ù ¬¬S       SE, Def. of ®
c. ¬¬S                 AE
Now let’s consider how to validate the first schema. Here the method of choice is reductio
(Method 2):
a. ((S Ú T) ® ¬S) Ù ¬¬S     Premise
b. (S Ú T) ® ¬S             AE
c. ¬¬S                      AE, a
d. S                        SE, CR
e. S Ú T                    OI
f. ¬(S Ú T)                 MT, b,c, Contradiction
A third way to prove equivalence is to validate a one-way schema with one of the statements
as premise and the other as conclusion and then a second one-way schema with the negations of
the premise and conclusion of the first one as premise and conclusion respectively. Example:
(8)  a. S
     b. S Ú (S Ù T)
One-way schemata to be validated:
S
S Ú (S Ù T)

¬S
¬(S Ú (S Ù T))
Validating the first is easy, requiring nothing more than an application of OI. Here is a validation
of the second:
a. ¬S                  Premise
b. ¬S Ú ¬T             OI
c. ¬(S Ù T)            SE, DM
d. ¬S Ù ¬(S Ù T)       AI, a,c
e. ¬(S Ú (S Ù T))      SE, DM
On the other hand, Method 2 will work just as well:
a. (S Ú (S Ù T)) Ù ¬S     Premise
b. S Ú (S Ù T)            AE
c. ¬S                     AE, a
d. S Ù T                  DS
e. S                      AE
f. S Ù ¬S                 AI, c,e, Contradiction
As this discussion shows, there are many different ways of attacking a problem — if one of
them doesn’t work, try another. But there is a deeper point here as well, an absolutely
fundamental idea behind all mathematical thinking. It is that you can’t always solve a problem
(or solve it easily) by tackling it head on. Sometimes you need to, as it were, sneak up on it from
behind. The methods of indirect deduction are not a mere intellectual curiosity but are used over
and over again in mathematics, the sciences and in philosophy. But we also now see why it is
that the notion of validity is defined the way it is. Sometimes we determine what’s true by
showing something else to be false — which we do by assuming that something and then
deducing a contradiction from it. Every time we do this, we make use of a valid argument with a
false premise and a false conclusion. To exclude this possibility would be to close off one of the
most productive of all techniques of reasoning. So again, we see method in the madness.
New Terms
(in)direct validation
contingency
contradiction
tautology
Reductio ad Absurdum
Problems
1. Give deductions which validate DS and HS. Needless to say, neither of these rules can be
used in the deduction by which it’s validated.
*2.
Let S be any tautology. Then if S entails T, T is also a tautology. Explain why.
*3.
Contradictions have the property that every such statement entails every statement. Supply a
deduction to show that this is so. (Hint: Begin the deduction with the premise S Ù ¬ S.)
4. Validate each of the following by means of an indirect deduction. Each non-premise line in
the deduction is to be justified by one of the following: CR, CL, DM, definitions of Ú, ® and
«, AE, AI, MP, MT, OI, DS, HS.
a. S ® T
S ® ¬T
¬S
b. S ® T
U®V
____
(¬T Ú ¬U) ® (¬S Ú ¬U)
c. (¬S Ù T) ® U
T Ù ¬U
S
d. ¬ T____________
¬(¬(S Ù ¬T) Ù S)
5. The following schema is called the Law of Clavius:
¬S ® S
S
Validate this schema. Each non-premise line in the deduction is to be justified by one of the
following: CR, CL, DM, definitions of Ú, ® and «, AE, AI, MP, MT, OI, DS, HS.
6. There is a rule of inference commonly called CONSTRUCTIVE DILEMMA (CD) which is
expressed by the following schema:
SÚT
S®U
T®U
U
(Valid)
Validate this schema.
For Future Reference: You may henceforth use CD in deductions if you wish. The same
holds for the rule CP (see problem 2(iv) for Chapter 6).
7. Validate the schema
S®T
¬S®U
TÚU
8. Validate the following equivalence:
S
S Ú (S Ù T)
*9.
If a schema can be validated by means of a deduction, then it can be validated indirectly by
Method 2. Explain why.
9
Lemmas
It sometimes happens that in the course of a deduction a step seems reasonable even though there
is no previously stated rule of inference or equivalence schema to support it. On the other hand,
the fact that the step SEEMS reasonable doesn’t guarantee that it is. If the step is justified,
however, there is a way to show that this is the case by creating new rules of inference or
equivalence schemata ‘on the fly’. Such rules are called LEMMAS.
Here is an example of a deduction in which two of the steps are supported by a lemma.
Suppose that we are given the schema
S
T
__
S ® (S ® T)
(Valid)
One way of showing the validity of the schema is via a deduction which starts like this:
a. S         Premise
b. T         Premise
c. S Ù T     AI
d. S ® T     ?
Now, how are we to justify line (d)? It would be nice if we had a rule of inference
SÙT
S®T
but we don’t — not yet, anyway. But if we can prove the validity of this schema, then we can use
it as a rule of inference to justify the step from (c) to (d) in our original deduction. In order to do
this, of course, we must use only rules of inference already in hand. As it happens, this can be
done fairly straightforwardly as follows:
a. ¬(S ® T)      Premise (negation of conclusion)
b. S Ù ¬T        SE, Def. of ®
c. ¬T            AE (see Remark, below)
d. ¬T Ú ¬S       OI
e. ¬(S Ù T)      SE, DM (see Remark)
Remark. Strictly speaking, to get to line (b) from line (a) we must first apply the definition
of ® to obtain ¬¬(S Ù ¬T) and then apply CR. We will henceforth allow applications of CR
to be ‘folded in’ with applications of other rules.
This validates our lemma, which we’ll call Lemma A. Here, then, is a completion of the
original deduction, making use of the lemma.
a. S                  Premise
b. T                  Premise
c. S Ù T              AI
d. S ® T              Lemma A
e. S Ù (S ® T)        AI, a,d
f. S ® (S ® T)        Lemma A
Note that we’ve used Lemma A twice in the completed deduction.
Now let’s consider the question: under what circumstances are you likely to want to use a
lemma? Before we answer this question, let’s look again at the problem we used to introduce the
concept. With the aid of the lemma we were able to give a direct deduction by which to validate
the given schema. But the lemma was validated INDIRECTLY. In other words, we reached a point
in the original deduction where we had to, as it were, change direction and shift from reasoning
directly to reasoning indirectly. Our technique of deduction allows us to reason either directly or
indirectly, but it doesn’t allow us to change from one mode of reasoning to the other within a
single deduction. So: whenever such a change is called for, a lemma is needed.
Here is a somewhat more general (and precise) way of saying the same thing. Suppose that,
in some deduction, you have derived Statement A and you want the next line to have Statement
B. You don’t have a rule of inference in hand to support this move, but you know (or at least
suspect) that the schema
Statement A
Statement B
is valid. To show that it’s valid, however, requires either assuming ¬Statement B and deducing
¬Statement A or assuming Statement A Ù ¬Statement B and deducing a contradiction. You then
validate the schema by one of these techniques and then invoke it as a lemma to support the
desired move in the original deduction.
To see another circumstance in which lemmas are useful, let’s reconsider our original
problem. As it happens, we could, if we wanted, validate the original schema via a single
deduction, as follows:
a. S                        Premise
b. T                        Premise

c. T Ú ¬S                   OI
d. ¬S Ú T                   SE, CL
e. ¬(¬¬S Ù ¬T)              SE, Def. of Ú
f. ¬(S Ù ¬T)                SE, CR
g. S ® T                    SE, Def. of ®

h. (S ® T) Ú ¬S             OI
i. ¬S Ú (S ® T)             SE, CL
j. ¬(¬¬S Ù ¬(S ® T))        SE, Def. of Ú
k. ¬(S Ù ¬(S ® T))          SE, CR
l. S ® (S ® T)              SE, Def. of ®
Now, if you compare the two sections of the deduction set off above (lines c-g and lines h-l) you’ll notice that they follow
exactly the same pattern. That is, the first line of each begins with the application of OI to the
preceding line, followed by a series of substitutions supported respectively by CL, the definition
of Ú, CR and the definition of ®. We have, in other words, gone through exactly the same
complex reasoning process twice, consuming ten lines of a twelve line deduction. In our previous
validation, we needed only a total of eleven lines: five for the lemma and six for the main
deduction. Hence, the deduction using the lemma is shorter by one line. But the saving of a line
here and there is not really what’s important. What IS important is that we go through the
complex reasoning process shown in the two sections above only once rather than twice. So while
we could have done the job without the lemma, we can do it much more elegantly with the
lemma.
Demonstration Problems
1. Validate the following two schemata:
(i) (S Ú T) Ù ¬ T
(S Ú T) ® S
(ii) S « T
T____
(S Ú T) ® S
Solution.
Lemma B.*
S
T®S
Validation of Lemma.
a. S              Premise
b. S Ú ¬T         OI
c. ¬T Ú S         SE, CL
d. ¬T Ú ¬¬S       SE, CR
e. ¬(T Ù ¬S)      SE, DM
f. T ® S          SE, def. of ®
Alternate validation (indirect, by Method 1).
a. ¬(T ® S)       Premise
b. ¬¬(T Ù ¬S)     SE, def. of ®
c. T Ù ¬S         SE, CR
d. ¬S             AE
* This schema is sometimes called the RULE OF CONDITIONALIZATION.
Main Validations.
Validating (i):
a. (S Ú T) Ù ¬ T     Premise
b. S Ú T             AE
c. ¬T                AE
d. S                 DS
e. (S Ú T) ® S       Lemma B
Validating (ii):

a. S « T                  Premise
b. (S ® T) Ù (T ® S)      SE, Def. of «
c. T ® S                  AE
d. T                      Premise
e. S                      MP, c, d
f. (S Ú T) ® S            Lemma B
Discussion. The point of this problem was to show another circumstance under which
lemmas are useful. The point here is that it’s possible to validate both schema (i) and schema
(ii) via a strategy which involves first deducing S and then deducing the shared conclusion of
the two schemata. To do this without the aid of Lemma B would require the repetition of the
same complex reasoning process. By using the strategy of first providing a lemma we go
through that process only once and then invoke it at the appropriate point in each of the main
deductions.
2. Validate the schema
¬S ® T
¬T
___
(¬S Ù ¬T) Ú S
Solution.
a. ¬S ® T               Premise
b. ¬T                   Premise
c. ¬¬S                  MT
d. S                    SE, CR
e. (S Ú T) ® S          Lemma B
f. ¬((S Ú T) Ù ¬S)      SE, Def. of ®
g. ¬(S Ú T) Ú S         SE, DM
h. (¬S Ù ¬T) Ú S        SE, DM
Discussion. You might or might not have noticed that the conclusion of the schema is
equivalent to (S Ú T) ® S. If you did notice this, and also noticed that it’s possible to deduce
S from the premises via MT, then there’s a point in the deduction where you’re in exactly the
same situation as in the two main deductions shown in the previous problem: you’ve gotten
to S and now you want to deduce (S Ú T) ® S. But we have a lemma for that very purpose
— so we use it yet again.
For Future Reference. In writing up problem solutions in which lemmas are invoked in
support of a deduction, always state and prove the lemmas first. (You may not know what
lemmas — if any — you’ll need when you actually begin the problem — this is a
requirement on how to PRESENT the solution, not a specification of the order of steps you need
to go through to find it.) If you had been given the example just worked as a problem then
here is how it should be written up:
Lemma A
SÙT
S®T
Validation of Lemma:
a. ¬(S ® T)      Premise (negation of conclusion)
b. S Ù ¬T        SE, Def. of ®
c. ¬T            AE (see Remark, below)
d. ¬T Ú ¬S       OI
e. ¬(S Ù T)      SE, DM (see Remark)
Main Validation
a. S                  Premise
b. T                  Premise
c. S Ù T              AI
d. S ® T              Lemma A
e. S Ù (S ® T)        AI, a,d
f. S ® (S ® T)        Lemma A
When you begin a new problem, identify any lemmas you use by letters of the alphabet
SUBSEQUENT to ones you have used in solving earlier problems. So, for example, if you have
proven Lemmas A and B in problem 1 and have a new lemma in problem 2, call it Lemma C.
The reason is that if a lemma proven in an earlier problem should be usable in a later one you
don’t have to re-prove it; and since there will be only one Lemma B, say, then any subsequent
reference to this lemma will always be clear.
Earlier we made the claim that MP, MT, HS, OI and DS can all be deductively validated in a
chain of reasoning that begins with just SE, AI and AE. We can now actually parlay this into an
even stronger claim: we can do everything with just SE, AI and AE! The other rules are
themselves really nothing more than lemmas which we have singled out for special recognition
because they are so useful in a wide variety of situations. So there is a sense in which SE, AI and
AE are the ‘rock bottom’ of our deductive system and everything else is built deductively on the
foundation that they provide. When you stop to think how simple and straightforward these three
rules are, this is really quite remarkable. Part of the interest of this subject, in fact, comes from
the realization that from a few obvious initial assumptions we can show many things that are far
less obvious. What we get out of our system, in other words, vastly exceeds what we put into it.
And that’s what makes a logician’s or a mathematician’s heart beat faster!
Optional Section: Logic and Axiomatics
The principles which underlie the deductive approach to validity closely resemble what
mathematicians call an AXIOMATIC SYSTEM. The earliest such system that we know of is the
treatment of the geometry of the plane formulated by the Greek mathematician Euclid in the
third century B.C. There are three main ingredients: a set of UNDEFINED TERMS OR SYMBOLS; a set
of DEFINITIONS; and a set of AXIOMS — statements whose truth is sufficiently obvious to warrant
being assumed without proof. The definitions and axioms are then enlisted in the task of proving
further statements, called THEOREMS, about the relevant class of objects. Among the undefined
terms in Euclid’s geometry are point and line, which can then be used in the definitions of terms
like angle and polygon. Among Euclid’s axioms is one which says that given any two quantities,
one must be either equal to, greater than, or less than the other. One of the better known
theorems is the one which says that the interior angles of a triangle sum to a straight angle. The
axioms and theorems taken together make up the set of THESES of the system.
In our approach to deduction in sentential logic, there is an analogue to each of the
components just described. First, there is a set of undefined symbols, namely ¬ and Ù. These in
turn are used in the definitions of such defined symbols as ® and Ú. The theses are valid
schemata whose validity is either assumed (axioms) or whose validity can be proved. (More
precisely, the theses are statements to the effect that particular schemata are valid, but the
distinction isn’t terribly important in this context.) In the system developed here for SL, the
axioms are the equivalences CR, CL (for conjunction), and AL (for conjunction) and the rules of
inference AE and AI.
What is interesting about axiomatic systems is that they are built on a foundation of obvious
truths — on air, as it were — and yet it is possible to erect on this foundation an edifice of
theorems many of which are highly unobvious. Our confidence in their truth is nonetheless
justified by the fact that they are entailed by the axioms, whose truth does not appear to be open
to question. Every new theorem is another brick in an intellectual tower which extends infinitely
upward and outward from the most modest of beginnings.
In regard to logic in particular, the idea of an axiomatic system has another interesting
feature. The kind of logic you’re studying was developed in part to assist in the study of the
nature of axiomatic systems — including those which pertain to logic itself! The idea of a subject
which has itself as one of its objects of study may seem a bit odd at first, but a little thought will
reveal that it is in the very nature of logic that it should have this property. The part of logic
which focuses on logic is called LOGICAL METATHEORY, or METALOGIC and is the source of some
of the most significant mathematical results of the modern era. One of these is worth a mention
before we go on.
Ask yourself the following question about SL: is there a way to give a validating deduction
for EVERY valid schema in LSL? You might be inclined to answer ‘Yes’ on the grounds that
every such schema that we’ve encountered so far can be validated in this way. But surely this is
not all the valid schemata there are — indeed, there are infinitely many of them — so to answer
the question we must resort to methods of proof. As it happens, it can be proven that there is
indeed a validating deduction for every valid schema (though the details are quite technical and
belong in a more advanced course in logic), a property of the system called COMPLETENESS.
Completeness can also be proved for various more elaborate kinds of logic (one of which will be
presented later on); but it is known that there are also kinds of logic which are not complete in this
sense. But establishing these results is itself an exercise in logic — part of the fascination that the
subject holds for those who study it for a living.
New Terms
lemma
Problems
1. For each of the following schemata, give two deductions, one of which validates the schema
directly while the other validates it indirectly.
*(i)
S
T
¬U
¬((S Ù T) ® U)
(ii)
S
T
¬U
¬((S Ú T) ® U)
(iii)
S
T
U
¬V
¬((S Ù (T Ú U)) ® V)
(iv)
S
T
S ® (S Ù T)
2. [CHALLENGE] The following equivalences are known as the LAWS OF DISTRIBUTIVITY (or
DISTRIBUTIVE LAWS) (‘DL’). Provide deductions by which to validate them.
a. S Ù (T Ú U) ___
(S Ù T) Ú (S Ù U)
b. S Ú (T Ù U) ____
(S Ú T) Ù (S Ú U)
Now prove the following equivalence:
(S ® T) Ù (U ® T)
(S Ú U) ® T
*3.
Validate the following schema, called the DISJUNCTIVE CONDITIONAL rule (‘DC’):
S®T
¬S Ú T
10
Truth Tables
In Chapter 4, we gave rules for interpreting formulas containing the negation and conjunction
signs. In this chapter we are going to first present these rules in a different form and then, in the
next, we’ll introduce a new technique for validation of argument schemata.
Let’s begin by revisiting our rule for interpreting formulas containing the negation sign. This
rule can be re-stated in tabular form, like this:
¬     S
1     0
0     1
**    *

Table 10.1
Each column in the table is headed by a sign or a literal. In this case there is only one literal,
namely S, and only one sign, namely ¬. Each cell of the second column indicates one of the
possible truth values for the statement represented by the literal S while each cell of the first
column indicates the truth value of the statement formed by combining the negation sign with S.
(Remember that true sentences have 1 as their truth value and false sentences 0.) The table tells
us, in other words, that if S has 0 as its truth value then ¬ S has 1 as its truth value; and if S has 1
as its truth value, ¬S has 0 as its value. The asterisks that appear below the columns of the table
will be explained a little later.
Table 10.1 is called a TRUTH TABLE. As the name implies its purpose is to show how to
determine the truth value of the statement ¬ S given the truth value of S. Here now is the truth
table corresponding to our rule for interpreting statements containing the conjunction sign:
S     Ù     T
0     0     0
1     0     0
0     0     1
1     1     1
*     **    *

Table 10.2
In this table the first and third columns are headed by literals and we must consider all the
possible combinations of truth values for them. There are four such possibilities: that both
statements are false (first row), that both are true (last row), that S is true and T false (second
row) and that S is false and T true (third row). The middle column, headed by the conjunction
sign, then shows for each possible combination of truth values for the literals what the truth
value is for the entire statement S Ù T. Note that if one or the other of the literals has the truth
value 0, so does the entire conjunction, and that it has the value 1 when both literals have this
value — exactly in accordance with our original rule.
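The same information can be packaged as functions from truth values to truth values, which is sometimes a convenient way to experiment with the tables. A minimal illustrative sketch (not part of the text's apparatus):

    # The two basic tables as functions from truth values (1 = true, 0 = false) to truth values.
    def neg(s):           # Table 10.1
        return 1 - s

    def conj(s, t):       # Table 10.2
        return 1 if s == 1 and t == 1 else 0

    print(neg(0), conj(1, 0), conj(1, 1))   # 1 0 1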
Tables 10.1 and 10.2 are called BASIC or AXIOMATIC truth tables. Their function is to tell us
how to assign truth values to simple negations or conjunctions based on the truth values of their
component literals. But on the basis of these tables we can construct more complex tables for
longer statements, in which more than one sign occurs. Here is an example:
¬       (¬    S     Ù      ¬     T)
0        1    0     1      1     0
1        0    1     0      1     0
1        1    0     0      0     1
1        0    1     0      0     1
****     **   *     ***    **    *

Table 10.3
This table tells us how to determine, for each possible combination of truth values for S and
T, the truth value of the formula ¬(¬S Ù ¬T). Remember that we must respect parentheses, which
tell us in this case that the first negation sign is to be considered as applying to the entire
conjunction ¬S Ù ¬T. Put somewhat differently, the parentheses tell us that for each possible
combination of truth values for S and T, we first compute the truth value for ¬S Ù ¬T and then
compute the truth value for its negation.
Now the time has come to explain the asterisks which appear below the tables. Their purpose
is to tell us in what order the various steps of filling in the cells of the table are carried out. The
first step is to assign the various possible combinations of truth values to the literals S and T.
This is indicated by the presence of a single asterisk at the bases of the columns headed by these
literals. The second step is to compute the various possible truth values for ¬S and ¬T, referring
back to Table 10.1. Note that everywhere a 0 occurs in the S column, a 1 occurs in the column
directly preceding it, and similarly for the T column. Since these computations are the second
step in the process, we place two asterisks below the column. The third step is to compute the
various possible truth values of ¬S Ù ¬T, referring back to Table 10.2. And finally, the fourth
step is to compute the truth values for the entire statement, so there are four asterisks beneath the
leftmost column of the table — the last one to be filled in.
In a truth table, the column at whose base the largest number of asterisks appears represents
the column containing, for each possible combination of truth values for its literals, the truth
value of the entire statement and is called the FINAL COLUMN. Note that this does not mean that it
is the last — that is, the rightmost — column of the table in a physical sense, but rather that it is
THE LAST COLUMN TO BE FILLED IN THE COURSE OF THE COMPUTATION.
The filling in of Table 10.3 makes it possible for us to give a table for disjunction, since S
Ú T is equivalent to ¬(¬S Ù ¬T):
S     Ú     T
0     0     0
1     1     0
0     1     1
1     1     1
*     **    *

Table 10.4
(Note that the final column of this table has all 1’s except in the first cell, corresponding to the
rule which says that a disjunction is true if, but only if, at least one of its disjuncts is true.) Tables
for conditionals and biconditionals can be worked out in the same way. (This is done in the next
chapter.)
In our examples so far, we have had only two literals. But we must make allowance as well
for statements containing three or more. And here it is necessary to make a subtle distinction,
which arises when we consider, say, the formula (S Ú T) Ù ¬S. How many literals does this
formula contain? The temptation is to say that it contains three, and in a sense this is correct: if
we count symbols in the formula, there are six in all (not counting the parentheses) and three of
these are literals. On the other hand, there is another sense in which there are only two literals
since one — namely S — occurs twice. That is, in constructing the formula we have made use of
only two distinct symbols, S and T. We will deal with this situation by adhering to the following
principle: if the same literal occurs more than once, WE COUNT ONLY THE FIRST OCCURRENCE.
Hence, we take our example formula to contain only two literals. However, we recognize three
OCCURRENCES of literals in the formula: two occurrences of S and one of T. Why we are going to
all the trouble of making this distinction will become apparent shortly.
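The distinction is easy to make precise computationally: collect the literal occurrences in a formula and then discard duplicates. An illustrative sketch (the ASCII spelling of the connectives is an assumption made for the example):

    formula = "(S v T) & ~S"                      # ASCII stand-in for the formula discussed above
    occurrences = [c for c in formula if c in "ST"]
    literals = sorted(set(occurrences))
    print(len(occurrences), len(literals))        # 3 occurrences, 2 literals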
Here now is the truth table for the formula we have just discussed:
(S    Ú     T)    Ù       ¬      S
 0    0     0     0       1      0
 1    1     0     0       0      1
 0    1     1     1       1      0
 1    1     1     0       0      1
 *    **    *     ****    ***    *

Table 10.5
As before, we begin by associating combinations of truth values with the literals. Since there are
three occurrences of literals in the formula, there are three columns with a single asterisk at the
base. Notice now that because there are two occurrences of the literal S, THE COLUMNS HEADED
BY THE OCCURRENCES OF THIS LITERAL ARE IDENTICAL. This is a result of the obvious need to assure
that if a literal has a given truth value in one occurrence, then it has the same truth value in any
subsequent occurrences. Step two of the process is then to compute the possible values of S Ú T
and step three to compute the values of ¬S. Finally, we compute the values for the entire
statement, the result of which is shown in the fourth column.
You may have noticed that in filling Table 10.5 we could have reversed the second and third
steps (that is, filling in the second and the fifth columns) without affecting the outcome.
However, in the interest of consistency of procedure, when we have the choice of which column
to fill next, WE WILL ALWAYS CHOOSE THE LEFTMOST ONE.
Suppose now that we have a formula which contains three different literals, for example, the
formula (S Ù T) Ù ¬ U. Now things become quite complicated, for whereas there are four
possible combinations of truth values for a formula with two literals, there are eight
combinations for this one:
all three literals false
all three literals true
S false, others true
S and T false, U true
S and U false, T true
S true, others false
S and T true, U false
S and U true, T false
As the number of literals grows, the mind begins to boggle at all the possible combinations of
truth values. And yet we must somehow make provision for them. There are two questions to
which we need answers: First, for each number of literals, how many different possible
combinations of truth values are there? (That is, how many rows will we need in the truth table
for a formula containing that number of literals?) And second, once we have the right number of
rows, how do we make sure that each row is different?
As a first step toward answering the first question, let’s consider the possibilities we’ve
looked at thus far. If there is just one literal, as in Table 10.1, then we need only two rows; if
there are two literals, then we need four rows; and if there are three, we need eight rows. In other
words, the number of rows needed doubles each time we add a new literal. If this pattern
continues, then for four literals we’ll need sixteen rows, for five we’ll need thirty-two, and so on.
But can we be sure that the pattern WILL continue in this way? Put in something closer to
Mathematicalese, can we be sure for any n greater than 1 that the number of combinations of
truth values for n different literals is twice that for a formula with n - 1 literals? The answer is
‘yes’, for the following reason. Pick a number — any number you please — as long as it’s a
whole number greater than 0. Call this number n and let C represent the number of different
combinations of truth values for a formula containing n different literals. Now suppose that we
have a formula containing n+1 different literals. For the first n literals there will be C possible
combinations. Then for each of these combinations we must consider, first, the possibility that
the n+1st literal is false; then we must go through the process again considering the possibility
that the n+1st literal is true. In other words, the number of possible combinations for n+1 literals
— that is, 2C — will have doubled over what we needed for n literals. Put in a more general
way, for a formula containing n different literals we need 2n rows in the truth table for the
formula. Here then, without any values filled in yet, is the table for (S Ù T) Ù ¬ U.
(S    Ù    T)    Ù    ¬    U
 *         *               *

Table 10.6
To make sure that we get all eight combinations — in other words, that we don’t repeat a
combination — we adopt the following strategy. For the first literal (and all subsequent
occurrences of it, should there be any) we alternate 0’s and 1’s. Then, for each other literal (and
any subsequent occurrences) we double the number of rows before an alternation takes place, as
compared to the immediately preceding literal. So, for example, we alternate after every second
row for the second literal, after every fourth for the third and so on. This then gives us the pattern
shown below:
(S    Ù    T)    Ù    ¬    U
 0         0               0
 1         0               0
 0         1               0
 1         1               0
 0         0               1
 1         0               1
 0         1               1
 1         1               1
 *         *               *

Table 10.7
For Future Reference: In doing problems which require the use of truth tables, always follow
the procedure just outlined. If you do, you’ll minimize the likelihood of a mistake and also help
preserve the sanity of the grader!
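If you want to generate this row pattern for any number of literals, a short sketch like the following will do it (an illustrative supplement, not part of the text's procedure):

    # Generate the 2**n rows in the order used in Table 10.7: the first literal alternates
    # every row, the second every two rows, the third every four rows, and so on.
    def rows(n):
        return [[(i >> k) & 1 for k in range(n)] for i in range(2 ** n)]

    for r in rows(3):
        print(r)
    # Prints eight rows, [0, 0, 0] through [1, 1, 1] -- 2**3 combinations, none repeated.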
Truth tables are useful for a variety of purposes, as we will see in the next two chapters. For
the moment, we’ll exploit this new technique to show how it is possible to determine whether a
compound formula is a tautology, contingency or contradiction.
We illustrated the notion of a tautology with the formula S Ú ¬ S. That this statement will be
true regardless of the truth value assumed for S is easy enough to see. But what about a longer
and more complex statement, like ¬(¬(S Ù T) Ù ¬(¬ S Ú ¬ T))? Well, if you’re very good at this
sort of thing you might have recognized that ¬(S Ù T) and ¬S Ú ¬T are equivalent according to
De Morgan’s Laws; since the former entails the latter, the conjunction of the first and the
negation of the second is a contradiction and its negation is therefore a tautology. But you don’t
have to have that kind of insight in order to be able to determine that the original formula is a
tautology: you can do it completely mechanically by means of a truth table, as follows:
¬           (¬    (S   Ù    T)   Ù          ¬          (¬     S   Ú        ¬       T))
1            1     0   0    0    0          0           1     0   1        1       0
1            1     1   0    0    0          0           0     1   1        1       0
1            1     0   0    1    0          0           1     0   1        0       1
1            0     1   1    1    0          1           0     1   0        0       1
*********    ***   *   **   *    ********   *******     ****  *   ******   *****   *

Table 10.8
It will be observed that in the final column of the table we see nothing but 1’s — indicating that
the formula as a whole is true no matter what assumptions are made about the truth values of its
literals. In the next-to-final column, we see only 0’s, thus showing that we have negated a
contradiction. We have thus determined that our formula is a tautology since, if it comes out true
no matter what truth values are assumed for its literals, then it is impossible for it to be false. If a
compound formula is a contingency, on the other hand, then both 0 and 1 will appear at least
once in the final column of the formula’s truth table.
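The test just described is easy to mechanize. Here is an illustrative Python sketch (ours; the function name classify is not part of LSL) that computes the final column and reads the verdict off it:

    # Illustrative sketch: classify a formula by inspecting its final column.
    from itertools import product

    def classify(formula, names):
        rows = [dict(zip(names, values))
                for values in product([False, True], repeat=len(names))]
        final_column = [formula(**row) for row in rows]
        if all(final_column):
            return "tautology"        # nothing but 1's in the final column
        if not any(final_column):
            return "contradiction"    # nothing but 0's
        return "contingency"          # both 0 and 1 appear

    # The formula of Table 10.8, namely ¬(¬(S Ù T) Ù ¬(¬S Ú ¬T)):
    print(classify(lambda S, T: not ((not (S and T)) and (not ((not S) or (not T)))),
                   ["S", "T"]))       # prints: tautology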
New Terms
truth table
basic/axiomatic
final column (of)
Problems
All of the following problems can be solved by the use of truth tables.
1. For each of the following formulas determine if there is a combination of truth values for S
and T such that the formula as a whole is true. If the answer is yes, give such a combination
(there could be more than one); if the answer is no, so state.
*a. (S Ù T) Ú ¬T
b. (S Ú T) Ù ¬T
c. ¬(¬S Ú ¬T) Ù ¬T
d. (¬S Ú ¬T) Ù ¬T
2. For each of the following formulas determine if there is a combination of truth values for S,
T and U such that the formula as a whole is true. If the answer is yes, give such a
combination (there could be more than one); if the answer is no, so state.
a. (S Ù T) Ù ¬(T Ù ¬U) Ù ¬U
b. (S Ú T) Ù ¬(T Ú ¬U) Ù ¬S Ù ¬T Ù ¬U
3. For each of the following statements determine whether it is a tautology, a contradiction or a
contingency.
a. ¬(S Ù ¬T) Ù S Ù ¬T
b. (¬S Ú T) Ù S Ù T
c. ¬((¬S Ú T) Ù S Ù ¬T)
d. (¬¬(S Ù T) Ú (¬S Ú ¬T)) Ù (¬(¬S Ú ¬T) Ú ¬(S Ù T))
11
Validity Revisited
If an argument is valid then we can show this by means of an appropriately constructed
deduction. Truth tables give us another way of doing the same thing — and more besides.
Suppose that S is the conjunction of the premises of an argument and T is the conclusion. Then
since, if the argument is valid, it’s impossible for S to be true and T false, S Ù ¬T is a
contradiction; similarly, ¬(S Ù ¬T) is a tautology. So we can test an argument schema for validity
by putting it in one of these forms and then constructing a truth table. If we use the first form
then we are looking for a final column consisting entirely of 0’s — the truth table analogue of a
deduction by Reductio ad Absurdum. If we use the second form then we are looking for a final
column consisting entirely of 1’s — the analogue of a direct deduction.
There is, however, a slightly quicker way to validate an argument schema which owes to the
fact that ¬(S Ù ¬T) can be abbreviated by the conditional S ® T. To make the process of
checking conditionals by truth tables easier we first construct the truth table for ¬(S Ù ¬T), which
is by definition equivalent to S ® T:
¬       (S    Ù     ¬     T)
1        0    0     1     0
0        1    1     1     0
1        0    0     0     1
1        1    0     0     1
****     *    ***   **    *

Table 11.1
From this table we can then construct one for the conditional:
S     ®     T
0     1     0
1     0     0
0     1     1
1     1     1
*     **    *

Table 11.2
For Future Reference: A conditional is false only if its antecedent is true and its consequent
false. Note especially that A CONDITIONAL IS TRUE IF ITS ANTECEDENT IS FALSE.
Then we can validate any argument schema by putting it in the form of a conditional with the
conjunction of the premises as its antecedent and the conclusion as its consequent and then
testing it via a truth table making use of Table 11.2. Example: to validate MP, that is,
S®T
S
T
we construct the following truth table:
((S    ®     T)    Ù      S)    ®       T
  0    1     0     0      0     1       0
  1    0     0     0      1     1       0
  0    1     1     0      0     1       1
  1    1     1     1      1     1       1
  *    **    *     ***    *     ****    *

Table 11.3
Note that the final column consists entirely of 1’s.
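The whole test can likewise be handed over to a machine. The following Python sketch (ours; the helper imp simply plays the role of ®) pronounces a schema valid just in case the associated conditional comes out true on every row:

    # Illustrative sketch: a schema is valid iff no row makes every premise
    # true and the conclusion false, i.e. iff the associated conditional is a
    # tautology.
    from itertools import product

    imp = lambda a, b: (not a) or b   # the truth table of Table 11.2

    def is_valid(premises, conclusion, names):
        for values in product([False, True], repeat=len(names)):
            row = dict(zip(names, values))
            if all(p(**row) for p in premises) and not conclusion(**row):
                return False
        return True

    # MP:  S ® T, S  therefore  T
    print(is_valid([lambda S, T: imp(S, T), lambda S, T: S],
                   lambda S, T: T,
                   ["S", "T"]))       # prints: True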
A similar strategy can be invoked for validating equivalence schemata. Here we begin with
the truth table for the conjunction of a conditional and its converse:
(S    ®     T)    Ù       (T    ®      S)
 0    1     0     1        0    1      0
 1    0     0     0        0    1      1
 0    1     1     0        1    0      0
 1    1     1     1        1    1      1
 *    **    *     ****     *    ***    *

Table 11.4
and from this construct a table for the biconditional:
S     «     T
0     1     0
1     0     0
0     0     1
1     1     1
*     **    *

Table 11.5
Now let’s validate the equivalence schema
¬(S Ù T)
¬S Ú ¬T
(one of De Morgan’s Laws):
(¬      (S    Ù     T))    «          (¬      S    Ú        ¬       T)
 1       0    0     0      1           1      0    1        1       0
 1       1    0     0      1           0      1    1        1       0
 1       0    0     1      1           1      0    1        0       1
 0       1    1     1      1           0      1    0        0       1
 ***     *    **    *      ******      ****   *    *****    ****    *

Table 11.6
The use of truth tables has the advantage over the use of deductions of being completely
mechanical: no insight or ingenuity is required — only care in making sure that the table has
been constructed according to the procedures specified in the last chapter. There is a
corresponding disadvantage, however, namely that the table required for a formula with more
than two or three literals becomes inconveniently large. For example, for a formula containing
five literals, 32 rows are required in the truth table (2^5 = 32). In such cases it's usually much
quicker to show validity by a deduction than by the often tedious process of constructing a truth
table. Deductions are also a much better learning tool since, unlike truth tables, they force you to
think — which is why we didn’t begin discussing truth tables until deductions were well in hand.
But there is one thing that it is always possible to do with a truth table that cannot be done
with deductions. Though you can use a deduction to show that an argument is valid, you can’t
show deductively that an argument is INVALID. While producing an appropriate deduction enables
you to show validity, to show invalidity you need to show that NO SUCH DEDUCTION EXISTS —
quite a different matter! But with truth tables you can show invalidity as well as validity by
exactly the same technique: if the conditional whose antecedent is the conjunction of the
premises and whose consequent is the conclusion is a tautology — as indicated by a final column
of a truth table consisting entirely of 1’s — then the associated argument schema is valid; but if it
turns out to not be a tautology then a 0 will show up somewhere in the final column of its truth
table — which tells us that the associated schema is invalid.
Demonstration Problems
1. Determine the validity of the following argument schema using a truth table.
S®T
T ___
S
Solution. The first step is to construct the required conditional, which is:
((S ® T) Ù T) ® S
There are only two literals, so a truth table of only four rows is required:
((S    ®     T)    Ù      T)    ®       S
  0    1     0     0      0     1       0
  1    0     0     0      0     1       1
  0    1     1     1      1     0       0
  1    1     1     1      1     1       1
  *    **    *     ***    *     ****    *
Notice that the final column shows a 0 in the third row. Hence the formula is not a tautology and
the schema is INVALID.
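A sketch along the same lines (again ours, purely for illustration) will not only report invalidity but also hand back the row responsible for it:

    # Illustrative sketch: collect the rows on which all premises are true and
    # the conclusion false; an empty list means the schema is valid.
    from itertools import product

    imp = lambda a, b: (not a) or b

    def counterexamples(premises, conclusion, names):
        bad_rows = []
        for values in product([False, True], repeat=len(names)):
            row = dict(zip(names, values))
            if all(p(**row) for p in premises) and not conclusion(**row):
                bad_rows.append(row)
        return bad_rows

    # The schema just tested:  S ® T, T  therefore  S
    print(counterexamples([lambda S, T: imp(S, T), lambda S, T: T],
                          lambda S, T: S,
                          ["S", "T"]))
    # prints: [{'S': False, 'T': True}], the row behind the 0 in the table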
2. Validate HS using a truth table.
Solution. Restatement of the schema:
S®T
T®U
S®U
The conditional to be checked is accordingly
((S ® T) Ù (T ® U)) ® (S ® U)
Since there are three different literals, S, T and U, we require eight rows in our table:
((S    ®     T)    Ù       (T    ®      U))    ®         (S    ®       U)
  0    1     0     1        0    1      0      1          0    1       0
  1    0     0     0        0    1      0      1          1    0       0
  0    1     1     0        1    0      0      1          0    1       0
  1    1     1     0        1    0      0      1          1    0       0
  0    1     0     1        0    1      1      1          0    1       1
  1    0     0     0        0    1      1      1          1    1       1
  0    1     1     1        1    1      1      1          0    1       1
  1    1     1     1        1    1      1      1          1    1       1
  *    **    *     ****     *    ***    *      ******     *    *****   *
The truth table method for determining validity of arguments is an example of what is called a
DECISION PROCEDURE. A decision procedure is a purely mechanical way of getting a ‘yes’ or a
‘no’ answer to a question, such as ‘Is this a valid argument?’ Because it is completely
mechanical it could in principle be carried out by a computer, the only limitations being practical
ones having to do with how much memory is required and how long it takes to carry out the
procedure. One of the major preoccupations of theoretical computer science is with
implementing efficient decision procedures.
Logicians are not necessarily interested in whether you can make a decision procedure
efficient — they’re happy to leave that problem to the computer scientists. But logicians ARE
interested in whether or not decision procedures EXIST AT ALL for particular problems. In the case
of determining validity of arguments of sentential logic, such a procedure does exist. But as we
shall soon see, there’s more to logic than just sentential logic. And when you move on to other
kinds of problems, it turns out that decision procedures are not always available. One of the
central concerns of logic in the modern era, in fact, has been with precisely the question of when
a problem that can be phrased in the form of a question with a ‘yes’ or ‘no’ answer can be solved
by a decision procedure.
We raise this point because later on we are going to expand our logical system in a way
which takes it beyond the realm of decision procedures. If an argument is valid we can always
construct a deduction by which to show that this is so, but there’s no analogue of the truth table
method. All the more reason, then, to focus on deductive methods.
New Terms
decision procedure
Problems
For any three of the schemata given in problem 1 at the end of Chapter 5 (including the starred
ones!), show the validity of the schema by means of a truth table.
12
Logical Relations
For any two literals there are four possible combinations of truth values. If we pick S and T, for
example, then both may be true, both may be false, S can be true and T false or S false and T
true. In more technical language we say that the two statements are LOGICALLY INDEPENDENT, by
which we mean that the truth or falsity of one has no bearing on the truth or falsity of the other.
But now consider, say, the statements S and S Ú T. These are not logically independent because
if S is true then we know that S Ú T is also true (though not conversely), and if S Ú T is false
then we know that S is false (though not conversely). Similarly, S and S ® T are not logically
independent because we know that if S ® T is false then S is true. (Why?)
Statements which are not logically independent are said to be LOGICALLY RELATED. The
relations that obtain between related sentences may take a number of forms some of which we
have already investigated. If one statement entails another, for example, the two statements are
logically related since if we know that the former is true the latter must be as well, and if the
latter is false so must the former be. Equivalence takes this one step further: if two statements are
equivalent, then if you know the truth value of either (it doesn’t matter which one it is) this tells
you the truth value of the other. The same is true of negation: if you know that two sentences
negate each other and know the truth value of one (again, it doesn't matter which one), then this
tells you the truth value of the other one as well.
In regard to negation, it’s helpful to introduce a distinction that will help us avoid confusion.
The terms negate and negation are commonly used in two ways in logic. We are using them in
one way when we say, for example, that ¬S negates/is the negation of S; we are using them in a
different though related way when we say that ¬S and S negate each other — meaning that it is
impossible for ¬S and S to have the same truth value. We can distinguish the two senses by
saying that ¬S EXPLICITLY or OVERTLY negates S, and that S IMPLICITLY or COVERTLY negates ¬S.
That is: the only way to overtly negate a given statement is to put the sign ¬ in front of it; any
other way of negating a statement is a covert negation. So, e.g., ¬¬S overtly negates ¬S, whereas
S covertly negates it. By the same token, ¬(S Ù T) overtly negates S Ù T while ¬S Ú ¬T covertly
negates it; ¬(S ® T) overtly negates S ® T while S Ù ¬T covertly negates it; and so on. A
statement has just one overt negation, but may have many different covert ones; every covert
one, however, is equivalent to the overt one.
Back in Chapter 1 we spoke of consistency and inconsistency. These are ideas that we can
now understand more precisely in terms of the ideas of logical relatedness and logical
independence. To say that two sentences are inconsistent is to say that it’s impossible for them
both to be true, whereas to say that they are consistent is to say that it is possible for them both to
be true. This in turn means that sentences which are logically independent are always consistent,
and sentences which are inconsistent are always logically related.
Logical relations come in many forms but a number play an especially important role —
entailment, equivalence and negation among them. In this chapter we are going to look at two
more such relations, called CONTRARIETY and SUBCONTRARIETY.
Two statements are said to be CONTRARIES of each other if, and only if, it is possible for them
to both be false but impossible for them to both be true.* The formulas S Ù T and ¬S Ù ¬T are
related in this way. There are any of a number of ways in which we could assure ourselves that
this is true, one of which is to construct a truth table for their conjunction. If you do this (I hope
you will) and you do it correctly (as I also hope you will) then you’ll notice two things. The first
is that the final column of the table consists entirely of 0’s. The second is that the ‘semi-final’
columns, that is the ones headed by the remaining two conjunction signs, will never both show
1’s in the corresponding cells, though they will both sometimes show 0’s.
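If you would like a mechanical companion to that truth table, here is a small illustrative Python check (ours) that confirms both observations at once for S Ù T and ¬S Ù ¬T:

    # Illustrative sketch: the two formulas are contraries, i.e. never both
    # true (so their conjunction is a contradiction) but sometimes both false.
    from itertools import product

    f = lambda S, T: S and T
    g = lambda S, T: (not S) and (not T)

    rows = list(product([False, True], repeat=2))
    print(any(f(S, T) and g(S, T) for S, T in rows))                # False: never both true
    print(any((not f(S, T)) and (not g(S, T)) for S, T in rows))    # True: sometimes both false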
We can also ascertain deductively that the two formulas are related in this way, for to say that
it is impossible for both to be true is to say that each entails the negation of the other. That is, the
following two schemata are valid:
(1)   S Ù T
      ¬(¬S Ù ¬T)

(2)   ¬S Ù ¬T
      ¬(S Ù T)

* This is a narrower definition than the usual one, according to which two sentences are contraries if and only if it is impossible for them to both be true. On this definition (but not on the one we are assuming here) negation is a special case of contrariety. We opt here for the narrower definition because it is likely to be less confusing for beginners.
Here is a deduction by which we can validate (1):
a. S Ù T            Premise
b. S                AE
c. S Ú T            OI
d. ¬¬(S Ú T)        SE, CR
e. ¬(¬S Ù ¬T)       SE, DM
(I leave it to you to validate (2) deductively.)
What is the difference between contraries and negations? The answer is that when two
sentences negate each other, it’s impossible for them to have the SAME truth value — they can be
neither both true nor both false. When two sentences are contraries of each other, only half of the
definition of negation holds: the two sentences can’t both have 1 as their truth value but they CAN
both have 0.
Two sentences are SUBCONTRARIES of each other if, and only if, their negations are contraries.
Thus, while subcontraries can both be true, they cannot both be false. The two formulas S
Ú T and ¬S Ú ¬T are thus subcontraries of each other.*
Negation, contrariety, subcontrariety and entailment are themselves interconnected, in that:
(i) If S and T are CONTRARIES then each ENTAILS the NEGATION of the other.
(ii) If S and T are SUBCONTRARIES then each IS ENTAILED BY the negation of the other.
Why is this so? The following table shows the possible combinations of truth values when S and
T are contraries:
S     T
0     0
1     0
0     1

Table 12.1
* Here, as in the case of contrariety, we use a narrower definition than the usual one, according to which subcontraries can't both be false but which doesn't exclude the possibility of subcontraries also being negations.
Assuming that these are the only possible combinations, consider the consequences for the
conditionals S ® ¬T and T ® ¬S:
S     ®      ¬     T
0     1      1     0
1     1      1     0
0     1      0     1
*     ***    **    *

Table 12.2
T     ®      ¬     S
0     1      1     0
0     1      0     1
1     1      1     0
*     ***    **    *

Table 12.3
Note that all three combinations yield a true conditional in both instances; hence it’s impossible
for the antecedent of either to be true and the consequent false, so the antecedent of each entails
the consequent.
A similar demonstration can be given for (ii). Here we start with the table
S     T
1     0
0     1
1     1

Table 12.4
and then consider the conditionals ¬S ® T and ¬T ® S:
¬      S     ®      T
0      1     1      0
1      0     1      1
0      1     1      1
**     *     ***    *

Table 12.5
¬      T     ®      S
1      0     1      1
0      1     1      0
0      1     1      1
**     *     ***    *

Table 12.6
These connections between the three relations can be neatly summarized in a diagram known as the Square of Opposition.
Fig. 12.1
The Square of Opposition
Suppose that we place the formulas S Ù T and ¬S Ù ¬T at the upper corners of the square, as
shown in Fig. 12.2. We thus indicate that the formulas in question are contraries. The vertical
arrows show directions of entailment. Since the conjunction of two formulas entails the
disjunction of these formulas, we can place S Ú T and ¬S Ú ¬T as shown. Fig. 12.2 then tells us
that the two disjunctions are subcontraries and also that the statements connected by diagonals
negate each other.
Fig. 12.2
(The two formulas at the left-hand corners could as well appear at the right-hand ones and vice versa. That is, the two contraries could be interchanged as long as the two subcontraries are as well.)
The Square of Opposition, apart from the fact that it’s rather pretty, provides a convenient
way of summarizing relationships that it’s easy to get mixed up about, so taking a little time to
commit it to memory is well worth the effort. Here are the three main principles:
1. Diagonals link negations.*
2. SUBcontraries are at the bottom (sub- means 'below') so contraries are at the top.
3. Statements at the top of the square entail those vertically below them, BUT NOT CONVERSELY.

* This is perhaps the appropriate place to point out that we're using the term NEGATION in two somewhat different senses. On the one hand, we speak of forming the negation of a statement by prefixing ¬ to it; on the other, however, we speak of two formulas as negations of each other when they cannot have the same truth value — implying that a single statement can have more than one negation. This inconsistency doesn't get in the way of anything but for those who are bothered by it, the term negation in the second sense could be replaced by the term contradictory.
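Principle 3 is easily confirmed by machine as well. The following Python sketch (ours; the name entails is just a label) checks that S Ù T entails S Ú T but not conversely:

    # Illustrative sketch: f entails g iff there is no row on which f is true
    # and g false.
    from itertools import product

    def entails(f, g, names):
        return all(g(**dict(zip(names, values)))
                   for values in product([False, True], repeat=len(names))
                   if f(**dict(zip(names, values))))

    print(entails(lambda S, T: S and T, lambda S, T: S or T, ["S", "T"]))   # True
    print(entails(lambda S, T: S or T, lambda S, T: S and T, ["S", "T"]))   # False: not conversely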
Demonstration Problems
1. Arrange the following statements on the Square of Opposition.
S®T
T®S
T Ù ¬S
S Ù ¬T
Solution.
T Ù ¬S        S Ù ¬T
S ® T         T ® S

or

S Ù ¬T        T Ù ¬S
T ® S         S ® T
Discussion. S ® T is equivalent to ¬S Ú T (DC — see problem 3 for Chapter 9) and is
thus entailed by T Ù ¬S, which is equivalent to ¬S Ù T (CL). T ® S and S Ù ¬T are
similarly related, for the same reason.
2. Arrange the following statements on the Square of Opposition.
SÙT
¬S ® T
¬(S Ú T)
¬S Ú ¬T
Solution.
S Ù T         ¬(S Ú T)
¬S ® T        ¬S Ú ¬T

or

¬(S Ú T)      S Ù T
¬S Ú ¬T       ¬S ® T
Discussion. By DM, ¬(S Ú T) is equivalent to ¬S Ù ¬T, so ¬(S Ú T) and S Ù T are contraries
and ¬(S Ú T) entails ¬S Ú ¬T.
Now let’s come back to the ideas of consistency and entailment. Here are some useful
principles to remember:
1. If two statements negate each other then they are INCONSISTENT.
2. If two contingencies are INCONSISTENT and are not negations of each other, then they are CONTRARIES.
3. If two contingencies are INCONSISTENT then neither entails the other.
4. Subcontraries are CONSISTENT.
Henceforth we will allow ourselves to use any relationships which are shown in Fig. 12.2 to
justify the steps in a deduction, using the abbreviation SO to mark any line so obtained.
Demonstration Problems
3. Validate the schema
¬S Ù ¬T
S ® ¬T
Solution. From Fig. 12.2 we know that ¬S Ù ¬T entails ¬S Ú ¬T. We’ll henceforth allow a
shortcut in deductions, whereby moving from a conjunction to a disjunction in a single step is
allowed, writing ‘SO’ (for ‘Square of Opposition’) any time we do so. Hence, we can write
a. ¬S Ù ¬T          Premise
b. ¬S Ú ¬T          SO
c. ¬(S Ù T)         SE, DM
d. ¬(S Ù ¬¬T)       SE, CR
e. S ® ¬T           SE, Def. of ®
4. Validate the schema
¬(S Ú T)
¬S Ú ¬ T
Solution. The premise of the schema is the negation of S Ú T, hence equivalent, according to
Fig. 12.2, to ¬S Ù ¬T — which in turn entails the conclusion.
Alternate Solution. S Ú T and ¬S Ú ¬T are subcontraries so the negation of the former
entails the latter. But the negation of the former is equivalent, according to Fig. 12.2, to
¬S Ù ¬T.
Discussion. We see here an example of the use of the Square of Opposition to even further
shortcut the deductive process. Step-by-step deduction is always available if needed but there
are circumstances under which it is possible to show entailment simply by recognizing
equivalences and following the pattern shown in the diagram.
Although it might not be obvious at first, implicit in the idea of consistency is a connection
between consistency on the one hand and validity and entailment on the other. The connection
can be described very simply: to say that the premises of a valid argument entail the conclusion
is to say that the premises and the negation of the conclusion are inconsistent. For this reason
some introductory treatments of logic put the primary emphasis on consistency and on the
conditions under which two statements are consistent or inconsistent. And many logicians
consider the idea of consistency to be the most fundamental one in the discipline.
Consistency is important for at least one other reason. In order for an argument to establish
the truth of its conclusion, two conditions must be satisfied: the argument must be valid, AND all
of its premises must be true. But how do we know if the premises of an argument are all true?
This is not necessarily a logical matter — for example, the truth of the statement All whales are
mammals doesn’t depend on the laws of logic but on certain facts about the anatomy and
physiology of living organisms. But there is one set of conditions under which you can use logic
to establish something about the truth of the premises of an argument, namely if it turns out that
two or more of the premises are INCONSISTENT. If so, then you know that the premises are NOT all
true and the argument — even if valid — is insufficient to establish the truth of the conclusion.
(This does NOT mean, by the way, that the conclusion is thereby shown to be false; all it shows is
that IF the conclusion is true, it has to be shown to be true in some other way.)
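This connection, too, can be checked mechanically. Here is an illustrative Python sketch (ours; the name premises_consistent is just a label) that reports whether a set of premises can all be true at once:

    # Illustrative sketch: premises are consistent iff some row makes them all
    # true.  If no row does, the argument cannot establish its conclusion even
    # if it happens to be valid.
    from itertools import product

    imp = lambda a, b: (not a) or b

    def premises_consistent(premises, names):
        return any(all(p(**dict(zip(names, values))) for p in premises)
                   for values in product([False, True], repeat=len(names)))

    # S, S ® T and ¬T cannot all be true:
    print(premises_consistent([lambda S, T: S,
                               lambda S, T: imp(S, T),
                               lambda S, T: not T],
                              ["S", "T"]))    # prints: False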
New Terms
contrary
subcontrary
Square of Opposition
Problems
1. Validate schema (2) from the chapter.
*2. The following four formulas are so related that they can be arranged on the Square of Opposition. Show how to arrange them appropriately. (That is, put each formula at an appropriate corner of the square.)
S®T
T®S
¬(S ® T)
¬(T ® S)
3. Note: Do not attempt this problem unless you have worked through problem 2.
The following four formulas are so related that they can be arranged on the Square of
Opposition. Show how to arrange them appropriately.
S«T
¬(S ® T) Ú ¬(T ® S)
¬(S ® T) Ù ¬(T ® S)
(S ® T) Ú (T ® S)
*4. All contradictions are equivalent to each other, as are all tautologies. Explain.
5. (Note: Before doing this problem you should review problem 3 from Chapter 8.) If a
statement is a tautology, then it is entailed by every statement. Explain why.
*6. Explain why each of the following is true.
a. Two statements are consistent if, and only if, their conjunction is not a contradiction.
b. If two statements are tautologies then the two statements are consistent.
c. If one of two statements is a contradiction then the two statements are inconsistent.
d. If one of two statements entails the other, and the entailing statement is not a contradiction, then the two statements are consistent.
7. [Note: Do not attempt this problem until you have worked through problem 6.] For each of
the following pairs of statements, determine whether the statements are consistent or
inconsistent and explain your answer.
*a. i. S ® T    ii. ¬(S Ú T)
*b. i. S « T    ii. S Ù ¬T
c. i. S Ù T    ii. S Ú ¬T
d. i. S Ù T    ii. ¬(S ® T)
e. i. S Ú T    ii. ¬(S « T) Ù ¬S
13
Symbolization
We have spent the last several chapters introducing one variety of symbolic logic, namely
sentential logic, and the associated language, LSL. Part of the motivation for the creation of this
special language comes from the considerations discussed in Chapter 3, but there is at least one
further reason that we have not yet discussed. To understand what is involved, ask the following
question: what is the relationship between an argument schema consisting of sentences of LSL
and an actual argument in a natural language? Consider, for example, the following two
arguments in English:
(1)
Roses are red.
Violets are blue.
Roses are red and violets are blue.
(2)
Minneapolis is in Minnesota.
Chicago is in Illinois.
__
Minneapolis is in Minnesota and Chicago is in Illinois.
Both of these arguments are valid by virtue of being in a particular form. How are we to
characterize this form? One way to do so is to say that both arguments conform to a certain plan,
a blueprint as it were, which we may represent by the schema
(3)
S
T
SÙT
(Valid)
This schema could be understood as telling us the following: one way to form a valid argument
in a given language is to take two statements in the language as premises (they can be any two
we wish) and form a conclusion consisting of the conjunction of the two statements, according to
the rules of the language. An actual argument in a natural language which exhibits the structure
represented by a given schema is called an INSTANCE of the schema; thus, (1-2) are both instances
of the schema (3). If the schema is valid, then SO ARE ALL INSTANCES OF IT.
We could also think of the relationship the other way around: to know whether an argument
in a given natural language is valid, construct a schema that accurately describes its form and
then determine whether the schema is valid. The first step in this process, the construction of the
schema to represent the form of the original argument, is called SYMBOLIZATION.
But why bother with symbolization? Why not just work with the natural language arguments
directly (as in fact we did in Chapter 2)? The answer is that, apart from the problems pertaining
to working directly with natural language that we discussed in Chapter 3, there is the additional
complication that two different arguments phrased in a natural language may appear to have
different forms while nonetheless embodying the same structure from a logical point of view.
Here is an example.
(4)
Minneapolis is in Minnesota
St. Paul is in Minnesota.
Minneapolis is in Minnesota and St. Paul is in Minnesota.
(5)
Minneapolis is in Minnesota
St. Paul is in Minnesota.
Minneapolis and St. Paul are in Minnesota.
Now, notice that (4) is clearly an instance of (3) — but what about (5)? In one sense, the answer
seems clearly to be ‘No’ since the conclusion is not formed by conjoining the premises (although
the word and does occur in the sentence). In another sense, however, the answer seems to be
‘Yes’ since the conclusion of (5) is quite clearly equivalent to that of (4). We could think of the
conclusion of (5) as a way of abbreviating that of (4), so the arguments differ in letter, as it were,
but not in spirit.
The escape from the apparent paradox embodied in a pair of sentences like (4) and (5), which
are not of the same form in one sense but are of the same form in another, is a distinction
commonly made by logicians between what is called GRAMMATICAL FORM on the one hand and
LOGICAL FORM on the other. The conclusions in (4) and (5) are said to differ in their grammatical
form but to embody the same logical form.
Now back to symbolization. The advantage to symbolization is that we can express the
information that two statements whose grammatical forms are different but whose logical forms
are the same by giving them both the same symbolization. Thus, if we symbolize Minneapolis is
in Minnesota by M and St. Paul is in Minnesota by S then we can symbolize the conclusions of
both (4) and (5) by the same LSL formula, namely M Ù S. Since we know that (4) can be
symbolized by a valid argument schema and (5) can be symbolized by the same schema, then we
can account for the validity of (5) in the same way that we account for that of (4) even though the
conclusions of the two arguments are outwardly different.
The example we’ve just discussed is not an isolated case. The existence of a variety of
different grammatical forms for sentences which all have the same logical form is a common
phenomenon in natural language. Here are some more examples from English that should help to
make the point.
(6)
a. Minneapolis is in Minnesota and Minneapolis is in Hennepin County.
b. Minneapolis is in Minnesota and is in Hennepin County.
c. Minneapolis is in Minnesota and in Hennepin County.
d. Minneapolis is in Minnesota and Hennepin County.
(7)
a. If the Duke is late then the Duchess will be furious.
b. If the Duke is late the Duchess will be furious.
c. The Duchess will be furious if the duke is late.
d. Should the Duke be late the Duchess will be furious.
e. The Duchess will be furious, should the Duke be late.
(8)
a. The Duke is not late.
b. The Duke isn’t late.
c. It is not the case that the Duke is late.
d. It’s not the case that the Duke is late.
e. It’s not the case that the Duke’s late.
(9)
a. I didn’t go to work yesterday.
b. Yesterday I didn’t go to work.
(10) a. I’m looking for someone who is wearing a white carnation.
b. I’m looking for someone who’s wearing a white carnation.
c. I’m looking for someone wearing a white carnation.
(11) a. A woman who was wearing a fur coat came in.
b. A woman came in who was wearing a fur coat.
Example (6) shows that there are no fewer than four different ways of expressing something that
in LSL would be expressed in only one way. Letting M symbolize Minneapolis is in Minnesota
and H symbolize Minneapolis is in Hennepin County, then all four statements in (6) can be
symbolized by the single LSL formula M Ù H. Similarly, if we symbolize The Duke is late by D
then all five sentences in (8) can be symbolized by ¬D. And so on.
One kind of statement that can be expressed in English in a wide variety of ways is the
conditional. Example (7) shows a number of different ways of expressing the conditional that
we could symbolize in LSL by D ® F, where D symbolizes The Duke is late and F symbolizes
The Duchess will be furious. But these are not the only possibilities. Consider, for example, the
sentence
(12) Attempting to upgrade the quality of education without spending money guarantees
failure.
That this sentence has the logical form of a conditional can be seen by considering the following
paraphrase:
(13) If you attempt to upgrade the quality of education without spending money then you are
guaranteed to fail.
Let's say that any English conditional which has the grammatical form If-STATEMENT-then-STATEMENT is in CANONICAL FORM. Then any sentence counts as a conditional if it can be put in this form without changing its meaning. We'll also say that any conditional in which the antecedent is introduced by the word if is an EXPLICIT conditional; all other conditionals are IMPLICIT.
Demonstration Problems
1. Each of the following can be rephrased as an explicit conditional. Put each statement in
canonical form.*
a. Anyone who is wearing a white carnation can be admitted to the party.
b. No one who is not wearing a white carnation can be admitted to the party.
c. To be able to be admitted to the party you must be wearing a white carnation.
* There is more to the logic of these statements than we're considering here. This matter is taken up again in Part III — see the problems for Chapter 16.
Solution. There could be more than one way of paraphrasing each of these, but here are my
own suggestions.
a. If a person is wearing a white carnation then that person can be admitted to the party.
b. If a person is not wearing a white carnation then that person cannot be admitted to
the party.
c. If you are to be admitted to the party then you must be wearing a white carnation.
Equivalently: if you are not wearing a white carnation then you cannot be admitted to the
party.
2. Identify which of the following statements are conditionals, and for each one which is, put it
in canonical form.
a. Wearing a white carnation gets you into the party.
b. I hope that a white carnation gets me into the party.
c. This party is open only to people wearing white carnations.
d. Since I’m wearing a white carnation I expect to be admitted to the party.
Solution.
a. If you’re wearing a white carnation then you get into the party.
b. [not a conditional] (but, see discussion below)
c. If you aren’t wearing a white carnation then this party is not open to you.
d. [not a conditional] (see discussion below)
Let’s focus now on Demonstration Problem 2 and ask: how, in dealing with English sentences,
do we know a conditional when we see one? This turns out to be quite a subtle question and
there are cases where opinions could differ. Nonetheless, there are some clear rules of thumb that
work well in many cases and which will help us in a variety of ways.
Although the question we’re asking is about English let’s revert for a moment to LSL and
recall that a conditional S ® T is defined as an abbreviation for ¬(S Ù ¬T). Such a statement is
thus false if, but only if, S Ù ¬T is true. One strategy to adopt, then, in determining whether a
sentence is a conditional is first to try to imagine a situation in which the sentence is false and to
describe this situation by the conjunction of a true sentence and a false one. In the case of
sentence (a) from problem 2, for example, we might proceed as follows. Imagine that you go to
the party wearing a white carnation but are denied admission. Let W symbolize You are wearing
a white carnation and G symbolize You get into the party. Since you’re wearing a white
carnation but don’t get into the party, then W Ù ¬G describes this situation. But if (a) is true then
W Ù ¬G is false, hence ¬(W Ù ¬G) is true and is expressible also as the conditional W ® G.
Similarly, in the case of (c), the sentence would be false if you were to go to the party
without wearing a white carnation and be let in. Hence G Ù ¬W is true. If (c) is true, however,
then the statement ¬(G Ù ¬W) is true, and so is G ® W.
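That the paraphrase and the conditional come to the same thing can itself be settled by a quick truth-table check; the little Python sketch below (ours, with imp standing in for ®) confirms that ¬(W Ù ¬G) agrees with W ® G on every row, and ¬(G Ù ¬W) with G ® W:

    # Illustrative check that the negated-conjunction paraphrases and the
    # corresponding conditionals agree row by row.
    from itertools import product

    imp = lambda a, b: (not a) or b

    print(all((not (W and not G)) == imp(W, G)
              for W, G in product([False, True], repeat=2)))    # True
    print(all((not (G and not W)) == imp(G, W)
              for W, G in product([False, True], repeat=2)))    # True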
In the case of (b) it’s not obvious that there is a way to apply this strategy. One COULD apply
the strategy to part of the sentence, namely a white carnation gets me into the party. But the
sentence as a whole expresses only someone’s hope that a certain conditional is true. The
sentence is false if the person saying (or writing) it does not in fact entertain such a hope but it’s
not clear that there’s any way to describe this situation by means of a conjunction of a positive
and a negative sentence. It should be borne in mind, however, that this does not mean there is
literally no such way. I happen not to be able to think of one and I’m fairly adept at this sort of
thing but I am reluctant to say absolutely that (b) cannot be analyzed as having the logical form
of a conditional. That’s why, though I’ve identified it above as not being a conditional I’ve said
‘See discussion below’. The same is true of example (d).
This is a situation of a kind we haven’t encountered before. Up to this point, everything has
been cut and dried. Certain principles have been stated and their consequences explored, and
questions have clear answers. But questions having to do with how exactly to analyze the logical
form of sentences in natural languages are sometimes less clear and there are disagreements even
among the experts. (Welcome to the real world!) These are not questions about logic per se, that
is, about valid reasoning, but rather about the meanings of linguistic expressions and such
questions are often subtle and difficult. Nonetheless, there are cases which are reasonably clear
as well and (a) and (c) from problem 2 above would seem to qualify.
In a conditional, the antecedent is said to express SUFFICIENT conditions for the truth of the
consequent and the consequent is said to express NECESSARY conditions for the truth of the
antecedent. Another way of saying the same thing is that if a conditional S ® T is true and S is
also true, then this guarantees the truth of T; and if the conditional is true and T is false then this
guarantees the falsity of S. (The rules of inference MP and MT simply express this information
in schematic form.) Conditionals in English are accordingly sometimes expressed in ways which
refer to necessary or sufficient conditions. For example, the sentence
(14) Being in Minneapolis is a sufficient condition for being in Minnesota.
can be thought of as a conditional whose expression in canonical form is
(15) If you’re in Minneapolis then you’re in Minnesota.
Similarly,
(16) Being in Minnesota is a necessary condition for being in Minneapolis.
is a conditional whose canonical form is
(17) If you’re not in Minnesota then you’re not in Minneapolis.
Notice, however, that (14) and (16) are equivalent (as are (15) and (17)). Suppose that we
symbolize the antecedent of (15) by M and the consequent by S. Then (15) is symbolized by the
LSL formula M ® S; but (17) is then symbolized by ¬S ® ¬M, which is equivalent to M ® S.
(If you did not do problem 5 from Chapter 8, which calls for validation of the Rule of
Contraposition, you should take a moment now to validate this equivalence either deductively or
by truth table.)
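For those who prefer the truth-table route, the following quick Python check (ours, with imp again standing in for ®) runs through the four rows and confirms the equivalence of M ® S and ¬S ® ¬M:

    # Illustrative check of contraposition: M ® S and ¬S ® ¬M agree on every row.
    from itertools import product

    imp = lambda a, b: (not a) or b

    print(all(imp(M, S) == imp(not S, not M)
              for M, S in product([False, True], repeat=2)))    # True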
Consider now how we would render the following sentence in canonical form:
(18) You’re in Minneapolis only if you’re in Minnesota.
A little thought should reveal that the only if ... part of (18) expresses a necessary condition for
the truth of the sentence You’re in Minneapolis. That is, (18) is equivalent to (17) and thus can be
rendered in canonical form by (17). But it can also be rendered by (15), since (15) and (17) are
equivalent.
Next, consider the sentence
(19) You’re in Minneapolis if and only if you’re in Minnesota’s largest city.
This sentence is a BICONDITIONAL, as can be seen from re-expressing it as a conjunction:
(20) You’re in Minneapolis if you’re in Minnesota’s largest city and you’re in Minneapolis
only if you’re in Minnesota’s largest city.
each conjunct of which can in turn be re-expressed in canonical form to yield
(21) If you’re in Minnesota’s largest city then you’re in Minneapolis and if you’re in
Minneapolis then you’re in Minnesota’s largest city.
Here are three other ways to express biconditionals in English:
(22) a. If you're in Minnesota's largest city then you're in Minneapolis, and conversely.
     b. Being in Minneapolis is both a necessary and a sufficient condition for being in Minnesota's largest city.
We’ll take the canonical form for biconditionals to be the form taken by sentence (19), that is
STATEMENT-if and only if-STATEMENT.
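Since the biconditional is the conjunction of a conditional and its converse, a short mechanical check (a Python sketch of ours, with iff standing in for «) confirms that it comes out true exactly when the two sides have the same truth value:

    # Illustrative check: (S ® T) Ù (T ® S) is true exactly when S and T agree.
    from itertools import product

    imp = lambda a, b: (not a) or b
    iff = lambda a, b: imp(a, b) and imp(b, a)

    print(all(iff(S, T) == (S == T)
              for S, T in product([False, True], repeat=2)))    # True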
New Terms
symbolization
canonical form
necessary conditions
sufficient conditions
Problems
1. Paraphrase each of the following as a conditional in canonical form.
*a. He who takes cakes that the Parsee Man bakes makes dreadful mistakes.
(Rudyard Kipling, How the Rhinoceros Got his Skin)
*b. Whereof we cannot speak, thereof we must remain silent.
(Ludwig Wittgenstein, Tractatus Logico-Philosophicus)
*c. Nothing worthwhile ever comes easy.
d. A fool and his money are soon parted.
e. You don’t need a weatherman to know which way the wind blows.
(Bob Dylan, Subterranean Homesick Blues)
f. Beggars can’t be choosers.
g. 20% off all merchandise Saturday and Sunday
h. We must all hang together or we shall surely hang separately.
(Attributed to Benjamin Franklin, on deciding to join the American Revolution)
i. No shoes, no shirt, no service.
j. A rose by any other name would smell as sweet.
(Shakespeare, Romeo and Juliet)
k. Pay up or else.
*l. Unless the hard disk has crashed we’re okay.
2. Paraphrase each of the following as a biconditional in canonical form.
*a. The barber in this town shaves all and only those who do not shave themselves.
*b. Mow the lawn and I’ll give you a dollar, but not otherwise.
c. A triangle is equiangular if it’s equilateral, and conversely.
d. When you’re hot you’re hot. When you’re not you’re not.
e. We’re open every weekday but not on weekends.
f. Truth is beauty, beauty truth.
(Keats, ‘Ode on a Grecian Urn’)
14
Evaluation of Natural Arguments
Using Sentential Logic
An argument phrased in a natural language is called a NATURAL ARGUMENT. Whether or not such
an argument is valid is determined by the same principles used in evaluating argument schemata,
though the process is often made easier by being broken down into two steps: symbolization
followed by evaluation of the resulting symbolic schema for validity. The task of symbolization
itself, however, may have to be done rather indirectly. Here is an illustration.
A letter in the August 1, 1992 issue of Science, the journal of the American Association for
the Advancement of Science, takes issue with an article whose author suggests that computer
simulations can take the place of traditional scientific experiments. The letter writer argues as
follows:
Computers do calculations. [The results of these calculations] ... are predictions, not
results of experiments. The validation of a prediction is confirmation by experiment.
Now, these two sentences by themselves do not suggest an immediate symbolization — the
reason being that the letter writer is not being completely explicit about his premises.
Nonetheless, it is plausible to impute to the writer the following two assumptions on the basis of
what he actually DOES say, and which he clearly expects the reader to share:
(1)
If there is a difference between predictions and experimental results then computer
calculations cannot replace experiments.
(2)
There is a difference between predictions and experimental results.
These are called TACIT premises, that is, premises that are not actually stated in the argument
itself because acceptance of them by the reader can be taken for granted.
The conclusion for which the writer is arguing is clearly
(3) Computer calculations cannot replace experiments.
Do (1) and (2) entail this conclusion? Statement (1) is a conditional in canonical form. Suppose
that we symbolize its antecedent by D and its consequent by C. Since statement (2) is identical to
the antecedent of (1), it too may be symbolized by D. The argument with (1-2) as premises and
(3) as conclusion can accordingly be symbolized as
D®C
D
C
This schema is clearly valid, being simply MP with D substituted for S and C for T.
Here is a second example. A letter to the Minnesota Daily on August 24, 1992, replying to an
opinions piece on abortion, contains the following passage:
Is it really such an effortless move “from abortion, or the killing of an unborn baby, to the
killing of the retarded, the handicapped, the sick or even the elderly”? If this last
statement is true then how do you explain the recent passage of the Americans with
Disabilities Act, which protects the rights of the disabled?
The first sentence in the passage takes the form of what is called a RHETORICAL QUESTION — a
sentence in the form of a question which is nonetheless used as if it were a statement. (Rhetorical
questions are typically intended as negations of the statements that correspond to them.) So the
first sentence can be re-expressed as the statement
(4)
It is not an effortless move from abortion to the killing of the retarded, the handicapped,
the sick or the elderly.
The author clearly intends this conclusion to be supported by the second sentence, which is also
in the form of a rhetorical question and may be re-rendered as the following conditional in
canonical form:
(5)
If it is an effortless move from abortion to the killing of the retarded, the handicapped,
the sick or the elderly then the Americans with Disabilities Act would not have been
passed.
Also implicit in the second sentence is the further statement
(6)
The Americans with Disabilities Act was passed.
Suppose that we symbolize (6) as P. Then the consequent of (5) can be symbolized as ¬P.
Symbolizing the antecedent of (5) as E, we then obtain the schema
E ® ¬P
P
¬E
This schema too is valid, as the following deduction will show:
a. E ® ¬P           Premise
b. P                Premise
c. ¬¬P              SE, CR
d. ¬E               MT, a, c
Of course people do not always give valid arguments in support of their conclusions. As an
example, consider the following passage:*
Never has a book been subjected to such pitiless search for error as the Holy Bible. Both
reverent and agnostic critics have ploughed and harrowed its passages; but through it all
God’s word has stood supreme, and appears even more vital because of the violent
attacks made upon it. This is proof to Baptists that here we have a revelation from God ...
The author is clearly trying to support the statement
(7)
The Bible is God’s revelation.
The second sentence can be more briefly rendered as follows:
(8)
The Bible withstands all attempts to attack it critically.
The author also clearly expects his readers to accept the following statement, expressed as a
conditional in canonical form:
(9) If the Bible does not withstand attempts to attack it critically then it is not God's revelation.

* Quoted from Hillyer Stratton, Baptists: Their Message and Mission (Philadelphia: Judson Press, 1941), p. 49. The example is taken at second hand from H. Pospesel, Propositional Logic (Englewood Cliffs, NJ: Prentice-Hall, 1984), p. 16.
If we take (8-9) as premises of an argument and (7) as a conclusion ostensibly entailed by
these premises, we can see that it is invalid. Suppose that we symbolize (7) by R and (8) by W.
Then we can symbolize (9) by ¬W ® ¬R. Here then is the schema we must evaluate for validity:
¬W ® ¬R
W
R
The invalidity of this schema is shown by the following truth table:
((¬    W     ®       ¬      R)    Ù        W)    ®         R
  1    0     1       1      0     0        0     1         0
  0    1     1       1      0     1        1     0         0
  1    0     0       0      1     0        0     1         1
  0    1     1       0      1     1        1     1         1
  **   *     ****    ***    *     *****    *     ******    *
Note that the final column does not consist entirely of 1’s — there is a 0 in the second row.
Hence the formula for which the table was constructed is not a tautology and the schema from
which we derived the formula is accordingly invalid.
The error committed by the author of the quoted passage is called the FALLACY OF DENYING THE
ANTECEDENT. It is a kind of 'backwards MT' in that it consists of trying to reason from a
conditional and the negation of its ANTECEDENT to the negation of the consequent rather than, as
MT allows us to do, from a conditional and the negation of its CONSEQUENT to the negation of the
ANTECEDENT.
The term fallacy is standardly used in logic to refer to any strategy or procedure which
apparently leads to a valid argument but which, on closer inspection, turns out not to. The
Fallacy of Denying the Antecedent is one of the most widely committed — no doubt because it is
easy to get mixed up about antecedents and consequents and thus to misremember the rule MT.
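For the record, the offending row can also be pulled out mechanically; this little Python sketch (ours, with imp for ®) lists the rows on which both premises of the schema are true while the conclusion is false:

    # Illustrative sketch: find the rows where ¬W ® ¬R and W are both true but
    # R is false.  Any such row shows the schema to be invalid.
    from itertools import product

    imp = lambda a, b: (not a) or b

    print([{"W": W, "R": R}
           for W, R in product([False, True], repeat=2)
           if imp(not W, not R) and W and not R])
    # prints: [{'W': True, 'R': False}], the second row of the table above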
Demonstration Problems
1. Symbolize the argument being made by the writer of the following statement, which
appeared in a letter in the Aug. 25, 1992 issue of The New York Times.
The question ‘Is the fetus a person or not?’ has been answered by no less an entity than the
United States government. The Internal Revenue Service does not consider a fetus a person
(a tax-deductible dependent).
Solution. The first step is to try to tease out the statements of which the argument consists.
Often it’s easiest to start by stating the conclusion, which in this case is:
(a) A fetus is not a person.
The author does not actually state the conclusion, but is clearly attempting to support this
statement by deducing it from the following premises, of which the first is tacit:
(b) If a fetus is a person then a fetus is a tax-deductible dependent.
(c) A fetus is not a tax-deductible dependent.
The next step is to symbolize the statements (a-c). Let’s symbolize the antecedent of (b) as P
and the consequent as D. Then (b) as a whole is symbolized as P ® D. Since (c) negates the
consequent of (b), it is symbolized as ¬D and the conclusion, which negates the antecedent of
(b), is symbolized as ¬P. We thus obtain the schema
P®D
¬D
¬P
2. An editorial in the same issue of the New York Times opposing term limits for members of
Congress contains the following argument:
Historically, term limits [for members of Congress] would have shortened the
political lives of Daniel Webster, Robert LaFollette, Jr. and Sam Rayburn, among
others. Term limits heave out the good with the bad. America deserves better than
that.
Symbolize this argument.
Solution. The conclusion, though not explicitly stated, can be stated as follows:
(a) Term limits are a bad idea.
The writer of the editorial then cites a number of distinguished members of Congress from
times past whose careers would have been shortened by term limits and goes on to say
(b) Term limits heave out the good with the bad.
There is also a tacit premise, namely
(c) If term limits heave out the good with the bad then they are a bad idea.
Let us then symbolize (a) by B and (b) by H; (c) is then symbolized by H ® B and the
argument as a whole is represented by the schema
H®B
H
B
For Future Reference: To write up a solution to a problem such as this one you need only give
symbolizations for the statements of which the argument consists and then a schema representing
the argument. For example, here is an acceptable writeup of the solution just given to problem 2:
Symbolization of Statements
H: Term limits heave out the good with the bad.
B: Term limits are a bad idea.
Symbolization of Argument
H®B
H
B
You may also be asked to determine whether an argument, symbolized in the way you have
proposed, is valid or invalid. If the argument is invalid, you must show this by producing an
appropriately constructed truth table. To show that an argument is valid you may do so either by
the truth table method or by a deduction.
The arguments we have discussed so far are fairly simple. Here is an example of a more
complicated one, suggested to me by a debate which took place at a meeting of the University of
Minnesota Faculty Senate in the fall of 1992.
The Legislature will support the University’s funding request only if we adopt an explicit
policy regarding faculty workload. If we do not get adequate funding we will decline in
quality. But if we adopt a workload policy we will end up rewarding effort rather than
accomplishment, leading to a decline in quality. Either way the quality of the institution
will decline.
This argument seeks to support the pessimistic conclusion
(10) The quality of the University will decline.
There are four premises, all in the form of conditionals:
(11) a. If we do not adopt a workload policy then we do not get adequate funding.
b. If we do not get adequate funding then the quality of the University will decline.
c. If we do adopt a workload policy then we reward effort rather than
accomplishment.
d. If we reward effort rather than accomplishment then the quality of the University
will decline.
Let’s symbolize these statements as follows:
D: The quality of the University will decline.
W: We adopt a workload policy.
E: We reward effort rather than accomplishment.
F: We get adequate funding.
Here, then, is the symbolization of the argument as a whole:
¬W ® ¬F
¬F ® D
W®E
E®D
D
This argument can be shown to be valid (alas!) by the following reductio:
a. ¬W ® ¬F          Premise
b. ¬F ® D           Premise
c. W ® E            Premise
d. E ® D            Premise
e. ¬D               Premise
f. W ® D            HS, c, d
g. ¬W               MT, e, f
h. ¬F               MP, a, g
i. D                MP, b, h
Contradiction (see line (e))
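The same verdict can be reached by brute force over the sixteen rows for W, E, F and D; here is an illustrative Python check (ours, with imp for ®):

    # Illustrative sketch: on every row where all four premises are true, D is
    # true as well, so the schema is valid.
    from itertools import product

    imp = lambda a, b: (not a) or b

    print(all(D
              for W, E, F, D in product([False, True], repeat=4)
              if imp(not W, not F) and imp(not F, D) and imp(W, E) and imp(E, D)))
    # prints: True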
New Terms
natural argument
rhetorical question
tacit
premise
conclusion
fallacy
Problems
*1. This problem is in two parts:
Part I. Describe, from a logical point of view, a no-win situation. Hint: think of such a
situation as one in which the outcome is the same no matter what course of action you
undertake — you’re damned if you do and damned if you don’t — as in the illustrative
argument about the adoption of a workload policy.
Part II. Logically speaking, is there any difference between a no-win situation and a no-lose situation? Explain.
2. On Nov. 29, 1992 the Minnesota Vikings and the Green Bay Packers both played games
important to their playoff prospects. Green Bay had to win its game against Tampa Bay to
keep its playoff hopes alive; and the Vikings would clinch a playoff berth that day if they
defeated the Los Angeles Rams and Green Bay lost to Tampa Bay, but not otherwise. Both
teams won their games that day. What was the consequence for the Vikings? Justify your
answer by means of a valid argument in which the following symbolized statements occur:
V: The Vikings win.
P: The Packers lose.
C: The Vikings clinch a playoff berth.
Having constructed the argument, show either by a deduction or a truth table, that the
argument is valid.
*3. In 1808 the English scientist John Dalton proposed that all matter is made up of atoms —
minute particles that could not be broken down into anything smaller. His line of reasoning
was as follows:
If matter is not made up of atoms, then no matter how minute a quantity of a substance
we are given, it can be divided into still more minute quantities. If this is so, then there
is no explanation for the fact that when chemical elements combine to form compounds
they invariably do so in fixed ratios. (For example, hydrogen and oxygen combine to
form water only if there is exactly twice as much hydrogen as oxygen involved in the
reaction.) An explanation of this property of chemical reactions accordingly requires
that we assume that each chemical element is made up of particles which cannot be
further broken down.
Symbolize Dalton’s argument and show that it’s valid by means of a deduction.
4. A mystery writer whose real name is J. Allen Simpson publishes under the pseudonym of
M.D. Lake. His reasons for doing so are explained in an article by Terri Sutton in the Nov.
18, 1992 issue of City Pages:
Simpson remembered a former neighbor, a girl who had grown up to be a campus cop.
He called her up, she showed him around, and hey presto, he had his detective. Because
he wanted to write in the first person, he tried to make the character a man. It didn’t
work. In his mind the campus cop was forever female.
Turns out [the gender of the detective] wasn’t a problem — but his was. After Amends
for Murder was accepted at Avon, the editors encouraged Simpson to go by his initials
since, they said, women won’t read a book narrated by a woman if it’s written by a man.
His initials — J.A. — were already in use by a woman writing police procedurals ... M.D.
Lake was born ...
Simpson has attracted some flak for his literary drag from female readers who felt
tricked.
Implicit in the foregoing passage is an argument that can be stated as follows:
A male writer who wants to write first-person narrative in a female voice is in a bind. If
he identifies himself by his real name women won’t buy his books. If he hides behind a
genderless pseudonym, women who find out that he isn’t female will feel betrayed. He
thus cannot avoid losing the allegiance of female readers.
Symbolize the argument and show its validity by means of a deduction.
5. A letter to the Minnesota Daily (Aug. 19, 1992) regarding possible Turkish intervention in
Bosnia-Herzegovina contains the following statement:
Turkey cleansed itself of Armenians in the early part of this century. Then, in 1974,
Turkey invaded Cyprus ... and cleansed it of the Greek Cypriots. ... Unless the Armenians
all died from food poisoning, the Greek Cypriots decided to move south to be closer
together (and I am the King of Norway), Turkey should be the last country to speak about
the matter.
Symbolize this argument and determine whether it is valid or invalid. Pay special attention to
the word unless, referring to part l of problem 1 at the end of the preceding chapter. (Note:
The writer of the letter is not, in fact, the King of Norway.)
6.
A good place to find natural arguments is in the parts of a newspaper where opinions are
expressed: in editorials, columns and letters to the editor. Using one or more issues of one
or more newspapers find at least one argument which, when correctly symbolized, takes the
form of MP and at least one which, when correctly symbolized, takes the form of MT.
(You can provide more than one argument of either type.) Turn in either the originals or
photocopies of each item (article, editorial, column or letter) from which you have taken an
argument, and mark the location of the argument in the item. Identify tacit statements
(premises or conclusions), and then indicate whether the argument is of the MP or MT
type. Symbolize each argument.
Part III
Predicate Logic
15
Individuals and Properties
In this part of the course we return to a theme first heard at the very beginning, namely the logic
of quantifiers — also called PREDICATE LOGIC (PL). (Quantifiers, it will be recalled from Chapter
4, are words like all, no and some.) To treat PL we develop a new language — to be called
(surprise!) LPL. Many of the features of sentential logic and of LSL are carried over into the new
logic and its associated language. In fact, the following generalization holds:
All rules of deductive procedure for SL, including the rules of inference and the
equivalence schemata, apply to PL. The difference between PL and SL is that there are
ADDITIONAL rules of deductive procedure not found in SL.
In sentential logic, we use literals to represent statements and then create more complex
statements by combining simpler ones by means of the signs ¬, ∧, ∨, → and ↔. Whether an
argument schema made up of formulas of LSL is valid depends solely on the way in which these
signs are deployed in the schema. Our attention turns now to arguments of a kind which take us
beyond what we can do with just these raw materials. An example is one of the very first ones
we considered, namely:
(1)
All Newtown Pippin computers are expensive.
The computer just purchased by Ms. Smith is a Newtown Pippin.
The computer just purchased by Ms. Smith is expensive.
In this argument, one of the things that we must take into account in understanding why it is
valid is the word all. Similarly, in the valid argument
(2)
No cat is canine.
Rufus is a cat.
Rufus is not canine.
the validity is in part due to properties of the word no. One of the differences between LSL and
LPL is that LSL has no analogues of such words while LPL does. Before we consider quantifiers
further, however, we must lay some prior groundwork. Specifically, we must consider sentences
which attribute PROPERTIES to INDIVIDUALS. Examples of such sentences are
(3)
a. Rufus is a cat.
b. Minneapolis is in the Midwest.
In (3a) the property of being a cat is attributed to a specific individual, who goes by the name
Rufus. In (3b) the property of being in the Midwest is also attributed to a specific individual — a
place this time, not an animate being — namely the place that goes by the name Minneapolis.
This is easy enough to understand, but it’s not terribly precise — indeed, not nearly precise
enough for what we’re going to need to do later on. So we turn next to the task of defining more
carefully exactly what we mean by the terms individual and property.
Pick at random a nonempty set — that is, a set that has at least one member. It can be any
nonempty set you wish: the set of people in the world, of books on the shelf over your desk, of
abstract concepts, of numbers greater than 3 — whatever. Note that the set could be finite or
infinite. Call this set U. Then each member of U is called an INDIVIDUAL RELATIVE TO U. So in
this context individual just means ‘element of some nonempty set’ and we are free to pick and
choose what this set is.
The set U is commonly referred to as the UNIVERSE OF DISCOURSE, or just the UNIVERSE. As
already observed, we can define our universe any way we want as long as it consists of a
nonempty set. This amounts to saying that we can pick and choose what things we want to talk
about — people, places, whatever we wish. Just to have something concrete to work with, let’s
say that our selected U is the set consisting of the four cities Minneapolis, St. Paul, Chicago and
New York.
Now consider English phrases like is in the Midwest. Such expressions can be thought of as
names of particular subsets of U. So, for example, is in the Midwest can be thought of as a name
for the subset of U (as we have defined it for purposes of this discussion) consisting of
Minneapolis, St. Paul and Chicago. Such subset names are among the symbols called
PREDICATES.
As in LSL, we distinguish in LPL between literals and signs, and the two languages are alike
in that every sign in LSL is also a sign in LPL, and every literal of LSL is also a literal of LPL.
There are, however, some differences as well. First, there are signs in LPL that are not signs of
LSL. (These are introduced in the next chapter.) Similarly, there are literals of LPL that are not
literals of LSL. There are three kinds of literals in the language, called INDIVIDUAL CONSTANTS,
A BEGINNING COURSE IN MODERN LOGIC
163
and PREDICATES.* Individual variables will not be discussed until the next
chapter; we turn attention now to the other kinds of literals.
INDIVIDUAL VARIABLES
Literals made up of Italicized lower-case letters other than u, v, w, x, y or z are individual
constants and are understood to be names of individuals relative to U. So, for example, we might
use the letters m, s, c and n respectively to designate Minneapolis, St. Paul, Chicago and New
York. Literals made up of BLOCK CAPITAL letters are used either as they are in LSL, to
represent statements (in which case they are called STATEMENT LETTERS), or as predicates. How a
given literal is being used can always be inferred from context; for example, in S ∧ T(a), S
represents a statement while T is used as a predicate. (Strictly speaking, even literals representing
statements are counted as predicates of a special kind, but that’s a nicety we aren’t going to
worry about here.) So, for example, M might consist of the subset of our little universe of cities
consisting of just the midwestern ones — that is, Chicago, Minneapolis and St. Paul.†
Individual constants and predicates have no meanings in any absolute sense. M, for example,
does not necessarily have to mean ‘is in the Midwest’. It can mean anything we want it to
PROVIDED that it is taken to represent a statement or to be the name of a subset of some universe
of discourse specified in advance. But we’re free not only to pick the universe in any way we
want (as long as it’s nonempty); we’re also free to associate M with any subset of it we want.
A universe and a set of statements telling us how to associate individual constants with
elements of the universe, statement letters with truth values and predicates with subsets of the
universe together form what is called a MODEL for LPL. In specifying a model, however, we
must adhere absolutely to the following restriction: EACH PREDICATE DESIGNATES EXACTLY ONE
SUBSET OF THE UNIVERSE OF DISCOURSE AND EACH INDIVIDUAL CONSTANT DESIGNATES EXACTLY ONE
MEMBER OF THE UNIVERSE. Note that this restriction does not hold of English, at least insofar as
it is possible for names to designate more than one entity at a time: for example, if we think of
the universe of discourse for English as being the entire universe (in the cosmological sense),
* More accurately, there are three in the specific subpart of the language presented in this course.
There is more to LPL than we present here, but consideration of it is usually left to more
advanced courses in logic.
† Just as we need infinitely many statement literals in LSL, we also need to make provision for
infinitely many individual constants and infinitely many predicates. We do so in the same way,
that is, by letting any sequence of repeated small italicized letters (other than u through z) count
as an individual constant, and any sequence of repeated block capitals as a predicate.
then the city name Philadelphia (for example) does not designate a single individual since there
is more than one city so named.‡ So we run into problems with sentences like Philadelphia is in
the North, which is true if we are talking about Philadelphia, Pennsylvania but false if we’re
talking about Philadelphia, Mississippi. Similarly, Paris is the site of the Eiffel Tower is true if
we’re talking about Paris, France but false if we’re talking about Paris, Texas. This fact about the
use of names in ordinary language causes problems not unlike the ones we discussed back in
Chapter 3 involving ambiguous sentences. So now we see yet another reason for avoiding natural
language for purposes of studying logic.
On the other hand, there is no prohibition on letting a given element or subset of the universe
be denoted by more than one individual constant or predicate. This situation arises in ordinary
language too — for example, the country officially known as the Democratic People’s Republic
of Korea is commonly referred to (by Americans, at any rate) as North Korea. Similarly, the
animal that a zoologist would call Felis concolor is known colloquially by various terms,
including cougar, puma and mountain lion. But no indeterminacy arises from there being more
than one word for a given thing; and, as we shall see later on, allowing for ‘multiple designation’
is crucial to an adequate formulation of logic.
Suppose now that we wish to say, in LPL, that a certain individual (relative to the universe in
our selected model) has some specified property. Suppose, for purposes of discussion, that s is
the individual constant associated with this individual in our model and M is the name of the
subset corresponding to the property in question. Then M(s) (read ‘M of s’) is the appropriate
formula. This formula is true relative to any model in which M is associated with a subset of the
universe of discourse having the individual associated with s as a member, and false relative to
any model in which this is not the case. Notice that this means that M(s) is true relative to some
models but false relative to others — it all depends on how M and s are defined. Formulas of this
kind — that is, composed entirely of literals — are termed ATOMIC.
As just noted, atomic formulas consist (by definition) solely of literals. Formulas involving
signs are called MOLECULAR, and those involving only the signs ¬, ∧, ∨, →, and ↔ are called
COMPOUND. (Later, we will introduce two additional signs.) Insofar as the signs ¬, ∧, ∨, →, and
« are concerned, the rules for their use are exactly the same as before. For example,
M(c) ∧ ¬M(n) is a formula of LPL which could be understood as the analogue of the English
sentence Chicago is midwestern and New York is not midwestern (though it would not have to be
‡ The best known one, of course, is the one in Pennsylvania. But there is also a Philadelphia,
Mississippi and there may be others as well.
understood this way). The conditions under which a molecular formula is true relative to the
model are exactly the same as those under which a compound formula of LSL is true: for
example, M(c) ∧ ¬M(n) is true relative to the model if and only if both M(c) and ¬M(n) are true
(again, relative to the model); similarly, ¬M(n) is true if and only if M(n) is false (that is, the
individual associated with n in the model is not an element of the subset associated with M in the
model).
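To make the idea of truth relative to a model more concrete, here is a minimal sketch, in Python, of one way the model just described might be represented and of how atomic and compound formulas could be checked against it. The representation (dictionaries and sets, the helper name atomic) is an illustrative assumption, not part of LPL.

# A model assigns each predicate a subset of the universe and each
# individual constant a member of the universe (illustrative sketch only).
universe = {"Minneapolis", "St. Paul", "Chicago", "New York"}
predicates = {"M": {"Minneapolis", "St. Paul", "Chicago"}}   # 'is in the Midwest'
constants = {"m": "Minneapolis", "s": "St. Paul", "c": "Chicago", "n": "New York"}

def atomic(pred, const):
    # Truth of an atomic formula such as M(c) relative to this model.
    return constants[const] in predicates[pred]

# Compound formulas are evaluated exactly as in sentential logic:
# M(c) ∧ ¬M(n) is true iff M(c) is true and M(n) is false.
print(atomic("M", "c") and not atomic("M", "n"))   # True relative to this model
print(atomic("M", "n"))                            # False relative to this model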
But why all the fuss about universes of discourse and models, and having statements which
are not true or false in any absolute sense but only true or false relative to some model? There are
several reasons, all related to each other. The main one is that just as we don’t care, in sentential
logic, whether the statements represented by individual literals are true or false, and can thus
assign truth values to literals in a completely arbitrary fashion, so in predicate logic do we not
care whether atomic formulas are true or false. We make it possible to have the desired
flexibility by allowing ourselves to select models in any way we wish and interpreting predicates
and individual constants relative to a selected model in any way we wish as long as we don’t
violate the restriction that each expression can designate only one thing at a time. But we also get
a precise definition of validity in LPL, to wit: an argument is valid if and only if THERE IS NO
MODEL RELATIVE TO WHICH THE PREMISES ARE TRUE AND THE CONCLUSION FALSE. By the same token,
an argument is INVALID if such a model exists. This in turn leads to another strategy for showing
that an argument is invalid: construct a model relative to which the premises are true and the
conclusion false. If such a model exists, then it’s possible for the premises to be true and the
conclusion false — which is what it means to say that the argument is invalid — and, likewise, if
no such model exists then it’s impossible for the premises to be true and the conclusion false.
We can illustrate this point with a specific example. The sentential schema
S → T
T ___
S
happens to be INVALID. This can be determined by inspecting the truth table for the formula
((S → T) ∧ T) → S
but with our notion of model now in hand there is another way as well. Suppose that we think of
S and T as representing statements in LPL. Specifically, let’s suppose that S represents P(a) and
T represents Q(a). Then, making appropriate substitutions, we obtain the schema
P(a) → Q(a)
Q(a) ___
P(a)
Now let’s consider a model in which U consists of the individuals Alice, Bob, Claire and Dave
and in which a is associated with Alice, P is associated with the subset of U consisting of
just the individuals Bob and Dave, and Q is associated with the subset consisting of Alice and
Claire. Note that relative to any such model P(a) is false and Q(a) is true. Now, if P(a) is false
then the first premise of the schema is true since a conditional is true whenever its antecedent is
false. Hence both premises are true but the conclusion is false. But what does this mean? It
means that the original schema is invalid. Since S can represent any statement we wish, it can
represent P(a), and since T can represent any statement we wish, it can represent Q(a). Having
shown that there is a model relative to which P(a) → Q(a) and Q(a) are both true but P(a) is
false, we have thus shown that it is possible for S → T and T to be true and S to be false. So:
LPL and the idea of truth relative to a model can be used as the basis of a strategy for showing
that arguments in a certain form are invalid. Now in this case, it’s a rather elaborate way of doing
something that can be done more easily with a truth table. However, we will encounter before
very long cases where the truth table method is not available to us as a means of showing
invalidity while this method is. So that’s one reason for all the heavy breathing about models!
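As a sanity check on this line of reasoning, the counterexample model just described can be written out and tested mechanically. The following is only an illustrative sketch of the strategy (the helper function implies is just an illustrative name): it evaluates the two premises and the conclusion relative to the model in which a names Alice, P is {Bob, Dave} and Q is {Alice, Claire}.

# The counterexample model from the text.
P = {"Bob", "Dave"}
Q = {"Alice", "Claire"}
a = "Alice"

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

premise_1 = implies(a in P, a in Q)   # P(a) → Q(a): true, since the antecedent is false
premise_2 = a in Q                    # Q(a): true
conclusion = a in P                   # P(a): false

# Premises true and conclusion false, so the schema S → T, T ∴ S is invalid.
print(premise_1, premise_2, conclusion)   # True True False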
Reference to models can also be used as a way of precisely defining the conditions under
which a statement is a tautology or a contradiction: if the former, then there’s no model relative
to which it’s false and if the latter then there’s none relative to which it’s true. By the same
token, if two sentences are negations of each other there’s no model in which they have the same
truth value; if they’re contraries, there’s no model in which they’re both true; and if they’re
subcontraries then there’s no model in which they’re both false. Finally, entailment and
equivalence are also definable in comparable terms: one statement entails another if and only if
there’s no model in which the former is true and the latter false, and two statements are
equivalent if and only if there’s no model in which they have different truth values.
Demonstration Problems
1. Describe a model in which the universe of discourse consists of the United States, Canada
and Mexico. Give LPL analogues of the following English sentences:
(a) The United States and Mexico are both south of Canada.
(b) If Canada is north of the United States then it’s north of Mexico.
(c) Neither the United States nor Canada is south of Mexico.
Solution. Since the universe is already given, we need merely interpret predicates and
individual constants to complete the model. What predicates and constants we pick and how
we interpret them is completely arbitrary. Here’s one possible set of choices for the
expressions we’ll need to do the rest of the problem:
S: south of Canada
N: north of the United States
M: north of Mexico
Q: south of Mexico
s: the United States
c: Canada
m: Mexico
Analogues in LPL of the three sentences are then as follows:
(a) S(m) ∧ S(s)
(b) N(c) → M(c)
(c) ¬Q(s) ∧ ¬Q(c)
Note. The model given here is not wholly specified (since we haven’t assigned values to all
individual constants nor to all predicates). That is why, at the beginning of the problem, it
says ‘describe’ rather than ‘construct’. The idea is that we describe just those features of a
model which are relevant to the task at hand, leaving the (irrelevant) remainder unspecified.
2. Show the invalidity of the argument
If Alice passed the final then she passed the course.
Alice didn’t pass the final
Alice didn’t pass the course.
by means of a model in which the premises are true and the conclusion false.
Solution (one of many!). Assume a universe consisting of Alice, Bob, Claire and Dave and
let these individuals be designated respectively by a, b, c and d. Let F symbolize passed the
final and C symbolize passed the course and suppose that F and C respectively designate the
sets {Bob, Claire} and {Alice, Bob, Claire}. The argument as a whole can be symbolized as
follows:
F(a) → C(a)     (true in the model just described)
¬F(a)           (also true in the model)
¬C(a)           (false in the model)
The first premise is true in the model since it has a false antecedent; the second premise is true
because F(a) is false in the model. The conclusion is false because C(a) is true in the model.
3. Show the invalidity of the following schema via a model relative to which its premises
are true and its conclusion false:
P(a) → Q(a)
¬Q(b) ___
¬P(b)
Solution. Assume as before a universe consisting of Alice, Bob and Claire designated
respectively by a, b and c. Associate P with the set consisting of Alice and Bob and Q with
the set consisting of Alice and Claire. Then P(a), Q(a) and ¬Q(b) are all true and ¬P(b) is
false. Hence the argument is invalid.
New Terms
individual
property
individual constant
predicate
atomic formula
molecular formula
universe of discourse
model
Problems
1. Consider a model in which there are four members of the universe of discourse, named by
the individual constants a, b, c and d. Assume further that the predicates P, Q and R refer
respectively to the sets consisting of the individuals named by a and b, the individuals named
by c and d, and the individuals named by a, c and d. For each of the following statements in
LPL determine whether the statement is true or false with respect to this model.
*a. P(a) → Q(a)
b. Q(a) → P(a)
c. (P(a) ∧ Q(b)) → Q(c)
d. P(a) ↔ Q(c)
e. ¬(P(a) ↔ Q(b))
f. ¬P(a) → P(c)
2. [Note: Do not attempt this problem until you have worked through No. 1.] Suppose that the
four-membered universe in the preceding problem consists of the individuals Alice, Bob,
Claire and Dave and that these are referred to respectively by a, b, c and d. Suppose that the
following statements in English are all true with respect to this model:
a. Alice and Claire are female.
b. Bob and Dave are male.
c. Alice and Bob are actors.
d. Claire and Dave are comedians.
e. Neither Alice nor Bob is a comedian.
f. Neither Claire nor Dave is an actor.
Let F symbolize the English predicate is female, M symbolize is male, A symbolize is an
actor and C symbolize is a comedian. Symbolize each of the arguments below and determine
whether it’s valid or invalid. In each case explain your answer.
I.
Alice is female and an actor.
Dave is male and a comedian.
Alice is a comedian or male and Dave is an actor or female.
II.
Bob is neither female nor a comedian.
Dave is neither female nor an actor. __
Bob is not an actor or Dave is not female.
III.
Bob is female or an actor.
Alice is female and an actor.
Neither Alice nor Bob is a comedian.
IV. Bob is female and Alice is a comedian.
Bob is female or Alice is not a comedian.
*3.
Consider the following argument:
P(a) → Q(b)
Q(b) → ¬R(b)
R(b)
¬P(a)
What is wrong with the following attempt to show that this argument is valid?
Assume a model in which the sentence R(b) is true but P(a) and Q(b) are false.
Then all the premises of the argument are true and so is the conclusion. Hence the
argument is valid.
16
Quantification in Predicate Logic
In the previous chapter we saw how, in LPL, to make statements which attribute properties to
individuals. In this chapter we are going to consider how to make statements like the following:
(1)
All cities in the Upper Midwest are in the Midwest.
Notice that this is not a statement which attributes a property to a single individual; rather, it tells
us that a certain property — that of being in the Midwest — accrues to the members of a whole
SET of individuals. Letting U correspond to the English predicate is in the Upper Midwest and M
to is in the Midwest, we can translate (1) into LPL as follows:
(2)
("x)[U(x) ® M(x)]
which can be re-translated back into English as the rather more wordy sentence
(3)
For every individual x in the universe, if x is a city in the Upper Midwest then x is in the
Midwest.
The symbol ∀, which we are seeing for the first time here, is called the UNIVERSAL QUANTIFIER,
and is commonly read by English-speaking logicians as ‘for every’ or ‘for all’, and (2) is said to
be a UNIVERSALLY QUANTIFIED statement. The letter x here is what is called an INDIVIDUAL
VARIABLE, about which we will have more to say in a moment. But first, take note of the
following:
The letters u through z, which we noted in the previous chapter are reserved for another
use and cannot be used as individual constants, will be used as individual variables.
Further, NO OTHER LETTERS MAY BE SO USED.
Now, what exactly does an individual variable do? This question unfortunately can’t be
answered in just a few words — or at least it can’t be answered in a few words in a way that
won’t be potentially misleading. The concept is subtle, so it will take some time and some effort
to explain it completely and you’ll need to pay close attention.
We define an INSTANCE of a statement (∀x)[S] as a statement S′ which is just like S except
that an individual constant has been uniformly substituted for x in S. The following, accordingly,
are all instances of (2):
U(m) → M(m)
U(s) → M(s)
U(c) → M(c)
The process of obtaining an instance of (∀x)[S] is called INSTANTIATION and can be understood as
consisting of the following steps: first, remove (∀x) and then substitute an individual constant for
each occurrence of x in S. The universal quantifier can then be understood as saying, in effect
‘No matter how you instantiate a formula in which I appear, the resulting statement is true’. The
function of the variable x is just to show where in S to put the selected individual constant. Or,
put in somewhat less cutesy-poo terms:
If a universally quantified statement is true then so is EVERY ONE OF ITS INSTANCES.
Notice that this in turn means that it is possible to show a universally quantified statement to
be false relative to a given model by producing a single false instance of it. (The falsification of
such a statement in this way is called REFUTATION BY COUNTEREXAMPLE.)
A sentence containing individual constants may be an instance of more than one quantified
statement. For example, P(a) → Q(a) is an instance of all three of the following:
(∀x)[P(x) → Q(a)];
(∀x)[P(a) → Q(x)];
(∀x)[P(x) → Q(x)].
Notice now that there is a strong analogy between universal quantification and something we
have seen before, namely conjunction in SL. If a conjunction is true then all its conjuncts are
true, and if a universally quantified statement is true then all its instances are true. Insofar as this
is so, we really have nothing new here from a logical point of view.
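The analogy can be made vivid with a small sketch (illustrative only; as the optional section at the end of this chapter explains, identifying a universally quantified statement with the conjunction of its instances strictly requires that the individual constants cover the universe, as they do here).

# Universe of three cities, each named by a constant, so the constants cover the universe.
U_pred = {"Minneapolis", "St. Paul"}                    # 'is in the Upper Midwest'
M_pred = {"Minneapolis", "St. Paul", "Chicago"}         # 'is in the Midwest'
constants = {"m": "Minneapolis", "s": "St. Paul", "c": "Chicago"}

def instance_true(city):
    # Truth of the instance U(k) → M(k) for the city named by the constant k.
    return (city not in U_pred) or (city in M_pred)

# (∀x)[U(x) → M(x)] behaves here like the 'big conjunction' of its instances:
# it is true just in case every instance is true.
print(all(instance_true(city) for city in constants.values()))   # True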
Now consider the English statement
(4)
Some cities in the Midwest are in the Upper Midwest.
One way to express this in LPL would be as follows:
(5)
¬("x)[M(x) ® ¬U(x)]
Lest this not be immediately obvious, think about it this way. The statement
(6)
("x)[M(x) ® ¬U(x)]
says that for any individual whatsoever, if it’s in the Midwest then it’s not in the Upper Midwest.
To say that (4) is true is to say that (6) is false; but (5) is the negation of (6).
We now introduce the following definition:
(∃x)[S]
¬(∀x)[¬S]
On the top of the schema we see yet another new sign (the last one to be introduced!), namely ∃.
This is called the EXISTENTIAL QUANTIFIER and is customarily read by English-speaking logicians
as ‘for some’ or ‘there exists’. In terms of this definition, let’s consider how to re-render (5)
using our new quantifier. We’ll do so via a deduction in which (5) occurs as the premise and in
which each step involves an equivalence:
a. ¬("x)[M(x) ® ¬U(x)]
b. ¬("x)[¬(M(x) Ù ¬¬U(x))]
c. ($x)[M(x) Ù ¬¬U(x)]
d. ($x)[M(x) Ù U(x)]
Premise
SE, Def. of ®
SE, Def. of $
SE, CR
Line (d) of the deduction can be read back into English as ‘For some individual x, x is a city in
the Midwest and x is in the Upper Midwest’ or ‘There exists an individual x such that x is in the
Midwest and x is in the Upper Midwest’.
Now, just as we can talk about instances of universally quantified statements we can talk as
well about instances of existentially quantified ones. The process of instantiation is the same
except, of course, that instead of removing (∀x) from the original formula we remove (∃x).
Hence, all the following are instances of line (d) of our deduction:
M(m) ∧ U(m)
M(s) ∧ U(s)
M(c) ∧ U(c)
But the conditions under which an existentially quantified statement is true are different from
those under which a universally quantified statement is true. Specifically, an existentially
quantified statement is true if AT LEAST ONE of its instances is true. Note that, insofar as this is so,
existential quantification is to universal quantification as disjunction is to conjunction. Note also
that the existential quantifier is a convenience, not a necessity — just as disjunction is — since it
is defined in terms of other elements of the language.
The analogy between universal and existential quantification on the one hand and
conjunction and disjunction on the other can be pushed further, as shown in the following rules:
(7) Quantifier Negation Rules (QN)
a. ¬(∀x)[S]
   (∃x)[¬S]
b. ¬(∃x)[S]
   (∀x)[¬S]
where S represents a formula in which x occurs. In other words, if a formula of LPL begins with
a negation sign followed by a quantifier then an equivalent formula can be obtained by switching
quantifiers and moving the negation sign inside the square brackets so as to negate the bracketed
formula. If it’s not immediately clear why QN works as it does, consider the following
deductions:
(8)
a. ¬("x)[S]
b. ¬("x)[¬¬S]
c. ($x)[¬S]
Premise
SE, CR
SE, Def. of $
(9)
a. ¬(∃x)[S]        Premise
b. ¬¬(∀x)[¬S]      SE, Def. of ∃
c. (∀x)[¬S]        SE, CR
Does any of this have a familiar ring? It should, because what we’re looking at is really just
the analogue, in the quantificational sphere, of DM! Nor should it be surprising that there is such
an analogue given that universal and existential quantification are the quantificational analogues
of conjunction and disjunction.
In Chapter 4 we introduced the notion of the scope of a sign. There is an analogous notion for
quantifiers. Consider the formulas
(a) ("x)[D(x) ® (M(x) Ú F(x))]
(b) ("x)[D(x) ® M(x)] Ú ("x)[D(x) ® F(x)]
Notice that (a) contains only one quantifier and one pair of square brackets. The scope of the
quantifier is everything between the immediately following left bracket and the corresponding
right bracket. This in turn means that all the occurrences of x in (a) are in the scope of the one
quantifier that occurs in the formula. Put in a different way, the various occurrences of the
variable are said to be BOUND BY this quantifier. A variable which is not bound by any quantifier
is said to be FREE.
Now consider formula (b). Here we have two different occurrences of the universal
quantifier, each with its own pair of brackets. Thus, the occurrences of x in the left disjunct of (b)
are in the scope ONLY of the FIRST quantifier and the occurrences of x in the right disjunct are in
the scope ONLY of the SECOND quantifier.
To understand the importance of the notion of scope, consider how to symbolize the
following statement:
(10) Everyone in the philosophy department doesn’t favor the plan.
Note that this sentence is ambiguous, being interpretable in either of the following ways:
(11) a. Not everyone in the philosophy department favors the plan.
b. No one in the philosophy department favors the plan.
To symbolize these statements, let’s assume that P symbolizes is in the philosophy department
and F symbolizes favors the plan. Then we have
(12) a. ¬(∀x)[P(x) → F(x)]
b. (∀x)[P(x) → ¬F(x)]
Notice that these two formulas differ, among other ways, in regard to the relative scopes of the
quantifier and the negation sign: in (12a) the quantifier is in the scope of the negation sign while
in (12b) the reverse is true. It’s become customary among logicians to say that in the first case
the quantifier has NARROW scope relative to that of the negation sign while in the second the
quantifier has WIDE SCOPE relative to that of the negation sign. For this reason, you’ll often hear
the two interpretations of a sentence like (10) referred to as the ‘narrow scope’ interpretation or
the ‘wide scope’ interpretation.
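To see that the two readings really do come apart, they can be checked against a small model in which some but not all members of the philosophy department favor the plan. The sketch below is an illustration only; it evaluates the quantifier directly over the universe, in the manner made precise in the optional section that follows.

# A model in which Alice and Bob are in the department but only Alice favors the plan.
universe = {"Alice", "Bob", "Claire"}
P = {"Alice", "Bob"}      # 'is in the philosophy department'
F = {"Alice"}             # 'favors the plan'

def implies(p, q):
    return (not p) or q

# (12a): ¬(∀x)[P(x) → F(x)], the reading on which the quantifier has narrow scope.
narrow = not all(implies(x in P, x in F) for x in universe)

# (12b): (∀x)[P(x) → ¬F(x)], the reading on which the quantifier has wide scope.
wide = all(implies(x in P, x not in F) for x in universe)

print(narrow, wide)   # True False: the two readings are not equivalent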
Optional Section
By way of conclusion to this chapter, we’re going to look at a more precise way of stating the
conditions under which a quantified statement is true relative to a given model. Consider first of
all a formula containing one or more free occurrences of a given variable and no free occurrences
of any other variable, such as P(x) → Q(x). Let M be a model with universe U. Then we say,
relative to M, that the formula in question HOLDS OF (or APPLIES TO) an element i of U if, and only
if, there is an individual constant WHICH DOES NOT ALREADY OCCUR IN THE FORMULA which is
associated with i by M and uniform substitution of that constant for x yields a statement which is
true in M; OR, if there is no such constant, there is a way of ‘reassigning’ one of the individual
constants to i (keeping everything else the same) so that it is associated with i and the result of
uniformly replacing x by THAT constant yields a statement which is true in the resulting slightly
altered model. For example, define M as follows:
U = {Alice, Bob, Claire, Dave}
P: {Alice, Claire}
Q: {Alice, Bob, Claire}
a: Alice
Every other individual constant: Bob.
Then relative to M, P(x) → Q(x) holds of Alice since P(a) → Q(a) is true in M. Suppose,
however, that we have a model N in which no individual constant is assigned to Alice, but which
is otherwise just like M as defined above. (In particular, the predicates are assigned in exactly the
same way as in M.) Then relative to N, P(x) → Q(x) also holds of Alice since we can reassign,
say, a, holding all else constant, in such a way that a is associated with Alice and P(a) → Q(a) is
true in the slightly modified version of N thus obtained. On the other hand, the formula
P(x) ∧ ¬P(x) does not hold, relative to M (or any other model, for that matter), of any element of
U since no matter what replaces x, the result is a statement which is false in the relevant model.
What we are trying to get at with our idea of a formula with a free variable holding of a given
individual is best explained by means of some examples. Consider first an atomic formula, say
P(x). To say that this formula holds of, say, Claire in a given model is just to say that in that
model, Claire is in the set to which P is assigned by the model; by the same token, to say that
¬P(x) holds of Claire in the model is just to say that in the model in question, Claire is not in the
set to which P is assigned. Similarly, to say that P(x) ∧ Q(x) holds of Claire in the model is to say
that in this model, Claire is in both the set to which P is assigned and the one to which Q is
assigned; to say that P(x) → Q(x) holds of Claire is to say that Claire is in the set to which Q is
assigned if she is in the one to which P is assigned.
Now consider a universally quantified statement, that is, one of the form (∀x)[S]. Such a
statement is defined as being true in a model M with universe U if, and only if, S holds of EVERY
element of U. By the same token, (∃x)[S] is true in M if, and only if, S holds of AT LEAST ONE
element of U. Suppose, for example, that M is as defined just above. Then (∀x)[P(x) → Q(x)] is
true in M since P(x) → Q(x) applies to every individual in U. We already know that it applies to
Alice, and by the same token, it applies to Bob (since, e.g., P(b) → Q(b) is true in M). Although
no individual constant is associated with Claire, we can reassign a so as to be so associated, and
in that case P(a) → Q(a) is true in the associated modification of M; and, by similar reasoning, it
holds as well of Dave. By the same token, (∃x)[P(x) ∧ ¬Q(x)] is false in M since P(x) ∧ ¬Q(x)
does not hold of any element of U. It doesn’t hold of Alice, since P(a) ∧ ¬Q(a) is false in M (by
virtue of the falsity of its second conjunct), and it doesn’t hold of Bob since P(b) ∧ ¬Q(b) is false
in M by virtue of the falsity of its first conjunct. But it also doesn’t hold of Claire, since if a is
reassigned to her then P(a) ∧ ¬Q(a) is false in the modified model, and similarly for Dave.
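The ‘holds of’ idea lends itself to a very direct mechanical reading: quantified statements are checked against the elements of the universe themselves rather than against whatever instances the available constants happen to provide. The following sketch (an illustration only) does this for the model M just defined.

# The model M from this section.
U = {"Alice", "Bob", "Claire", "Dave"}
P = {"Alice", "Claire"}
Q = {"Alice", "Bob", "Claire"}

def implies(p, q):
    return (not p) or q

# (∀x)[P(x) → Q(x)]: the open formula P(x) → Q(x) must hold of every element of U.
universal = all(implies(i in P, i in Q) for i in U)

# (∃x)[P(x) ∧ ¬Q(x)]: the open formula P(x) ∧ ¬Q(x) must hold of at least one element of U.
existential = any((i in P) and (i not in Q) for i in U)

print(universal, existential)   # True False, agreeing with the discussion above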
You may be wondering why we impose the requirement that the individual constant
substituted for the variable in the original formula has to be one which does not already occur in
the formula. For example, if we want to know whether P(a) → Q(x) holds of a given individual,
we must pick something other than a as the constant by which to replace x. If we did not restrict
the process in this way, we would get unwanted results in some cases. For example, suppose we
are given a model M with {Alice, Bob, Claire, Dave} as its universe of discourse in which the
predicates P and Q are both assigned to the set {Alice, Bob} and a is assigned to Alice, and ask
the following question: does P(a) → Q(x) hold of Claire? Since Alice is in the set to which P is
assigned and Claire is not in the set to which Q is assigned, the answer is ‘No’. But if we were to
allow a to replace x in determining formally whether P(a) → Q(x) holds of Claire, we would get
the wrong answer, since P(a) → Q(a) is true in the variant model just like M except that a is
assigned to Claire. Requiring that the constant which replaces x be other than a assures that we
do not, so to speak, load the dice in favor of unwanted possibilities.
You may also be wondering why we have to go through all these elaborate machinations
instead of just saying that a universal statement is true if, and only if, all its instances are true,
and an existential one is true if, and only if, at least one of its instances is true. The whole story is
complicated, and involves mathematical subtleties that are best left for a more advanced course
in logic, but you can get at least an inkling by considering a model M defined as follows:
U = {Alice, Bob, Claire, Dave}
P: {Claire, Dave}
Q: {Alice, Bob}
a: Alice
Every other individual constant: Bob.
Now, consider the statement (∃x)[P(x)]. This is intended to be our way of saying, in LPL, that
there is at least one individual in the universe with property P — which is true: Claire is such an
individual, for example. But every instance of (∃x)[P(x)] is false in M, since the individual
constants have all been associated with individuals other than the members of the set associated
with P. But P(x) applies to both Claire and Dave.
Now consider the statement (∀x)[Q(x)], which is intended to be our way of saying, in LPL,
that every individual has the property Q. This is certainly not the case in M: Claire, for example,
does not have the property. But since every individual constant is associated in the model with an
individual which DOES have the property, all the instances of (∀x)[Q(x)] are true; on the other
hand, there is an individual (Claire) to which Q(x) does not apply.
That said, there is, nonetheless, a circumstance in which a universal statement is true if all its
instances are and an existential one is true only if at least one of its instances is, namely if the
individual constants ‘cover’ the universe — that is, every element of the universe is associated
with a constant. In such a case, the model is said to be COMPLETE. If we could set things up so that
models are always complete, then there would be no problem. Unfortunately, this is not always
possible, though understanding why must be left to a more advanced course.
Demonstration Problems
1. Symbolize the following statements, using quantifiers:
(a) Every doctor is male or female.
(b) Every doctor is male or every doctor is female.
Solution.
Symbolization of predicates:
D: is a doctor
M: is male
F: is female
Symbolizations of statements:
(a) ("x)[D(x) ® (M(x) Ú F(x))]
(b) ("x)[D(x) ® M(x)] Ú ("x)[D(x) ® F(x)]
2. The symbolizations (a-b) in the solution to problem 1 are NOT equivalent. Show this by
means of a model in which one of the statements is true and the other false.
Solution (one of many):
U = {Alice, Bob, Claire, Dave}
D: {Alice, Bob}
M: {Bob, Dave}
F: {Alice, Claire}
a: Alice
b: Bob
c: Claire
d: Dave
(NB, for students who have read the optional section: this model is complete.) Relative to
this model, M(a) ∨ F(a), M(b) ∨ F(b), M(c) ∨ F(c), and M(d) ∨ F(d) are all true; hence, any
conditional which has one of these formulas as its consequent is true, which in turn means
that all instances of (a) are true, making (a) true. However, M(a) and F(b) (for example) are
both false so D(a) → M(a) and D(b) → F(b) are both false; hence each disjunct of (b) is false
(by virtue of having at least one false instance), so (b) is false.
3. Give a negation of
("x)[P(x) ® Q(x)]
which does not contain any occurrences of ∀ or →.
Solution. According to QN, the schema
¬("x)[P(x) ® Q(x)]
($x)[¬(P(x) ® Q(x))]
is valid. Recall, however, that ¬(S → T) is equivalent to S ∧ ¬T. Hence we can replace the
conditional inside the brackets to obtain
(∃x)[P(x) ∧ ¬Q(x)]
Alternate Solution. Recall that the original formula is by definition equivalent to
¬(∃x)[¬(P(x) → Q(x))]
and, by CR, is negated by
(∃x)[¬(P(x) → Q(x))]
Substitution inside the brackets then yields the same result as before.
New Terms
quantifier
existential
universal
individual variable
scope (of a quantifier)
Problems
*1.
Give a formula which negates the existentially quantified statement shown below and
which contains no occurrence of ∃ or ∧ and only one negation sign:
(∃x)[P(x) ∧ Q(x)]
2. Give a formula which negates the universally quantified statement below and which contains
no occurrence of ∀ or ↔ and only one negation sign:
(∀x)[P(x) ↔ Q(x)]
3. In Chapter 13 we presented a number of ‘disguised’ conditionals, that is, sentences that could
be re-expressed as conditionals in canonical form even though they were not originally given
in that form. We noted, however, that there is more to the logic of such sentences than we
dealt with there and promised to return to the issue. This is where we do so. Consider the
following statement (see Demonstration Problem 1 from Chapter 13):
If you are to be admitted to the party then you must be wearing a white carnation.
It seems fairly clear that this sentence contains an IMPLICIT UNIVERSAL QUANTIFIER.
Symbolizing the predicates of the antecedent and consequent respectively by A and W, the
correct symbolization of this sentence is (∀x)[A(x) → W(x)]. Now symbolize each of the
following in LPL.
*a. Whereof we cannot speak, thereof we must remain silent.
(Ludwig Wittgenstein, Tractatus Logico-Philosophicus)
b. A fool and his money are soon parted.
c. Beggars can’t be choosers.
d. No shoes, no shirt, no service.
e. A rose by any other name would smell as sweet.
(Shakespeare, Romeo and Juliet)
17
Deduction With Quantifiers
All the rules of inference which we introduced in the context of SL carry over into PL. However,
there are also some new rules of inference, associated specifically with sentences containing
quantifiers. Here are two of them:
(1)
Universal Instantiation (UI)
A universally quantified formula ENTAILS every one of its instances.
(2)
Existential Generalization (EG)
An existentially quantified formula IS ENTAILED BY every one of its instances.
Examples:
1. Suppose that a line of a deduction is occupied by the formula
("x)[P(x) ® Q(x)]
Then it is possible to have
P(a) ® Q(a)
as the occupant of a subsequent line.*
Explanation. The original statement can be equivalently expressed as
¬(∃x)[P(x) ∧ ¬Q(x)]
which says that there is no individual x in the universe such that P(x) is true and Q(x)
false. Pick an arbitrary individual constant, say a. Then ¬(P(a) ∧ ¬Q(a)) is true; hence,
* Refer back to Chapter 7 if you don’t remember what it means for a statement to occupy a line
of a deduction.
by the definition of →, P(a) → Q(a) is true. And the same will clearly be the case for
any other individual constant we pick.
2. Suppose that a line of a deduction is the formula
P(a) ∧ Q(a)
Then it is possible to have
(∃x)[P(x) ∧ Q(x)]
as the occupant of a subsequent line.
Explanation. The original statement tells us that the individual designated by a has
both the properties designated by P and Q. It follows that there is at least one such
individual in the universe, which is what the existentially quantified statement says.
Here now is an argument which, when symbolized, can be validated by a deduction using both of
these rules:
(3)
Every philosopher is a genius.
No genius is stupid.
Alice is a philosopher.
There is at least one philosopher who isn’t stupid.
Symbolization of predicates:
P: is a philosopher
G: is a genius
S: is stupid
Letting a symbolize Alice we can then symbolize the entire argument as follows:
("x)[P(x) ® G(x)]
¬($x)[G(x) Ù S(x)]
P(a)
($x)[P(x) Ù ¬S(x)]
Validating deduction:
a. ("x)[P(x) ® G(x)]
b. P(a) ® G(a)
c. P(a)
d. G(a)
e. ¬($x)[G(x) Ù S(x)]
f. ("x)[¬G(x) Ú ¬S(x)]
g. ¬G(a) Ú ¬S(a)
h. ¬S(a)
i. P(a) Ù ¬S(a)
j. ($x)[P(x) Ù ¬S(x)] EG
Premise
UI
Premise
MP
Premise
SE, QN, DM
UI
DS, d,g
AI
In general: the strategy for carrying out a deduction containing statements with quantifiers is
to first eliminate the quantifiers so as to produce formulas that can be operated upon with the
principles of SL and then (if necessary) put quantifiers back on at the end.
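Deduction aside, the definition of validity in terms of models suggests another kind of check: hunt for a countermodel mechanically. The sketch below is an illustration only and no substitute for the deduction (failing to find a countermodel over one small universe proves nothing about all models), but it does search every assignment of subsets of a three-element universe to P, G and S, and of an element to a, and finds no model in which the premises of (3) are true and the conclusion false.

from itertools import combinations, product

U = {1, 2, 3}   # a small sample universe; the elements are arbitrary labels

def subsets(u):
    # All subsets of the universe u.
    return [set(c) for r in range(len(u) + 1) for c in combinations(u, r)]

def countermodel(P, G, S, a):
    # True iff the premises of argument (3) are true and its conclusion false.
    prem1 = all((x not in P) or (x in G) for x in U)      # (∀x)[P(x) → G(x)]
    prem2 = not any((x in G) and (x in S) for x in U)     # ¬(∃x)[G(x) ∧ S(x)]
    prem3 = a in P                                        # P(a)
    concl = any((x in P) and (x not in S) for x in U)     # (∃x)[P(x) ∧ ¬S(x)]
    return prem1 and prem2 and prem3 and not concl

counterexamples = [(P, G, S, a)
                   for P, G, S in product(subsets(U), repeat=3)
                   for a in U
                   if countermodel(P, G, S, a)]
print(counterexamples)   # [] : no countermodel is found over this universe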
In keeping with the consistent emphasis on the analogy between universal quantification and
conjunction and between existential quantification and disjunction, I call your attention at this
point to the fact that UI is the quantificational analogue of AE and EG the analogue of OI.
Like all rules of inference, UI and EG must be applied TO THE ENTIRE STATEMENT WHICH
APPEARS ON A GIVEN LINE. (Recall the discussion of non-occupancy errors in Chapter 7.) To see
what happens if we don’t adhere to this requirement, consider the following example:
a. ¬("x)[¬(P(x) Ú Q(x))]
Premise
b. ¬(P(a) Ú Q(a))
UI
ERROR
c. ¬P(a) Ù ¬Q(a)
SE, DM
d. ¬P(a)
AE
To see that line (d) is not entailed by line (a), consider the following (complete) model:
U = {Alice, Bob, Claire, Dave}
P: {Alice}
Q: {Bob, Dave}
a: Alice
b: Bob
c: Claire
d: Dave
If ("x)[P(x) Ú Q(x)] is true in the model, then so are all its instances; but P(d) Ú Q(d) is false, so
the statement is false in the model and the premise of the supposed deduction above is true. But
¬P(a) is false in the model, so line (d) is not entailed by line (a).
Demonstration Problems
1. Validate the following schema deductively.
("x)[P(x) ® Q(x)]
P(a)
($x)[Q(x)]
Solution.
a. ("x)[P(x) ® Q(x)]
b. P(a) ® Q(a)
c. P(a)
d. Q(a)
e. ($x)[Q(x)]
Premise
UI
Premise
MP
EG
Discussion. This deduction can be naturally divided into two phases. The goal of the first
phase is to derive line (d), to which EG can then be applied; the second phase is the actual
application of EG to obtain the conclusion.
2. Symbolize and validate the following argument.
All dogs are intelligent or vicious.
Fido is a dog.
Fido is not vicious.
Some dogs are intelligent.
Symbolization of predicates:
D: is a dog
I: is intelligent
V: is vicious
Symbolization of names:
f: Fido
Symbolization of argument:
("x)[D(x) ® (I(x) Ú V(x))]
D(f)
¬V(f)
($x)[D(x) Ù I(x)]
Validation:
a. ("x)[D(x) ® (I(x) Ú V(x))]
b. D(f) ® (I(f) Ú V(f))
c. D(f)
d. I(f) Ú V(f)
e. ¬V(f)
f. I(f)
g. D(f) Ù I(f)
h. ($x)[D(x) Ù I(x)]
Premise
UI
Premise
MP
Premise
DS
AI, c,f
EG
We now introduce two further rules of inference. Warning: These rules are trickier than UI and
EG and care must be taken in using them.
EXISTENTIAL INSTANTIATION (EI)
If an existentially quantified statement occupies a line of a deduction, an instance of it may
occupy a later line PROVIDED THAT the individual constant introduced in the course of the
instantiation does not appear EARLIER IN THE DEDUCTION, in a PREMISE or in the FINAL LINE of
the deduction.
First we’ll look at an example of a deduction which makes use of this rule and then we’ll take up
the question of why it’s restricted as it is.
The following is a valid argument schema:
("x)[P(x) ® Q(x)]
($x)[P(x)] __
($x)[Q(x)]
Here now is a validating deduction which makes use of EI:
a. (∃x)[P(x)]             Premise
b. P(a)                   EI
c. (∀x)[P(x) → Q(x)]      Premise
d. P(a) → Q(a)            UI
e. Q(a)                   MP, b,d
f. (∃x)[Q(x)]             EG
The reasoning here can be broken down as follows. Let M be a model with universe U and
suppose that the premises of our schema are both true in M. Then there’s at least one individual in
U — let’s call that individual i — which is in the set named by P. Now, if M assigns a to i, then
P(a) is true in M and, since (∀x)[P(x) → Q(x)] is also true in M, so, by UI, is P(a) → Q(a). Hence,
Q(a) is true in M, by MP, and the conclusion of the schema then follows by EG — so it too is
true in M.
But what if M DOESN’T assign a to i? Then the reasoning goes like this. We can construct a
model M′ just like M except that M′ assigns a to i, in which case P(a) is true in M′, likewise
(∃x)[P(x)]. If the premise (∀x)[P(x) → Q(x)] is true in M, then it is also true in M′. So both
premises of the schema are true in M′, as is P(a). But then the conclusion of the schema is also
true in M′ since it is entailed by the two premises and P(a). But if the conclusion is true in M′, it
must also be true in M since the only difference between M and M′ is in the individual assigned
to a, and a does not occur anywhere in the statement.
In sum: if i is in the set to which P is assigned by M, then if M assigns a to i,
P(a) is true in M and, since (∀x)[P(x) → Q(x)] and P(a) entail the conclusion, the
conclusion is true in M; if, on the other hand, M does not assign a to i, but M′ does, then Q(a) is
true in M′ — in which case at least one element of U is in the set assigned by M to Q, and the
conclusion is accordingly true not just in M′ but in M as well. In other words, NO MATTER
WHAT INDIVIDUAL M ASSIGNS TO a, if the premises of the schema are true in M, then so is the
conclusion. Since M can be any model whatsoever in which the premises are true, the conclusion
has been shown to be true in every such model — which is just another way of saying that the
schema is valid.
Now, why the restrictions on the use of EI? Consider the following argument, which is
INVALID:
(∃x)[P(x)]
(∃x)[Q(x)]
(∃x)[P(x) ∧ Q(x)]
To see the invalidity of the argument, consider a model in which P and Q denote non-empty
subsets of the universe which have no members in common. Then the premises are both true
since the set associated with P and the one associated with Q are both non-empty; but the
conclusion is false since it says that there is at least one individual common to both sets, whereas
these have been defined as having no common members. Now consider the following erroneous
deduction, and note exactly where the error occurs:
a. (∃x)[P(x)]             Premise
b. (∃x)[Q(x)]             Premise
c. P(a)                   EI, a
d. Q(a)                   EI, b      [ERROR]
e. P(a) ∧ Q(a)            AI
f. (∃x)[P(x) ∧ Q(x)]      EG
The second application of EI is illegitimate since it results in the introduction of an individual
constant, namely a, that has occurred earlier — namely in the immediately preceding line. Note,
however, that the following is perfectly permissible:
a. (∃x)[P(x)]             Premise
b. (∃x)[Q(x)]             Premise
c. P(a)                   EI, a
d. Q(b)                   EI, b
e. P(a) ∧ Q(b)            AI
But now there is no way to obtain the conclusion of the argument by EG. This rule allows us to
introduce a quantifier and replace a by the associated variable, or to replace b thereby, BUT NOT
BOTH. The reason is that line (e) of this deduction is not an instance of (∃x)[P(x) ∧ Q(x)]. (Why
not?)
The restriction on EI also forbids introducing, via this rule, an individual constant that
appears in a premise. To see why this prohibition is assumed, consider the following invalid
schema:
P(a)
Q(b)
(∃x)[P(x) ∧ Q(x)]
To see that the schema is invalid, consider a model in which U = {Alice, Bob, Claire, Dave}, a
and b are associated with Alice and Bob respectively and P and Q with {Alice} and {Bob}
respectively. Then the premises of the above schema are true but the conclusion is false. But now
suppose that we attempt to validate the schema via the following deduction:
a. Q(b)                   Premise
b. (∃x)[Q(x)]             EG
c. Q(a)                   EI      [ERROR]
d. P(a)                   Premise
e. P(a) ∧ Q(a)            AI
f. (∃x)[P(x) ∧ Q(x)]      EG
Note that we’ve avoided introducing a constant that occurs earlier by bringing in the premise
P(a) later in the deduction — hence the need to prohibit introducing constants by EI that occur in
premises of the argument being validated.
The general point illustrated by these examples is simply this. An application of EI amounts
to our saying ‘We know that there’s at least one individual of whom such-and-such is the case.
For the sake of argument, let’s suppose that so-and-so is the individual in question.’ This is
perfectly legitimate conditional reasoning as long as we don’t stack the deck: for though we
know that such-and-such is the case for SOME individual, WE HAVE NO ASSURANCE THAT THE
INDIVIDUAL IN QUESTION IS ONE MENTIONED IN A PREMISE OR IN ANYTHING DEDUCED FROM A PREMISE.
It is the absence of such assurance which underlies the prohibition, whose purpose is to keep us
from sneaking additional premises in through the back door.
On the other hand, we also have no assurance that the individual in question is DIFFERENT
from one mentioned in a premise or in something deduced from a premise. But recall that it is
possible for more than one individual constant to be associated with a given individual.
Consequently, observing the prohibition is consistent with the POSSIBILITY (if not the certainty)
that the individual named by the instance is the same as one named in a premise or some earlier
statement.
We turn now to the motivation for the prohibition on final lines of deductions containing
individual constants introduced by EI. Without this prohibition we would allow invalid
arguments like this one:
Some dogs are intelligent.
Fido is intelligent.
Suppose that we let is a dog, is intelligent and Fido be symbolized respectively by D, I and f.
Then the argument may be symbolized thus:
(∃x)[D(x) ∧ I(x)]
I(f)
One way to rigorously show the invalidity of the argument is to let U = {Fido, Prince, Rover}, to
associate I with {Prince, Rover}, D with U as a whole and f with Fido. Then the premise is true
but the conclusion is false. Here now is a deduction which nonetheless seems to validate this
schema:
a. (∃x)[D(x) ∧ I(x)]      Premise
b. D(f) ∧ I(f)            EI
c. I(f)                   AE      [ERROR]
Now, the point here is not that it’s illegitimate to pass from (b) to (c) in the course of the
deduction. But THE DEDUCTION CAN’T END HERE! On the other hand, the valid schema
(∃x)[D(x) ∧ I(x)]
(∃x)[I(x)]
IS validated by the following deduction (whose first three steps are just like the ones shown
above):
a. (∃x)[D(x) ∧ I(x)]      Premise
b. D(f) ∧ I(f)            EI
c. I(f)                   AE
d. (∃x)[I(x)]             EG
As we did before, let’s now step away again from the specific example to consider what is
involved here. When we say that we know that there’s at least one individual of which such-and-
such is the case, and suppose for the sake of the discussion that it’s so-and-so (subject to the
other parts of the restriction on EI already discussed), it needs to be understood that the
PARTICULAR individual selected is of no importance: the idea, rather, is to show that NO MATTER
WHAT INDIVIDUAL we assume for the sake of the argument, the argument is valid. Or, to say the
same thing a different way, the conclusion of the argument CANNOT DEPEND IN ANY WAY ON THE
PARTICULAR INDIVIDUAL CHOSEN — any OTHER choice must work just as well. But in the first case
under consideration (the invalid one), only one choice works: f. In the second case, we’re free to
choose any constant we wish in line (b) — it happens, in this specific example, to be f, but
nothing hinges on this specific choice.
The rationale for EI being restricted as it is points to a way in which it is different from any
other rule of inference considered so far. Up to this point, we could always say that the
application of a rule of inference gave us a line of a deduction entailed by one or more preceding
lines. When we apply EI, however, this is untrue: (∃x)[D(x) ∧ I(x)] does not entail D(f) ∧ I(f), for
example. That doesn’t necessarily make our reasoning illegitimate, but it DOES require that we
keep unwanted assumptions from creeping in. The restriction imposed on the use of EI holds
such unwanted assumptions at bay.
Because correct use of EI requires that we keep track of what individual constants have been
introduced in the course of the deduction, and whether or not they occur in premises, we will
henceforth adhere to the practice of ‘flagging’ lines in which constants appear for the first time,
as well as premises containing constants. We do this by writing whatever constants appear in
these lines off to the right, as in the following example:
a. Q(b)                   Premise      b
b. (∃x)[Q(x)]             EG
c. Q(a)                   EI           a      [ERROR — see line (d)]
d. P(a)                   Premise      a
e. P(a) ∧ Q(a)            AI
f. (∃x)[P(x) ∧ Q(x)]      EG
Demonstration Problems
3. Validate the following schema.
("x)[(P(x) Ù Q(x)) ® R(x)]
($x)[P(x) Ù Q(x) ]
__
($x)[P(x) Ù R(x)]
Solution.
a. ("x)[(P(x) Ù Q(x)) ® R(x)]
Premise
b. ($x)[P(x) Ù Q(x) ]
Premise
c. P(a) Ù Q(a)
EI
d. (P(a) Ù Q(a)) ® R(a)
UI
e. R(a)
MP
f. P(a)
AE, c
g. P(a) Ù R(a)
AI
h. ($x)[P(x) Ù R(x)]
EG
a
Discussion. Note that this deduction contains occurrences of a that arise by EI and others that do
not.
For Future Reference. As an additional aid in making sure that individual constants introduced
by EI don’t appear earlier in the deduction, IF POSSIBLE, APPLY EI BEFORE UI IN ANY DEDUCTION IN
WHICH BOTH OCCUR.
4. Symbolize and validate the following argument.
Some poodles are intelligent.
All poodles are dogs.
Some dogs are intelligent.
Solution.
Symbolization of predicates:
P: is a poodle
I: is intelligent
D: is a dog
Symbolization of argument:
($x)[P(x) Ù I(x)]
("x)[P(x) ® D(x)]
($x)[D(x) Ù I(x)]
Validation:
a. ($x)[P(x) Ù I(x)]                Premise
b. ("x)[P(x) ® D(x)]                Premise
c. P(a) Ù I(a)                      EI, a         a
d. P(a) ® D(a)                      UI, b
e. P(a)                             AE, c
f. D(a)                             MP
g. I(a)                             AE, c
h. D(a) Ù I(a)                      AI
i. ($x)[D(x) Ù I(x)]                EG
5. Validate the following.
P(a)
($x)[P(x) Ù Q(x)]
_______________
P(a) Ù ($x)[Q(x)]
Solution.
a. P(a)                             Premise       a
b. ($x)[P(x) Ù Q(x)]                Premise
c. P(b) Ù Q(b)                      EI            b
d. Q(b)                             AE
e. ($x)[Q(x)]                       EG
f. P(a) Ù ($x)[Q(x)]                AI, a,e
Discussion. Here EI on line (b) introduces the constant b since a occurs in a premise — as
shown by the flag at the end of line (a).
6. Here is a schema followed by a ‘deduction’ which purports to validate it:
($x)[P(x)]
Q(a)
P(b) Ù Q(a)
a. ($x)[P(x)]                       Premise
b. Q(a)                             Premise       a
c. P(b)                             EI, a         b
d. P(b) Ù Q(a)                      AI
Show, via an appropriately constructed model, that the schema is invalid and identify the
error in the deduction.
Solution.
Let U = {Akron, Boston, Cleveland, Detroit}. Associate P with {Cleveland}, Q with
{Akron}, a with Akron and b with Boston. Then both premises are true but the conclusion is
false (since P(b) is false). The error in the deduction is that b is introduced by EI and appears
in the final line.
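As a quick mechanical check of this model, here is a Python sketch (my own, not part of the text):

    U = {"Akron", "Boston", "Cleveland", "Detroit"}
    P = {"Cleveland"}
    Q = {"Akron"}
    a, b = "Akron", "Boston"

    premise1 = any(x in P for x in U)         # ($x)[P(x)]
    premise2 = a in Q                         # Q(a)
    conclusion = (b in P) and (a in Q)        # P(b) Ù Q(a)
    print(premise1, premise2, conclusion)     # prints: True True False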
We now consider one last rule of inference, called UNIVERSAL GENERALIZATION (UG).
UNIVERSAL GENERALIZATION (UG)
If an instance S of a universally quantified formula T occupies a line of a deduction then T
may occupy a later line of the deduction PROVIDED THAT
(i) no constant in S replaced by a variable in forming T occurs in a premise or arises via EI; and
(ii) no occurrence of the constant in S replaced by the variable in T remains in T.
Here now is a valid schema accompanied by a deduction using UG which validates it:
("x)[P(x) Ù Q(x)]
("x)[P(x)]
a. ("x)[P(x) Ù Q(x)]
b. P(a) Ù Q(a)
c. P(a)
d. ("x)[P(x)]
Premise
UI
AE
UG
a
Now, it’s important to point out that we could just as well have validated this schema without
UG, albeit in a more roundabout way, by means of the following deduction:
a. ("x)[P(x) Ù Q(x)]
b. ¬("x)[P(x)]
c. ($x)[¬P(x)]
d. ¬P(a)
e. P(a) Ù Q(a)
f. P(a)
g. P(a) Ù ¬P(a)
h. S Ù ¬S
Premise
Premise
SE, QN
EI
UI
a
AE
AI
d, f
*
EFQ
a
Contradiction
Before we go on, a comment about line (g). Note that a contradiction is reached at line (f) —
compare line (d) — so why didn’t we just stop at that point? The answer is that one of the
provisos on EI requires that the individual constant introduced in an application of the rule not
appear in the last line of the deduction. We get rid of a by using EFQ to obtain a contradictory
statement containing no individual constants at all. (Recall that we may use literals of LSL to
represent entire statements.) However, in indirect deductions by Method 2 no real damage is
done if we do not require this extra step: as soon as a line is obtained which contradicts an earlier
one, we know that there is a way, via EFQ, to get a contradictory statement as the next line, so
we may as well make life easier on ourselves and allow omission of that final step. So, in a
validation by Method 2, BUT ONLY IN THIS CASE, we'll allow truncated deductions like the one shown below:
a. ("x)[P(x) Ù Q(x)]                Premise
b. ¬("x)[P(x)]                      Premise
c. ($x)[¬P(x)]                      SE, QN
d. ¬P(a)                            EI            a
e. P(a) Ù Q(a)                      UI, a
f. P(a)                             AE            Contradiction (see line (d))
Even with this shortcut, the indirect way of validating the schema takes longer than the direct
way using UG, so UG is a useful way of shortening the validation process. However, you can
live without UG as long as you’re willing to pay the price of more complicated validations.
*
See problem 3 for Chapter 8.
Now for the explanation of the two provisos on UG. To see the motivation for (i) consider
first the clearly invalid schema
P(a)
("x)[P(x)]
Suppose that proviso (i) were not in effect. Then the following deduction would validate the
schema.
a. P(a)                             Premise
b. ("x)[P(x)]                       UG        [ERROR]
Next, consider the invalid schema
("x)[P(x) ® Q(x)]
($x)[P(x)]
("x)[Q(x)]
First of all, let’s be sure we understand why this schema is invalid. Let U = {Alice, Bob, Claire,
Dave} and associate both P and Q with {Alice}, and a with Alice. The premises are then both
true but the conclusion is false. Now consider the following (erroneous) deduction:
a. ("x)[P(x) ® Q(x)]
b. ($x)[P(x)]
c. P(a)
d. P(a) ® Q(a)
e. Q(a)
f. ("x)[Q(x)]
Premise
Premise
EI
a
UI, a
MP
UG
[ERROR]
The reason that UG is inadmissible in these circumstances is that the constant a is introduced by
EI (line (c)).
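Going back to the model given a moment ago, with P and Q both associated with {Alice}, a short Python sketch (mine, not the text's) confirms that both premises are true while the conclusion ("x)[Q(x)] is false:

    U = {"Alice", "Bob", "Claire", "Dave"}
    P = {"Alice"}
    Q = {"Alice"}

    premise1 = all((x not in P) or (x in Q) for x in U)   # ("x)[P(x) ® Q(x)]
    premise2 = any(x in P for x in U)                     # ($x)[P(x)]
    conclusion = all(x in Q for x in U)                   # ("x)[Q(x)]
    print(premise1, premise2, conclusion)                 # prints: True True False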
The underlying rationale for proviso (i) on UG is parallel to that for the prohibition on letting
constants introduced by EI occur in the final line of a deduction: it should be possible to
generalize from a particular instance of a statement to a universal quantification of which it’s an
instance ONLY IF THE CHOICE OF THE CONSTANT IN THE INSTANCE DOES NOT DEPEND ON ANY SPECIAL
ASSUMPTIONS. But the occurrence of a constant in a premise or its introduction via EI DOES
amount to such a special assumption. If, in order to deduce a particular formula we have to rely
on such an assumption then UG is not possible.
To see why it’s necessary to impose proviso (ii) on UG, consider the following invalid
schema:
("x)[P(x) ® Q(x)]
P(b) ® Q(a)
That the schema is invalid can be seen by considering a model whose universe of discourse
consists of Alice, Bob, Claire and Dave and which associates P with {Bob, Claire}, Q with
{Bob, Claire, Dave}, a with Alice and b with Bob. The premise is accordingly true in the model,
but the conclusion is false. Now consider the following supposed validation of the schema:
a. ("x)[P(x) ® Q(x)]
Premise
b. P(a) ® Q(a)
UI
c. ("x)[P(x) ® Q(a)]
UG
d. P(b) ® P(a)
UI
a
b
But there is an error in line (c): although one occurrence of a has been replaced by the variable x,
the other one has not, in violation of proviso (ii).
The rationale here is that even though a constant may have entered the deduction via UI, so
that its presence doesn’t depend on any special assumptions, it doesn’t necessarily follow that
every instance of the result of the generalization is true. In the example under consideration, for
example, P(b) ® Q(a) is false and yet is an instance of line (c); note, however, that the formula
in question is NOT an instance of line (a). The point is that line (a) requires that the SAME constant
occur in both the antecedent and consequent of any conditional which constitutes an instance of
it, whereas line (c) allows for instances in which this is not the case. Hence line (c) can’t be
entailed by (a).
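A small Python sketch (again mine, not the text's) makes this concrete: in the model described above, the premise (line (a)) is true, while P(b) ® Q(a), which is both the conclusion of the schema and an instance of line (c), is false:

    U = {"Alice", "Bob", "Claire", "Dave"}
    P = {"Bob", "Claire"}
    Q = {"Bob", "Claire", "Dave"}
    a, b = "Alice", "Bob"

    premise = all((x not in P) or (x in Q) for x in U)   # ("x)[P(x) ® Q(x)]
    instance = (b not in P) or (a in Q)                  # P(b) ® Q(a)
    print(premise, instance)                             # prints: True False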
Now, understanding all of this poses some challenges and you might well not feel entirely
confident about employing UG. If not, don’t worry since UG is actually not essential to the
deductive process. Why then did we bother to introduce it at all? The answer is that it frequently
makes deductions quicker. But when in doubt, avoid it!
Demonstration Problems
7. Validate the following schema:
("x)[P(x) ® Q(x)]
("x)[Q(x) ® R(x)]
________________
("x)[P(x) ® R(x)]
Solution.
a. ("x)[P(x) ® Q(x)]
Premise
b. ("x)[Q(x) ® R(x)]
Premise
c. P(a) ® Q(a)
UI, a
d. Q(a) ® R(a)
UI, b
e. P(a) ® R(a)
HS
f. ("x)[P(x) ® R(x)]
UG
a
Alternate Solution.
The following indirect deduction does not make use of UG:
a. ("x)[P(x) ® Q(x)]
Premise
b. ("x)[Q(x) ® R(x)]
Premise
c. ¬ ("x)[P(x) ® R(x)]
Premise
d. ($x)[¬(P(x) ® R(x))]
SE, QN
e. ¬(P(a) ® R(a))
EI
f. P(a) Ù ¬R(a)
SE, Def. of ®
g. ¬R(a)
AE
h. Q(a) ® R(a)
UI
i. ¬Q(a)
MT
j. P(a) ® Q(a)
UI
k. ¬P(a)
MT
l. P(a)
AE
8. Validate the following schema:
("x)[P(x) ® (Q(x) Ú R(x))]
¬($x)[Q(x)]
("x)[P(x) ® R(x)]
___
a
b
a
f
Contradiction
Solution.
Lemma A
S ® (T Ú U)
¬T
__
S ® U
Validation of Lemma:
a. S ® (T Ú U)                      Premise
b. ¬T                               Premise
c. ¬(S ® U)                         Premise
d. ¬¬(S Ù ¬U)                       SE, Def. of ®
e. S Ù ¬U                           SE, CR
f. S                                AE
g. ¬U                               AE, e
h. ¬T Ù ¬U                          AI, b, g
i. ¬(T Ú U)                         SE, DM
j. ¬S                               MT, a, i      Contradiction (see line f)
Validation of Main Schema:
a. ("x)[P(x) ® (Q(x) Ú R(x))]
b. ¬($x)[Q(x)]
c. ("x)[¬Q(x)]
d. P(a) ® (Q(a) Ú R(a))
e. ¬Q(a)
f. P(a) ® R(a)
g. ("x)[P(x) ® R(x)]
Premise
Premise
SE, QN
UI, a
UI, c
Lemma A
UG
a
Alternate Validation of Main Schema:
a. ("x)[P(x) ® (Q(x) Ú R(x))]
b. ¬($x)[Q(x)]
c. ¬("x)[P(x) ® R(x)]
d. ($x)[¬(P(x) ® R(x))]
e. ¬(P(a) ® R(a))
f. P(a) Ù ¬R(a)
Premise
Premise
Premise
SE, QN
EI
SE, Def. of ®
a
A BEGINNING COURSE IN MODERN LOGIC
g.
h.
i.
j.
k.
l.
m.
P(a)
P(a) ® (Q(a) Ú R(a))
Q(a) Ú R(a)
¬R(a)
Q(a)
("x)[¬Q(x)]
¬Q(a)
200
AE
UI
MP
AE
DS
SE, QN
UI
a
f
b
Contradiction (see line (k))
9. This problem is in two parts.
Part I.
Show that the schema
¬("x)[P(x)]
¬P(a)
is INVALID.
Solution. Consider a model M whose universe consists of Alice, Bob, Claire and Dave in
which a and b are assigned to Alice and Bob respectively and in which P is assigned to
{Alice}. Then the premise of the schema is true in M, but the conclusion is false in M (since
P(a) is true in M).
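For those who want to check M mechanically, here is a small Python sketch (not part of the text) confirming that the premise is true and the conclusion false:

    U = {"Alice", "Bob", "Claire", "Dave"}
    P = {"Alice"}
    a = "Alice"

    premise = not all(x in P for x in U)   # ¬("x)[P(x)]
    conclusion = not (a in P)              # ¬P(a)
    print(premise, conclusion)             # prints: True False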
Part II.
Explain what is wrong with the following ‘deduction’, which seems to validate the schema in
Part I.
a.
¬("x)[P(x)]
Premise
b.
¬P(a)
UI
Answer. UI says that a universal quantification entails every one of its instances. But the
occupant of line (a) is NOT a universal quantification — it’s the NEGATION of a universal
quantification. The invocation of UI, in other words, involves a non-occupancy error. The
commission of such errors is one of the most common mistakes made by students working
with predicate logic — you have been warned!
Optional Section: More About the Provisos on EI and UG
We know from examples given in the chapter that the various provisos on EI and UG are
NECESSARY in order for the rules to work properly. But we also need to be sure that they’re
SUFFICIENT — THAT NO PROVISOS BEYOND THE ONES GIVEN ARE NEEDED. In this section we’ll show
why the provisos given are indeed sufficient. We’re going to do this in two steps. First, we’ll
show that for every deduction in which UG is involved, there’s a corresponding one in which
UG is not involved (a ‘non-UG deduction’). Then we’ll show that the provisos on EI are
sufficient to assure that the final line of every non-UG deduction is true if the premises of the
deduction are all true.
Every deduction in which UG is involved conforms to the following scheme, or one just like
it but for the variable x:
:
---------------------------------------------- Premises
:
I
:
("x)[S]
UG
:
where I is an instance of ("x)[S] containing, say, the individual constant a in place of x, and
where a occurs neither in a premise nor in S.
Now consider the schema whose premises are the same as those of the deduction just above
and whose conclusion is ("x)[S]. We can validate this schema WITHOUT USING UG indirectly, by
Method 2, as follows:
:
---------------------------------------------- Premises of original deduction
¬("x)[S]                            Premise
($x)[¬S]                            SE, QN
¬I                                  EI            a
:
I                                   (Contradiction)
If a does not occur either in a premise of the original deduction or in S, it does not occur in any
premise of the second deduction. The introduction of a by EI in the second deduction where
shown is accordingly fully in accordance with the requirement that it not occur in a premise or
earlier in the deduction. (That it DOES occur in the final line of the deduction is due only to our
having availed ourselves of a permissible shortcut.) We obtain I in the second deduction in
exactly the same way as it was obtained in the original one. But a must not have been introduced
by EI in the original deduction, since if it had been — contrary to the proviso — then between
the application of EI in the second deduction to obtain ¬I and the final line there would be
another application of EI in violation of the proviso that a not appear earlier in the deduction.
Now, the second deduction amounts to nothing more than the validation of a lemma which
gets us ("x)[S] from the original premises; but since we can do this anywhere where the provisos
on UG are satisfied, we can just skip the lemma and use UG itself. Assuming that the provisos on
EI are sufficient, this amounts to showing the sufficiency of the provisos on UG, since
satisfaction of these provisos guarantees satisfaction of the provisos on EI in a validation of the
lemma conforming to the second scheme. But of course we must still show the sufficiency of the
provisos on EI.
For the moment we’re going to confine our attention to deductions in which all the premises
occur at the beginning. If we restrict deductions in this way, then we can ignore the proviso
about premises, since its work is done by the one about earlier lines in the deduction. Suppose
that we’re given a non-UG deduction (an alleged deduction, anyway) which conforms to the
following outline:
($x)[S]
:
I                                   EI            a
:
C
where I is an instance of ($x)[S] in which a replaces all occurrences of x in S, and the application
of EI by which I is obtained is the only one in the deduction. Suppose further that C is false in
some model M in which the premises of the deduction are all true. It’s helpful to think of this
(alleged) deduction as divided up in the way shown below:
:
($x)[S]                                                       Block 1
:
----------------------------------------------
I                                   EI            a
----------------------------------------------
:
C                                                             Block 2
----------------------------------------------
Suppose that every non-premise line in block 1 is obtained by the correct application of a
rule, likewise for every statement in block 2. Then all the statements in block 1 are true in M.
From this, it follows that any statement in block 2 which is false in M must have been obtained in
a way which involves the statement I. It also follows that I is false in M, since if it were true in
M, everything in block 2 would be true as well — including C.
Let’s now suppose that C contains no occurrences of the individual constant a. Then C is
false not only relative to M but also relative to every model which is just like M with the possible
exception of the individual associated with a. (Such a model is called an a-VARIANT of M.) Now,
let M´ be an a-variant of M relative to which I is true. (I leave it to you to figure out how we
know that there IS such an a-variant — see problem 4 for this chapter.) Since C is false relative to
M´, some statement in block 1 of the deduction must also be false relative to M´. But since the
only difference between M´ and M is the individual with which a is associated, there must
accordingly be an occurrence of a somewhere in the block — that is, earlier in the deduction
than the line consisting of I. So if the error in the deduction does not consist of there being an
occurrence of a in C, the only other possibility is that I has an individual constant with an
earlier occurrence in the deduction.
Let’s now remove the restriction on where the premises can come and suppose, in particular,
that we have a deduction conforming to the scheme shown above in which at least one of the
premises is in block 2. Assume, as before, that every non-premise line in blocks 1 and 2 is
obtained by means of correct application of a rule. Again as before, let M be a model in which all
the premises are true and C is false, and M´ an a-variant of M in which I is true. Suppose that
there are no occurrences of a in block 1; then there is a premise in block 2 containing an
occurrence of a — if there weren’t, then it would be possible to rearrange the deduction in such a
way as to put all the premises at the beginning, in which case — as we’ve just shown — the
provisos on EI would be sufficient to assure that C is true as long as the other lines of the
deduction are all obtained by correct application of rules. So: if the premises of the deduction are
all true, all non-premise lines are obtained by correct application of rules, but C is false and
contains no occurrences of a, then there is an occurrence of a either in a premise or in one of the
lines of block 1.
It remains to show that we will never get unwanted results from ANY application of EI as long
as we adhere to the provisos. Suppose that we have a deduction conforming to the following
scheme:
:
($x)[S]
:
I                                   EI            a
:
C
where the application of EI shown is the first application of this rule in the deduction and C is the
last non-premise line before the next application of EI. We could, if we wished, treat the schema
whose premises are the premises of the original deduction and whose conclusion is C as a
lemma, and, having validated the lemma, use it in a new deduction just like the original one
except that block 1 consists just of C, justified by the lemma. But we can apply the same tactic to
every subsequent application of EI. As long as the provisos are observed, no application of the
rule interferes with any other, so the provisos are sufficient no matter how many times EI is
invoked in the course of a deduction.
New Terms
Universal Instantiation
Existential Instantiation
Universal Generalization
Existential Generalization
Problems
1. Validate each of the following by means of a deduction:*
*a.
P(a)
¬P(b)
($x)[P(x)] Ù ($x)[¬P(x)]
b.
P(a)
Q(a)
_
($x)[P(x) Ù Q(x)] Ù ($x)[P(x)] Ù ($x)[Q(x)]
c.
P(a) Ú Q(a)
¬($x)[P(x) Ù Q(x)]
P(a) ® ¬Q(a)
d.
P(a) Ù ¬Q(a)
¬("x)[P(x) ® Q(x)]
2. Give a deduction to validate each of the following:
*a.
($x)[P(x) Ù Q(x)]
($x)[P(x)] Ù ($x)[Q(x)]
b.
("x)[P(x) ® Q(x)]
R(a)
($x)[P(x)]
($x)[Q(x)] Ù ($x)[R(x)]
*
These problems can be solved without EI or UG.
("x)[P(x) ® Q(x)]
c.
("x)[P(x)] ® ("x)[Q(x)]
d.
P(a)
("x)[Q(x)]
___
($x)[P(x) Ù Q(x)]
e.
¬($x)[P(x) Ù Q(x)]
("x)[P(x)]
("x)[¬Q(x)]
f.
P(a)
("x)[Q(x)]
_____________________
P(a) Ù ("x)[P(x) ® Q(x)]
*3.
We observed earlier that UI is analogous to the SL rule AE, and that EG is analogous to OI.
To what rule is UG analogous? Explain.
*4.
The demonstration of the sufficiency of the provisos on EI depends on there being an a-variant of M in which the statement I is true. Prove that there is such an a-variant.
18
Relations and Polyadic Predication
Suppose that we have a model whose universe consists of the cities of Amsterdam, Boston,
Chicago, Miami, Minneapolis, New York, Paris and Washington. Let each of these be referred to
by an individual constant of LPL as shown below.
a: Amsterdam
b: Boston
c: Chicago
i: Miami
m: Minneapolis
n: New York
p: Paris
w: Washington
Now consider a statement like Chicago is south of Minneapolis or Paris is south of Amsterdam.
How are we to make such statements in LPL? One way would be to associate a predicate with
the set of cities south of Minneapolis, another with the set of those south of Amsterdam, and so
on — as illustrated below.
A: south of Amsterdam
B: south of Boston
C: south of Chicago
M: south of Minneapolis
N: south of New York
The following statements would then have the symbolizations shown:
Chicago is south of Minneapolis.          M(c)
Miami is south of Chicago.                C(i)
New York is south of Boston.              B(n)
Paris is south of Amsterdam.              A(p)
Washington is south of New York.          N(w)
But there is a better way, making use of a kind of predicate we haven’t seen before, which
combines not with a single individual constant or variable but with a PAIR of such symbols — as
in S(c, m). This statement is to be understood as attributing to Chicago and Minneapolis a certain
relationship — say that the former is south of the latter. We now need only one predicate to
symbolize our sentences about relative locations of city-pairs: S(c, m), S(i, c) and so on. The
predicate S is accordingly called a TWO-PLACE, or DYADIC, predicate (the predicates we’ve been
considering thus far are ONE-PLACE, or MONADIC).
Another way in which dyadic predicates are helpful can be seen from consideration of the
following argument:
Alice loves herself.
There is someone (or something) that Alice loves.
If we symbolize loves by the dyadic predicate L and Alice by the individual constant a then we
can symbolize this argument as follows:
L(a, a)
($x)[L(a, x)]
(Validating the schema is left as an exercise.) Suppose on the other hand that we were confined
to monadic predicates. Then we would need a predicate — say M — to refer to the property of
loving oneself, and a different one — say A — to refer to the property of being loved by Alice.
The symbolization of our argument would then be
M(a)
A(a)
But there is no way given our rules of inference to validate this schema. We could, of course,
simply add a new rule for this purpose (though it would be problematical since we surely do not
want in general to be able to deduce A(a) from M(a)). But since we’ve already seen that we can
simplify things in one way by having dyadic predicates (in that we can avoid having to have
separate predicates meaning ‘south of Chicago’, ‘south of Minneapolis’ and so on) and since by
allowing such predicates we can avoid having to add further rules of inference (not to mention
having to deal with the problems involved in formulating such a rule), allowing them seems to be
the clearly preferred alternative.
In principle, it is possible to have predicates of any number of places: three, four, five —
there is no limit. For example, we might symbolize a statement like Alice is the ambassador from
the United States to China by writing A(a, u, c); that Alice, Bob, Claire and Dave form a quartet
could be symbolized by writing Q(a, b, c, d) and that Elise, Fred, Gertrude, Henry and Iris are the
top five members of their class might be symbolized T(e, f, g, h, i). To keep things simple,
however, we will not consider predicates of more than two places.
Now to put this in more precise form. Make two selections, one after the other, from among
the members of a set (the same member could be selected each time) and record the selections
and the order in which they were made by writing <x, y> where x indicates the first selection and
y the second. We say that <x, y> is an ORDERED PAIR of members of the set of which x and y are
members.* The objects x and y are called COMPONENTS of the ordered pair, and a set of such pairs
is called a RELATION. The idea is that the first components of the pairs in the set are all related in
a given way to the second components of their respective pairs. For example, in the pairs
<Chicago, Minneapolis>
<Miami, Chicago>
<Washington, New York>
*
There is actually a more technical (and more satisfactory) definition of ordered pair, but this
will be enough to get the idea across for our purposes.
the first member of each pair is south of the second member. We may then think of the
expression is south of as naming a set of pairs each of which has as its components two
geographical entities the first of which is south of the second. Similarly, we may take an LPL
predicate, such as S, as naming this set. A statement involving this predicate, say S(c, m), is to be
understood as true if and only if the ordered pair whose first member is the individual designated
by the first individual constant between the parentheses and whose second member is the
individual designated by the second constant is a member of the set designated by the predicate.
So, for example, if c and m name Chicago and Minneapolis respectively, then S(c, m) is true if
and only if the ordered pair <Chicago, Minneapolis> is in the set designated by S. Dyadic
predicates are, for this reason, often called RELATIONAL predicates.
Note that it is not in general the case that <x, y> = <y, x>. (It IS the case when x = y.) Hence it
need not be the case that a statement R(a, b) is equivalent to R(b, a).
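If it helps to see this computationally, here is a brief Python sketch (my own illustration, not part of the text) in which the dyadic predicate S is represented by a set containing just the three ordered pairs listed above:

    S = {("Chicago", "Minneapolis"),
         ("Miami", "Chicago"),
         ("Washington", "New York")}
    c, m = "Chicago", "Minneapolis"

    print((c, m) in S)   # S(c, m) is true: <Chicago, Minneapolis> is in the set named by S
    print((m, c) in S)   # S(m, c) is false: <x, y> and <y, x> are in general different pairs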
Demonstration Problems
1. Symbolize and validate the following argument.
Everybody who loves somebody loves him- or herself.
Alice loves Bob.
Somebody loves him- or herself.
Solution.
Symbolization of Predicates:
L: loves
Symbolization of Individual Constants:
a: Alice
b: Bob
Symbolization of Argument:
("x)[($y)[L(x, y)] ® L(x, x)]
L(a, b)
($x)[L(x, x)]
Validation of Schema:
a. ("x)[($y)[L(x, y)] ® L(x, x)]
b. ($y)[L(a, y)] ® L(a, a)
Premise
UI
c.
d.
e.
f.
Premise
EG
MP, b,d
EG
L(a, b)
($y)[L(a, y)
L(a, a)
($x)[L(x, x)]
a
b
Discussion.
Symbolizing statements like the first premise of the argument in this problem can be tricky
because more than one quantifier is involved and keeping track of scope relations becomes
important. Notice as well that two different variables, x and y, have been used here. Had we
used only one variable, the variable x, the symbolization of the first premise would be
("x)[($x)[L(x, x)] ® L(x, x)]
Notice, however, that we now cannot sort out what is supposed to be bound by what. Leaving
that problem aside, suppose we were to apply UI; we would then obtain
($x)[L(a, a)] ® L(a, a)
which, if it means anything at all, is clearly not what we want. In particular it does not give
us a line that will support the application of MP to obtain line (e).
This example also answers a question that might have come to mind earlier, namely: why do
we have more than one variable? It’s clear, once relational predicates enter the picture, why
we have made provision for more than one.*
2. Show that the following argument is INVALID.
("x)[("y)[("z)[(R(x, y) Ù R(y, z)) ® R(x, z)]]]
("x)[("y)[R(x, y) ® R(y, x)]]
($x)[R(x, x)]
*
In principle, it is necessary to have infinitely many, which we can assure by use of the same
device employed in the case of statement literals, individual constants and predicates.
Solution. Assume a model M with universe U in which the set associated with R is empty
and in which M associates an i.c. with every element of U. Then the conclusion is clearly
false in M; but, by the same token, the premises are both true. (Why?)
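Readers who want to check this mechanically can run the following Python sketch (my own; the four-element universe is an arbitrary choice, since the text leaves U unspecified). With R empty, both universally quantified premises come out vacuously true while the conclusion comes out false:

    U = {"Alice", "Bob", "Claire", "Dave"}   # any nonempty universe will do; this one is my choice
    R = set()                                # the set associated with R is empty

    def r(x, y):
        return (x, y) in R

    premise1 = all((not (r(x, y) and r(y, z))) or r(x, z)
                   for x in U for y in U for z in U)     # ("x)["y)["z)[(R(x, y) Ù R(y, z)) ® R(x, z)]]]
    premise2 = all((not r(x, y)) or r(y, x)
                   for x in U for y in U)                # ("x)["y)[R(x, y) ® R(y, x)]]
    conclusion = any(r(x, x) for x in U)                 # ($x)[R(x, x)]
    print(premise1, premise2, conclusion)                # prints: True True False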
3. Symbolize each of the following statements in LPL.
1. Every human has a mother.
2. No human is his/her own mother.
3. If one human is a sibling of another, then the latter is a sibling of the former.
4. If one human is an ancestor of another and the second is an ancestor of a third, then the
first is an ancestor of the third.
5. If one human is a parent of another then the latter is a child of the former, and conversely.
Solution.
Predicates:
Monadic:
H: is a human
Dyadic:
A: is an ancestor of
C: is a child of
M: is a mother of
P: is a parent of
S: is a sibling of
Statements:
1. ("x)[H(x) ® ($y)[M(y, x)]]
2. ¬($x)[H(x) Ù M(x, x)]
Alternate symbolization: ("x)[H(x) ® ¬M(x, x)]
3. ("x)[H(x) ® ("y)[(H(y) Ù S(x, y)) ® S(y, x)]]
4. ("x)[H(x) ® ("y)[(H(y) Ù A(x, y)) ® ("z)[(H(z) Ù A(y, z)) ® A(x, z)]]]
5. ("x)[H(x) ® ("y)[(H(y) ® (P(x, y)) « C(y, x))]]
4. Symbolize each of the following arguments and validate it by means of a deduction. In
carrying out the symbolization, use the predicates from the previous problem where
appropriate. (At least one additional predicate is necessary.)
1. If one human is a parent of another then the latter is a child of the former, and
conversely.*
Alice and Bob are both human and Alice is one of Bob’s parents.
Alice has a child.
Symbolization:
("x)[H(x) ® ("y)[(H(y) ® (P(x, y)) « C(y, x))]]
H (a) Ù H(b) Ù P(a, b)
__
($x)[C(x, a)]
Validation:
a. ("x)[H(x) ® ("y)[(H(y) ® (P(x, y)) « C(y, x))]]
b. H (a) Ù H(b) Ù P(a, b)
c. H(a) ® ("y)[(H(y) ® (P(a, y)) « C(y, a))]
d. H (a)
e. ("y)[(H(y) ® (P(a, y)) « C(y, a))]
f. H(b) ® (P(a, b)) « C(b, a))
g. H(b)
h. P(a, b) « C(b, a)
i. (P(a, b) ® C(b, a)) Ù (C(b, a) ® P(a, b))
j. P(a, b) ® C(b, a)
k. P(a, b)
l. C(b, a)
m. ($x)[C(x, a)]
2. Every grandparent is a parent of a parent.
Alice is a grandparent of Bob.
__
*
This is statement 5 from the preceding problem.
Premise
Premise
UI
a
a
AE
b
MP
UI
b
AE
b
MP
SE, Def. of «
AE
AE
b
MP
EG
A BEGINNING COURSE IN MODERN LOGIC
214
Alice is a parent.
Symbolization:
Predicate (dyadic):
G: is a grandparent of
Argument:
("x)[($y)[G(x, y)] ® ($z)[P(x, z) Ù ($w)[P(z, w)]]]
G(a, b)
__
($x)[P(a, x)]
Validation (indirect, by Method 2):
a. ("x)[($y)[G(x, y)] ® ($z)[P(x, z) Ù ($w)[P(z, w)]]]
b. G(a, b)
c. ¬($x)[P(a, x)]
d. ("x)[¬P(a, x)]
e. ($y)[G(a, y)] ® ($z)[P(a, z) Ù ($w)[P(z, w)]]
f. ¬P(a, c)
g. ¬P(a, c) Ú ¬($w)[P(c, w)]
h. ¬(P(a, c) Ù ($w)[P(c, w)])
i. ("z)[¬(P(a, z) Ù ($w)[P(z, w)])]
j. ¬($z)[P(a, z) Ù ($w)[P(z, w)]]
k. ¬($y)[G(a, y)]
l. ("y)[¬G(a, y)]
m. ¬G(a, b)
Premise
Premise
Premise
SE, QN
UI
UI
OI
SE, DM
UG
SE, QN
MT
SE, QN
UI
a, b
a
d
c
e, j
Contradiction
(see line (b))
Alternate validation (without UG):
Lemma A
¬($x)[P(a, x)]
¬($z)[P(a, z) Ù ($w)[P(z, w)]]
Validation of lemma (indirect, by Method 2):
a. ¬($x)[P(a, x)]                                 Premise       a
b. ¬¬($z)[P(a, z) Ù ($w)[P(z, w)]]                Premise
c. ($z)[P(a, z) Ù ($w)[P(z, w)]]                  SE, CR
d. ("x)[¬P(a, x)]                                 SE, QN
e. P(a, b) Ù ($w)[P(b, w)]                        EI
f. P(a, b)                                        AE
g. ¬P(a, b)                                       UI, d         Contradiction
Validation of main schema (indirect, by Method 2):
a. ("x)[($y)[G(x, y)] ® ($z)[P(x, z) Ù ($w)[P(z, w)]]]
b. G(a, b)
c. ¬($x)[P(a, x)]
d. ($y)[G(a, y)] ® ($z)[P(a, z) Ù ($w)[P(z, w)]]
e. ¬($z)[P(a, z) Ù ($w)[P(z, w)]]
f. ¬($y)[G(a, y)]
g. ("y)[¬G(a, y)]
h. ¬G(a, b)
a
c
b
Premise
Premise
a, b
Premise
UI
a
Lemma A
c
MT
e, h
SE, QN
UI
Contradiction (see line (b))
We will conclude our discussion of PL by considering a question that may have occurred to you.
In Chapter 2 we presented a method for determining the validity of arguments containing
quantifiers, that of Venn diagrams. Why, then, have we invested so much effort in presenting an
entirely different approach? In particular, why should we have to go through the process of
deduction or model construction when we could just use Venn diagrams?
The answer is that while we can indeed use the Venn diagram method for SOME of the various
kinds of arguments we have considered, we cannot use this method in ALL such cases. As a case
in point, consider the following valid argument:
Fido is a dog.
Fido’s teeth are dog’s teeth.
The following (trivial) diagram depicts the situation described by the premise of the argument:
Fig. 17.1
But now the process comes to a dead halt: there is no way to read from this diagram anything
about the relationship between Fido and his teeth. On the other hand, using PL we can show the
validity of this argument. Symbolizing Fido by f, is a tooth by T, is a dog by D and possesses by
P, we can symbolize the argument as follows:
D(f)
___
("x)[(T(x) Ù P(f, x)) ® ($y)[D(y) Ù P(y, x)]]
Validation (indirect, by Method 1):
a. ¬ ("x)[(T(x) Ù P(f, x)) ® ($y)[D(y) Ù P(y, x)]]
Premise
b. ($x)[¬ ((T(x) Ù P(f, x)) ® ($y)[D(y) Ù P(y, x))]]
SE, QN
c. ¬ ((T(a) Ù P(f, a)) ® ($y)[D(y) Ù P(y, a))]
EI
d. T(a) Ù P(f, a) Ù ¬($y)[D(y) Ù P(y, a)]
SE, Def. of ®
e. ¬($y)[D(y) Ù P(y, a)]
AE
f. ("y)[¬(D(y) Ù P(y, a))]
SE, QN
g. ¬(D(f) Ù P(f, a))
UI
h. ¬D(f) Ú ¬P(f, a)
SE, DM
i. P(f, a)
AE
j. ¬D(f)
DS
a
d
But now for another question that might well have occurred to you. Fido is a complex entity,
made up of parts. Is it not, therefore, plausible to think of him as a kind of set — namely the set
of these parts (his teeth included)? In such a case we might be able to validate our argument
diagrammatically as follows (at considerable savings in labor):
Fig. 17.2
For the sake of argument, let’s grant that it makes sense to think of a complex entity as a set
whose members are the various parts of this entity.* Then we can use the diagrammatic method
in this particular case. But we cannot do so in all cases, as the following equally valid argument
will show:
Fido is a dog.
Fido’s owners are dog owners.
Now we are out of luck, for Fido’s owners surely cannot be parts of Fido himself. But we can
validate the argument in much the same way we validated the earlier one. If O symbolizes owns
then the symbolization of the argument is
D(f)
("x)[O(x, f) ® ($y)[D(y) Ù O(x, y)]]
Validating deduction (by Method 1):
a. ¬("x)[O(x, f) ® ($y)[D(y) Ù O(x, y)]]
b. ($x)[O(x, f) Ù ¬ ($y)[D(y) Ù O(x, y)]]
c. O(a, f) Ù ¬ ($y)[D(y) Ù O(a, y)]
d. ¬ ($y)[D(y) Ù O(a, y)]
e. ("y)[¬D(y) Ú ¬O(a, y)]
f. ¬D(f) Ú ¬O(a, f)
g. O(a, f)
h. ¬D(f)
Premise
SE, QN, Def. of ®
EI
AE
SE, QN, DM
UI
AE, c
DS
a
Yet another obvious advantage to the deductive approach to PL is that we can carry over the
principles that we originally developed for SL — whereas SL also does not lend itself to the
Venn diagram technique for validation. There turns out, in other words, to be a greater unity to
SL and PL than there might appear to be if we confined ourselves to the techniques of Chapter 2
in regard to the latter.
There is also an important lesson to be learned here about diagrams and pictorial methods in
general: they are often useful for certain purposes but they also have limitations. Not every
*
Notice that in one very important respect the diagram in Fig. 17.2 is misleading since it implies
that the elements of the set we have identified with Fido are also elements of the set of dogs —
which they clearly aren’t (no dog’s tooth is itself a dog).
situation can be conveniently reduced to a visual image. There is nothing wrong with making use
of such images where they’re applicable — as we indeed did in the early stages of this project —
but we can’t allow ourselves to become wholly dependent on them. The deductive method is
much more general in its applicability, and that is the reason we have made it our main focus of
attention.
A further question that might have occurred to you is whether there is an analogue to the
truth table technique by which to test arguments in PL for validity — a decision procedure, to
use the term introduced at the end of Chapter 11. The answer to this question is negative, though
the reasons require mathematical background that goes beyond what can be presented in this
course. This is yet another reason for our having, from early on, put the primary emphasis on
deductive methods.
New Terms
monadic (one-place) predicate
dyadic (two-place) predicate
relation
Problems
1. Validate the schema
R(a, b)
($x)[R(x, b)] ® R(b, b)
R(b, b)
*2.
Symbolize and validate the argument
If anyone loves Bob, everyone does.
Alice loves Bob.
Bob loves himself.
*3.
Explain what is wrong with the deduction shown below, which purports to validate the
invalid schema
($x)[("y)[R(x, y)]]
("y)[($x)[R(y, x)]]
a. ($x)[("y)[R(x, y)]]
b. ("y)[R(a, y)]]
Premise
EI
c. R(a, b)
d. ($x)[R(a, x)]
e. ("y)[($x)[R(y, x)]]
UI
EG
UG
219
a
b
4. Validate the schema
($x)[("y)[R(x, y)]]
("x)[($y)[R(y, x)]]
5. Validate each of the following:
(i) ($x)[R(x, a)]
("x)[R(x, a) ® B(x)]
¬B(a)
___
($x)[B(x)] Ù ($x)[¬B(x)]
(ii) ("x)[R(x, a) ® ¬R(a, x)]
¬R(a, a)
(iii) ("x)[("y)[R(x, y) ® R(x, a)]]
("x)[B(x) ® R(x, b)]
("x)[B(x) ® R(x, a)]
(iv) ("x)[("y)[(A(x) Ù B(y, x)) ® C(y)]]
A(a)
_
("x)[B(x, a) ® C(x)]
6. Symbolize and validate the argument
No horse has six legs.
___
Every horse owner owns a non-six-legged creature.