AD HOMINEM I
• Ad hominem =df. Attacking a person rather than his
or her argument, view, or position on an issue.
• ‘Ad hominem’ is Latin for ‘to the man,’ and the idea in
this kind of fallacy is that criticisms are directed
towards the person rather than to the person’s thoughts
about or arguments for a particular view.
• Ad hominem reasoning is fallacious since, logically, the
faults of the person are one thing and the defects in
what he or she says are quite another.
• The faults of the individual don’t automatically attach
themselves to what he or she says.
AD HOMINEM II
• Joe may be a right wing radical who is anti-government and
against gun control, and I might disagree with such attitudes, but
that in itself will not prove that his reasoning on a particular
political issue is automatically wrong.
• Jane may be a liberal feminist who is pro-choice in abortion, and
I might think that such views are wrong, but that alone will not
show that whatever she says about social issues is either right
or wrong simply in virtue of her holding these opinions.
• Rather, her argument for or against a certain social issue must be
assessed on its own merits, and so independently of her personal
beliefs.
• Thus arguments have to be assessed on their own as arguments,
and so independently of our attitudes towards the person by
whom the argument is given.
PERSONAL ATTACK AD HOMINEMS I
• Disliking either a person or something negative about
him and finding critically acceptable reasons to
reject a claim that he makes are two different things.
• We commit the error or fallacy of reasoning called
‘ad hominem’ when we reject a person’s claim merely
because we dislike the person or something about the
person.
– This version of ad hominem is called personal attack.
– Because we simply reject a claim made by a person or group
because we dislike the person or group, personal attack ad
hominem pseudoreasoning is much more emotional or
psychological than it is rational or logical.
PERSONAL ATTACK AD HOMINEMS II
• If we think that a person’s negative characteristics
are relevant to rejecting a claim she makes, then it
may be plausible to reject the claim.
– However, we must be able to explain the relation of those
characteristics to the claim being made so that rejecting
the claim for this reason can be made plausible.
– For instance, if we know that a person x is a hater of
another class of people y, then any negative claim about y
made by x may be reasonably regarded as being suspicious
and likely to be false.
– Thus, further information may be needed to assess the
claim accurately.
CIRCUMSTANTIAL AD HOMINEMS
• Circumstantial ad hominem =df. A fallacy in which a person’s
claim is rejected based upon the circumstances of the person
making the claim.
• In circumstantial ad hominem the attack is not on the person, but
her circumstances, and it is inferred that a claim made by the
person is false because someone in her position or circumstances
would make such a claim.
– For instance, thinking that a tax cut proposed by a wealthy congressman
cannot be good for the average person because of his position of wealth is
a circumstantial ad hominem.
– Rejecting Bill’s arguments against the death penalty because he is sitting
on death row is also circumstantial ad hominem.
• Personal attack and circumstantial ad hominems overlap, and
trying to distinguish between them in particular instances can be
pointless.
• The main thing to recognize is that, if it is an ad hominem, then
it is fallacious.
PSEUDOREFUTATION I
• Pseudorefutation =df. A type of ad hominem based on charges of
inconsistency where no relevant inconsistency exists.
• Relevant inconsistencies are inconsistencies between claims.
• If Jane says that it is both snowing and not snowing at the same time in
the same place, then we can reject her claim as logically inconsistent.
• But if there are no inconsistencies between claims, then the fact that a
person may not act (or may not always have acted) as if the claim is
true does not allow us to infer that the claim is false or even that the
person thinks it is.
– For instance, we cannot reject Jane’s claim to love Jim simply by noting that she
has never acted as if she loved Jim.
– And we can’t then infer that she herself believes her claim about her love for Jim
to be false. Either would be pseudorefutation.
• We must also recognize that a person can change his mind.
– Thus, a person does not contradict herself by now agreeing with the morality of
euthanasia when she earlier rejected it.
– To say that this is contradictory is pseudorefutation.
PSEUDOREFUTATION II
• A child might be punished by her parents for cheating on a test
because she is told by them that “cheating is morally
reprehensible.”
• For the child later to reject the claim that cheating is immoral
because she finds out that her parents cheat on their taxes would
be an example of pseudorefutation.
• She has not used reasoning to reject the view, but has thought
that “if it’s okay for you it’s okay for me.”
• She could rightly accuse her parents of being hypocritical, but
their behavior is not itself a reason for finding cheating
acceptable.
• The child might cheat again and then, after being caught, attempt
to justify her actions by saying “you do it too!”
• This brand of pseudorefutation is called tu quoque or “you too”
pseudoreasoning.
“POSITIVE” AD HOMINEM I
• To this point we have looked at ad hominem arguments in
relations to something negative about a person, and have seen
that, in abusive or personal attack ad hominem, a person’s claim
is rejected merely because we dislike the person or something
about the person.
• However, it is also fallacious to accept an argument merely
because we like the person or something about the person.
• This form of pseudoreasoning does not have a name in the
literature, but it is an ad hominem because a person’s claim is
accepted, not on the basis of critically acceptable arguments, but
because of an agreeable characteristic(s) which the person has.
• Accordingly, it is once again emotional or psychological rather
than rational or logical.
“POSITIVE” AD HOMINEM II
• However, as before with negative characteristics, if we
think that a positive characteristic(s) of a person is
relevant to accepting a claim she makes, then it may be
plausible to accept the claim.
• However, we must be able to explain the relation of
those characteristics to the claim being made so that
accepting the claim for this reason can be made
plausible.
– For instance, if we know that Jane is a lover of classical music,
and her love has made her listen carefully and extensively to
such music, then a claim which she makes about the worth of a
particular classical composition might be accepted
hypothetically as likely to be true, at the same time that further
information may be needed to assess the claim accurately.
POISONING THE WELL I
• Poisoning the well =df. Attempting to
discredit in advance what a person might
claim by relating unfavorable information
about the person.
• This is a kind of pseudoreasoning which “can
be thought of as an ad hominem in advance.”
– For instance, A says: “B’s loyalty is to her political
party and therefore anything she says about a
particular issue will simply reflect the party line.”
– The idea is that B’s thinking about an issue then
need not be considered on its own, and so A has
poisoned the well of discourse.
POISONING THE WELL II
• When A poisons your mind about B by relating
unfavorable information about B, you may be
inclined to reject what B says to you.
• Psychological studies show that even a statement
such as “It is not true that Professor x grades
unfairly” may bias our thinking about someone in
advance.
• However, a critical thinker must be extra careful
not to reject what a person says just because we
have an unfavorable impression of the individual.
GENETIC FALLACY I
• Genetic fallacy =df. Rejecting a claim, policy, or
position on an issue simply because of its source,
associations, or history.
– For instance, a member of political party x commits the genetic
fallacy when he rejects an idea simply because it came from a
member of a different party y.
• Ad hominem pseudoreasoning is a kind of genetic
fallacy since here we are rejecting a person’s claim based
on where it comes from, namely, that person.
– For instance, to reject the music of Richard Strauss, to say that
it cannot be good, simply because it is said that Strauss was
sympathetic to the Nazis is to commit the genetic fallacy.
GENETIC FALLACY II
• When we reject a claim, policy, or position
just because of its source, associations, or
history, we commit the genetic fallacy.
• However, we also commit a genetic fallacy
if we accept a claim, policy, or position
only on the basis of its source,
associations, or history.
BURDEN OF PROOF I
• (Inappropriate) burden of proof =df. The burden of
proving an issue is placed on the wrong side of an issue, or
is placed too heavily on one side rather than another.
• When person x makes a claim that a certain thing t is true,
and another person y disagrees and thinks that that thing is
false, one side or the other has the burden of proof: Does x
have the burden of proving that t is true, or does y have the
burden of proving that t is false?
• It depends on the nature of the claim.
– It is legitimate to talk of burden of proof concerning claims.
– Burden of proof names a fallacy or a kind of pseudoreasoning only
when the burden of proof is placed on the wrong side or too heavily on
one side: when x should have the burden of proof, but the burden
of proof is placed on y, or the other way around.
BURDEN OF PROOF II
• Initial plausibility. The general rule that most often
governs the placement of burden of proof is: The less
the initial plausibility a claim has, the greater the
burden of proof on the one making the claim.
– For instance, if I assert that there is a rhinoceros in the room
with us, and there is no evidence of that, then I have the burden
of proving that it is in the room.
– This is because any claim which conflicts with your own direct
observations is open to serious doubt.
– Accordingly, the burden of proving that a rhino is not in the
room would be inappropriately placed on you.
– To say: “Prove that it is not in the room” would be the fallacy of
(inappropriate) burden of proof.
BURDEN OF PROOF III
• Affirmative/negative. Other things being equal, the
burden of proof falls automatically on those supporting
the affirmative side of an issue rather than on those
supporting the negative side.
– The basic idea is that reasons should be given for why
something is the case rather than for why something isn’t the
case.
– For instance, a person who maintains that global warming is
due to industrial pollution has the burden of proving that that is
the cause and not something else.
– The affirmative/negative rule also applies to existence versus
nonexistence, so that a scientist who maintains that life exists
elsewhere in the universe has the burden of proving that it does.
BURDEN OF PROOF IV
• In general, the affirmative side gets the burden of proof
because it tends to be much more difficult – or at least much
more inconvenient [and perhaps impossible in some cases]
– to prove the negative side of an issue.
• It may be possible to prove the negative claim that there is
no perceptible rhino in the room, but not that there is no
imperceptible rhino in the room.
• Still, there is no reason which favors such an assertion, and
so it should be rejected.
• Saying “No one has proved that there are no invisible
rhinos, therefore it is perfectly acceptable to believe in
them” is an appeal to ignorance.
• An appeal to ignorance =df. Saying that absence of
evidence against a claim counts as evidence for that claim.
BURDEN OF PROOF V
• Special circumstances. Sometimes getting at the truth
is not the only thing we want to accomplish, and on
such occasions we may purposely place the burden of
proof on a particular side.
– For instance, in our system of justice a person is presumed
innocent until proven guilty, and the burden of proving that an
accused is guilty rests with the prosecution.
• Also, it is reasonable to place a higher burden of proof
on someone who advocates a policy that could be
dangerous or costly if he or she is mistaken.
– For instance, one would place a high burden of proof on
someone who recommends a risky health procedure or
substantial financial investment.
STRAW MAN
• Straw man =df. Ignoring an opponent’s actual
position, and presenting a distorted,
oversimplified, or misrepresented version of that
position in place of that position.
– For instance, saying that “John’s view about abortion is
simply that no one has a right to tell a woman what to do
with her body” is straw man rhetoric if John’s position
about the abortion issue and the rights of the mother
versus the rights of the child is considerably more
sophisticated than this.
FALSE DILEMMA I
• False dilemma =df. Limiting consideration to
only two alternatives when in fact there are others
that deserve consideration.
• Thus, to think that x or y are the only options,
when in fact there is at least one other option z,
is to commit the fallacy of false dilemma.
– For instance, saying that either we replace the school
cafeteria with fast food franchises or students will not eat
lunch on campus.
– However, another alternative which could be considered
would be to improve the food at the cafeteria.
FALSE DILEMMA II
• False dilemma also occurs when we think
that, of two alternatives x and y, one must be
true and the other false when both x and y
could be false.
• If this is the case, then to say that, because y
is false, x must be true, is to commit the
fallacy of false dilemma.
– For instance, saying that “Either party x is going
to have to get control of Congress in the next
election or we are going to have war” is a false
dilemma in that it might both be false that party x
gets control of Congress and false that we have
war.
FALSE DILEMMA III
• Recall that presenting a distorted, oversimplified, or
misrepresented version of a person’s position is called
“straw man” because the position of the real man is
replaced with a position which he or she does not hold
– hence the position is as false as a straw man is unreal.
• False dilemma pseudoreasoning can involve straw man
pseudoreasoning.
– A person who attempts to get someone to accept position x may
present an alternative position y in such a way that y is
distorted, oversimplified, or misrepresented.
– For instance, a candidate who is trying to get you to vote for her
may misrepresent her opponent’s position so badly that the
position described is that of a straw man.
FALSE DILEMMA IV
• A false dilemma is opposed to a true dilemma, one in
which only two alternatives truly present themselves,
as when a physician tells a patient “Either you have
surgery or you will die.”
• False dilemma pseudoreasoning only occurs when
reasonable alternatives are ignored.
• Before you accept x because some alternative y is
false, make certain that x and y cannot both be false;
look for some third reasonable alternative, some way
of rejecting y without having to accept x.
• The either x or y alternative characteristic of false
dilemma can also be stated as “if not x, then y.”
THE PERFECTIONIST FALLACY
• The perfectionist fallacy says “either x is
perfect or it must be rejected.”
– Saying that no piece of legislation should ever
be passed which will not deal exactly with
every problem which it is designed to concern
is an example of the perfectionist fallacy.
– The perfectionist fallacy is a version of false
dilemma.
THE LINE-DRAWING FALLACY
• The line-drawing fallacy =df. Insisting that a line must be
drawn at some precise point when in fact it is not necessary
that such a line be drawn.
– For instance, if we cannot tell exactly how many grains of sand it
takes to have a heap, then we can never say that a number of such
grains is or is not a heap.
– But clearly some things are heaps and some are not, and this is
true even if we can’t specify exactly where to draw the line
between a heap and a number of grains which are not a heap.
• We can treat the line-drawing fallacy as yet another version
of false dilemma.
– The claim is “either there is a precise place where we draw the line,
or else there is no line to be drawn.”
– But there can be a third alternative, namely, that while the line
can’t be drawn in a precise place, still it can be drawn.
– We can make distinctions even in a “fuzzy” world.
SLIPPERY SLOPE I
• Slippery slope =df. Thinking that some thing x
must lead to some other thing y when there is no
logical necessity in y’s following from x, nor is
there any argument given for the necessity.
• Saying that euthanasia should not be legalized
since, if it were, it would inevitably lead to abuses
where people would be euthanized who did not
really want it, is an example of slippery slope
pseudoreasoning.
• The fallacy of slippery slope occurs when there is
no reason to think that x will lead to y and yet it is
maintained that x will lead to y.
SLIPPERY SLOPE II
• A second version of slippery slope occurs when
someone claims we must continue a certain course of
action because we have already begun that course.
– For instance, maintaining that we must continue the program of
military build-up because it has already begun, and so certain
industries are counting on it financially, is slippery slope
pseudoreasoning if it can’t be shown that there are other more
substantial reasons for continuing the build-up other than those
which are simply related to the fact that it has already begun.
• Any force that slippery slope pseudoreasoning has is
psychological rather than logical.
– A thought of y may follow from a thought of x even though y
does not have to follow x in fact.
BEGGING THE QUESTION I
• Begging the question =df. Assuming to be true
what you are trying to prove.
• Obviously, if you assume something in an
argument for it, then you have not proved it.
– For example, if you were to attempt to argue for realism (the
view that objects in the external world, such as tables, exist
apart from perception) by saying that, if realism were false,
you would not then see your table when you enter your
kitchen, you would be guilty of begging the question.
– The issue is whether or not objects exist unperceived, but that
is already assumed in the premise that, apart from realism,
you would not perceive your table on entering your kitchen.
– Therefore the conclusion that realism is true repeats the truth
of realism assumed in the premise.
BEGGING THE QUESTION II
• Begging the question is also known as reasoning in a
circle because one returns to a premise in the
reasoning in which the thing to be proved is assumed.
• In the preceding example, the thing to be proved is
that objects exist unperceived, but that they do so
exist is itself returned to (or assumed) in the premise
that you would not perceive what you do perceive
unless objects like tables exist unperceived.
• But if it is legitimate to question the conclusion, as it
is here, then it is legitimate too to question the
premise which the conclusion repeats, which is why a
question begging argument is not proof, but is a form
of pseudoreasoning.
BEGGING THE QUESTION III
• Persuasive definitions can beg questions.
– Defining capital punishment as “state-sanctioned murder”
assumes the immorality of the death penalty in that definition.
– However, the definition, and its implicit view that capital
punishment is immoral, need not be accepted.
• The real problem in cases of question begging is a
misunderstanding of what premises (and definitions)
it is reasonable for one’s audience to accept.
• We are guilty of begging the question when we ask
our audience to accept premises that are as
controversial as the conclusion we’re arguing for and
are controversial on the same grounds.
COMMON GROUND
• If you ever hope for any measure of success in trying
to convince someone of a claim, you should always try
to argue for it based on whatever common ground you
can find between the two of you.
• The attempt to find common ground from which to start
is what underlies the entire enterprise of rational
debate.
– For instance, two thinkers cannot debate the existence of God if
no definition of ‘God’ can be agreed upon by them.
– Even though the theist and the atheist would support different
positions on the question of theism, their common ground
would be agreeing on what the term ‘God’ means.
– If no agreement could be found here, then the rational debate
could not take place.