In this lecture, I’m going to talk about how to reason logically and about fallacies in arguments. Why? On the one hand, probably the best way to learn to argue and assess arguments is just to spend a lot of time doing those things. We’re going to spend a lot of time doing that in this class. However, it’s good to have a few basic tools to start out with.
The first of these is logic. When I talk about logic, it’s important to emphasize that philosophers don’t use the word the way ordinary people do. When ordinary people talk about logic, often they’re talking about something that a professional 21st century philosopher would call “reason” or “rationality.” Logic, for the ordinary person, is exemplified by Spock from /Star Trek/.
When philosophers talk about logic, all they mean is the relationship between claims: which claims follow from which others. The prime example of a logical inference that gets used in every philosophy 101 class is this:
(1) All men are mortal
(2) Socrates is a man
(3) Therefore, Socrates is mortal
The key relationship among these claims is just this: if the first two are true, the third cannot possibly be false. You might have worries about the first two. In the case of (1), you might worry that all we really know is that the vast majority of men who’ve ever lived have turned out to be mortal, eventually. That’s a worry philosophers have had a lot to say about, but for now just note that it’s a worry you might have: Socrates could turn out to be the exception to the rule.
Similarly, maybe Socrates isn’t a man. Maybe, like Elvis, he was really an extraterrestrial in disguise, and he didn’t die but rather just went home. Or maybe he was a god in disguise. Or maybe he was the next stage in evolution, a non-mortal, non-man stage. These possibilities–very distant possibilities, you’ll probably think–aren’t the point. The point is just that if it’s really true that all men are mortal, and Socrates really is a man, then Socrates is mortal.
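If it helps to see the bare form, here is one standard first-order rendering (the notation is mine; the original argument doesn’t come with symbols attached). The turnstile just marks that the conclusion follows from the premises:

$$\forall x\,(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)),\quad \mathrm{Man}(\mathrm{socrates}) \;\vdash\; \mathrm{Mortal}(\mathrm{socrates})$$

Any argument with this shape is valid, whatever you plug in for "Man," "Mortal," and "socrates"; whether the premises are actually true is a separate question.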
A key feature of philosophical logic is that it only works if you’re very literal about the sentences. My favorite example of this comes from anthropological work on an isolated African group:
>Experimenter: Flumo and Yakpalo always drink cane juice [rum] together. Flumo is drinking cane juice. Is Yakpalo drinking cane juice?
>Subject: Flumo and Yakpalo drink cane juice together, but the time Flumo was drinking the first one Yakpalo was not there on that day.
>Experimenter: But I told you that Flumo and Yakpalo always drunk cane juice together. One day Flumo was drinking cane juice. Was Yakpalo drinking cane juice?
>Subject: The day Flumo was drinking cane juice Yakpalo was not there on that day.
>Experimenter: What is the reason?
>Subject: The reason is that Yakpalo went to his farm on that day and Flumo remained in town on that day.
The confusion in this dialogue comes from the fact that the experimenter expected his words to be taken literally, especially the word “always”, while the subject understood “always” in a way that allowed exceptions to the rule. Most of us Yankees, I think, can be literal enough to answer in the expected way in situations like the one I described. But few if any people are absolutely literal in everyday life, even though strict logic depends on being literal.
Why study logic if it’s so often disconnected from everyday life? Well, if you’re in an argument with someone, and they make a long speech about why they’re right, then rather than responding in a very vague way, you can break it down into logical steps and try to see which step fails. If you can’t break their argument down, and they can’t either when asked, that’s a good hint that their supposed arguments don’t really support their conclusion. Also, if you’re arguing for a claim, rather than throw off some vague reasons to accept it, you can lay out a set of assumptions that seem perfectly obvious and move logically from them to your conclusion. At that point, anyone who wants to deny your conclusion must deny one of your assumptions, and you can insist they explain which one they reject and why.
Logical thinking can be more or less intuitive depending on the context. If you want to get good at logic, learn to transfer patterns of reasoning from the intuitive cases to the unfamiliar ones. For example, suppose you have four cards laid out on the table. In order, they have the following on their visible sides: 7, D, E, 12. Suppose you know that all cards have a letter on one side and a number on the other. Now consider the following rule: “if a card has ‘E’ on one side, it must have a ‘7’ on the other side.” To figure out whether this rule holds, which cards do you have to turn over?
Most people will say that you have to turn over 7 and E. This is wrong. Actually, you have to turn over E and 12. Think: I never said that if a card has “7” on one side, it must have “E” on the other, so the 7 card can’t violate the rule no matter what’s on its back. And the “12” card could turn out to have an “E” on the other side, in which case it would violate the rule.
But consider this situation. Suppose you’re at a place that serves alcohol. They don’t card at the door, but they do try to enforce the rule that you have to be 21 to have alcohol. Suppose there are four people with drinks, about whom you have incomplete information: you know the first is 18, the second is 22, the third is drinking Diet Coke, and the fourth is drinking beer. Whose ages or drinks do you have to check to make sure the rule is being followed? Most people have no trouble getting the right answer: you check the 18-year-old and the one drinking beer. Notice, though, that logically the two problems I just gave are the same.
Saying that the “7” card must have an “E” on the back is like saying that once you’re 21, you’re no longer allowed to drink soda. But people who would never make the second mistake make the first one. There are various psychological theories on why this happens, but for your purposes, the key point is this: you can learn to reason logically about unfamiliar issues by analogy with familiar ones.
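If you like, you can check the card version mechanically. Here’s a minimal Python sketch (the card layout matches the example above, but the encoding and function names are my own) that finds which visible faces could possibly hide a violation of the rule “if a card has E on one side, it must have 7 on the other”:

```python
# Visible faces of the four cards from the example above.
visible = ["7", "D", "E", "12"]

letters = {"D", "E"}   # possible letter faces
numbers = {"7", "12"}  # possible number faces

def violates(letter, number):
    # The rule is broken exactly when E is paired with a number other than 7.
    return letter == "E" and number != "7"

must_turn = []
for face in visible:
    if face in letters:
        # Hidden side is some number; could any choice break the rule?
        could_violate = any(violates(face, n) for n in numbers)
    else:
        # Hidden side is some letter; could any choice break the rule?
        could_violate = any(violates(l, face) for l in letters)
    if could_violate:
        must_turn.append(face)

print(must_turn)  # ['E', '12']: the only cards that could hide a violation
```

Run the same loop with “drinking beer” in place of “E” and “at least 21” in place of “7,” and it picks out the beer drinker and the 18-year-old, which is just the bar version of the same check.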
Now fallacies. How many of you are familiar with the concept? If you’ve ever spent time arguing on the internet, you’ve probably heard someone accused of using a fallacy at some point, even if they didn’t use the word “fallacy.” If you haven’t spent time arguing on the internet, I strongly recommend it. There are plenty of people online who would be bad role models for learning to reason, but it’s still worth experiencing how back-and-forth arguments on bulletin boards and blogs play out.
What are examples of fallacies? How many people have heard of an “ad hominem attack”? Of making an “appeal to authority?” Of “circular reasoning,” also known as “begging the question”?
Here are some definitions, taken straight from a web page that came up when I Googled “fallacies”:
*Ad Hominem: You commit this fallacy if you make an irrelevant attack on the arguer and suggest that this attack undermines the argument itself.
*Appeal to Authority: The Appeal to Authority uses admiration of a famous person to try and win support for an assertion. For example: “Isaac Newton was a genius and he believed in God.”
*Circular reasoning: This fallacy occurs when the premises are at least as questionable as the conclusion reached. Typically the premises of the argument implicitly assume the result which the argument purports to prove, in a disguised form.
I could spend a lot of time drilling definitions of fallacies into you. However, I don’t think that would help much. Judging from internet debates, most people are perfectly capable of learning to throw the terminology around at will. And if it’s definitions you care about, there are at least a couple dozen web pages consisting of lists of fallacies and their definitions.
Instead, I want to talk about the following idea: the key to learning to spot fallacious arguments is learning when not to see them, namely when they aren’t there. Consider: online, lots and lots of insults get used. A good portion of the time, these lead to complaints about ad hominem attacks. But are they really ad hominem? According to the definition I just gave, an insult isn’t an ad hominem attack unless it’s suggested that the insult is a reason for dismissing someone’s claims or arguments. And how often do you really hear someone say, “Joe’s arguments for tax cuts are wrong because he’s a poopy head”? Last lecture, I suggested that many people in the media do a lousy job of arguing claims. But I wasn’t committing the ad hominem fallacy; I was just stating something I thought we should be concerned about.
Sometimes there are even better reasons to say something a bit insulting. If someone has a record of misrepresenting the relevant science, or of lying outright about the facts, calling them a fool or a liar isn’t out of line. At some point it becomes reasonable simply to ignore what they say rather than continue to waste time fact-checking them.
The issues with ad hominem attacks are tied up with those around appeals to authority. The stereotyped image of an appeal to authority that many philosophers have, or at least used to have, is the medieval or Renaissance thinker going around saying that because Aristotle or Ptolemy or whoever said something, it must be true. They said the Earth was the center of the universe, so it must be true. But wait: the ancient Greeks may have gotten a lot of things wrong, but we also have up-to-date modern experts, whom we generally think we can trust about a lot of things. At least in chemistry and physics, we pretty much take scientists at their word on the basic findings of those disciplines. Biology and medicine are pretty much the same, except in a few special controversies like evolution and alternative medicine. The social sciences and history get really contentious, but we still all believe the Declaration of Independence was signed in 1776, even if we can’t verify it for ourselves.
So what should we think? I’ll say one thing flat out: there are no good appeals to authority in philosophy. As I said before, philosophers generally don’t know what they’re talking about. A professor of mine once said that no interesting issue in philosophy is uncontroversial. From what I’ve seen, controversies that seem to be settled often get reopened a few decades later, and people feel silly about having thought them settled. Perhaps we can say that for an appeal to the experts to work, there has to be expert consensus.
Another issue: bias. Sometimes, in debates about religion, you hear claims about what is supposedly the consensus of Biblical scholarship. Some might think that if we can trust historians in general, we can trust Biblical scholars. My personal opinion is that too many Biblical scholars went into the field for religious reasons, and even “skeptical” ones are often pushing liberal Protestantism. A few things seem well established, but on any big controversy the issue of bias keeps the word of scholars from being decisive. Or take evolution: some creationists accuse evolutionists of being closed-minded atheists who only proposed their theory to destroy religion. This claim may be blatantly false, but it shouldn’t be dismissed as irrelevant: non-biologists can take these things into account when deciding whether to trust biologists on evolution. This is the tie-in with ad hominem attacks: once an appeal to authority has been made, attacks on the authority’s credibility become fair game.
Finally, circular reasoning. In its most blatant form, it’s saying something like “abortion is immoral because abortion is immoral.” A slightly more plausible example is when someone says, “take my word for it: you can trust me.” If you don’t trust them, you have no reason to take their word for it. But the definition I gave you is more subtle; that’s why I picked it out from other possible definitions. For example, what does it mean to “implicitly assume” the conclusion? It seems like you need something like that to explain why saying “take my word for it: you can trust me” is circular, but what else counts?
Or, though it seems like a good rule, how do we judge whether a premise is as controversial as the conclusion? Suppose you believe that God’s commands are the source of morality, and I say to you “that can’t be right, because if it were, God could order the mass-murder of children and make that moral. But the mass-murder of children can’t be moral, especially not on someone’s say-so.” And suppose you say, “I agree with your first premise, that if morality depends on God’s commands he could make it moral to massacre children. But it’s a basic feature of my moral view that God can do this. So in saying ‘the mass-murder of children can’t be moral,’ you’re just begging the question against my views.” Does this reply make any sense? The exact problems with circular reasoning are one of the most interesting issues in philosophical debate. I’ll say more about those issues in the future, once we’ve spent more time analyzing arguments in a straightforward way.
To wrap up, I want to integrate what I’ve said about logic and fallacies. Sometimes you hear the phrase “logical fallacy” used to refer to any fallacy whatsoever, but there are a few fallacies that specifically have to do with logic.
Consider this claim: If something is a human being, it must be an animal. From this, if we know that Ginger is a human being, we can infer that she’s an animal. Conversely, if we know that Ginger isn’t an animal, she can’t be a human being, and we might wonder if in this context someone has given the name “Ginger” to a teddy bear or potted plant. But notice what we can’t infer: we can’t infer that Ginger is a human being just from knowing that she’s an animal. And we can’t infer that Ginger isn’t an animal just from knowing that she isn’t a human being. In each case, “Ginger” might refer to someone’s house cat. These last two mistaken forms of reasoning are called “affirming the consequent” and “denying the antecedent,” respectively. But you have to be careful in accusing people of committing these two fallacies.
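Laid out schematically (these are the standard textbook names; the letters are mine), with P for “Ginger is a human being” and Q for “Ginger is an animal,” the two good inferences and the two fallacies look like this:

$$\begin{aligned}
&\text{Modus ponens (valid):} && P \rightarrow Q,\; P,\;\; \text{therefore } Q\\
&\text{Modus tollens (valid):} && P \rightarrow Q,\; \neg Q,\;\; \text{therefore } \neg P\\
&\text{Affirming the consequent (fallacy):} && P \rightarrow Q,\; Q,\;\; \text{therefore } P\\
&\text{Denying the antecedent (fallacy):} && P \rightarrow Q,\; \neg P,\;\; \text{therefore } \neg Q
\end{aligned}$$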
For example, suppose someone says, “Hawks claimed the Iraq war was a good idea because Iraq had illegal WMDs, and we should take military action when necessary to stop the spread of illegal WMDs. But Iraq didn’t have WMDs.” Are they committing the fallacy of denying the antecedent? Well, maybe they just want to undermine the positive arguments for the view, without insisting it’s definitely false. Or they might be relying on the common assumption that a war whose main justification is bogus is an unjust war. They aren’t necessarily committing a fallacy.
Another example: your physics teacher tells you that we know Einstein’s theory of relativity is true because, if it is true, we should be able to make all kinds of specific experimental observations, and we can in fact make those observations. This sounds like affirming the consequent. But maybe your physics teacher isn’t trying to offer an airtight deductive argument; maybe they just think that if a theory makes lots of successful experimental predictions, it’s very likely to be true. They aren’t necessarily committing a fallacy.
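One way to make the teacher’s thought precise (this is my gloss, not anything the teacher needs to be committed to) is probabilistic: if a theory H predicts the evidence E, then observing E raises the probability of H without proving it. In Bayesian terms,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} > P(H) \quad \text{whenever } P(E \mid H) > P(E),$$

that is, whenever the evidence was more to be expected if the theory is true than otherwise. That’s induction rather than deduction, which is why the reasoning isn’t simply the deductive fallacy of affirming the consequent.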
The key take-home point is this: there are a few well-established tools, like logic and awareness of fallacies, that can assist in better reasoning. However, they aren’t a substitute for good reasoning. They’re very easy to abuse, and often the only way to tell whether they’re being abused is to examine and argue about the particular context. Learning to do that is the main point of these lectures.
Final aside: if you’re interested in the psychology of logic, or just in psychology in general, I strongly recommend Steven Pinker’s /How The Mind Works/, which was the source of my information here on psychological experiments. Now, any questions?