
Fallacies and cognitive biases

1. What is a fallacy?

In Introduction to Logic [1], now a classic introductory textbook, Irving Copi presents fallacies as types of arguments that may seem adequate at first sight, but which, in fact, are not. Simply put, fallacies are errors of argumentation. This characterization is vague and general enough to include very diverse types of errors, ranging from a simple ambiguity between two terms to the violation of logical or mathematical rules. There are, of course, ways to refine the notion of fallacy from this starting point, depending on the field of research that interests us.

A second way of characterizing fallacies is to define them as faulty reasoning that leads people to draw wrong conclusions [2]. Such reasoning is called a “fallacy” when committed voluntarily and a “paralogism” when committed involuntarily. The emphasis here is that a person who uses a fallacy may do so with the explicit purpose of misleading their audience, for example by exploiting certain human cognitive tendencies to process information incorrectly (such as the belief bias, that is, the tendency to accept the believable conclusions of arguments that are nevertheless invalid).

The first characterization speaks of fallacies in terms of arguments, more associated with language, while the second speaks of them in terms of reasoning, more related to psychology. This is not insignificant, since we can distinguish different aspects of fallacies, including their logical dimension (having to do with the organization of discourse), their semantic dimension (having to do with the meaning of words), their pragmatic dimension (having to do with the effects that can be produced with language), or even their psychological dimension (having to do with our cognitive mechanisms). In practice, these dimensions are interrelated, but distinguishing them makes it possible to better understand their interaction, thus allowing a better understanding of fallacies.

In what follows, we will present a classification of fallacies (section 3) which will allow us to see how fallacies differ from cognitive biases (section 4) before concluding with some possible ways to avoid them (section 5). A box (Section 2) provides a brief definition of some key words.

2. Keywords associated with fallacies

Logic

Logic is a discipline concerned with argumentation and the conditions that arguments must meet to be deemed correct or valid. Since it is common to consider the terms “reasoning” and “argument” as synonyms, logic can also be understood as a study of reasoning. The study of fallacies, which are errors of reasoning or argument, is an important aspect of logic: they must be studied to better guard against them.

Argument

An argument is a set of statements, some of which (the premises) aim to establish the truth of another (the conclusion). The passage from premises to conclusion is an inference. Statements have the property of being true or false, depending on whether they faithfully describe reality or not. For example, the statements “Water freezes at 0 °C” and “All dogs are mammals” are true, while the statements “7+5=13” and “Water does not freeze at 0 °C” are false.

Validity

Validity is the quality of an argument whose premises adequately establish its conclusion. For example, an argument is said to be deductively valid if the truth of the premises necessitates the truth of the conclusion. In other words, an argument is deductively valid if and only if it is impossible for its premises to be true and its conclusion false. Here is an example of such an argument:

Premise 1: “All dogs are mammals.”
Premise 2: “All mammals are animals.”
Conclusion: “(Therefore) All dogs are animals.”

In this example, the information in the conclusion is in a sense already contained in the premises, but only implicitly; drawing the valid conclusion makes it explicit. The distinction between truth and validity makes it possible to see that arguments with a similar structure can be invalid even if their premises are true, as in the following example:

Premise 1: “All dogs are animals.”
Premise 2: “All cats are animals.”
Conclusion: “(Therefore) All dogs are cats.”

This argument is invalid, since its conclusion cannot be deduced from its premises, even though they are true. Note that the falsity of the conclusion suggests that the argument is problematic, but such a signal is not always available: we may fail to see the problem in an invalid argument whose conclusion is plausible, as in the following argument, whose structure is identical to the preceding one:

Premise 1: “All dogs are animals.”
Premise 2: “All mammals are animals.”
Conclusion: “(Therefore) All dogs are mammals.”

Conversely, arguments can be valid even if their premises and conclusion are false. Here is an example:

Premise 1: “All logicians are called Bobby Watson.”
Premise 2: “All Bobby Watsons are traveling salesmen.”
Conclusion: “(Therefore) All logicians are traveling salesmen.”

This means that the validity of an argument is not the only criterion leading us to accept its conclusion: its premises must also be acceptable or true.
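The contrast between the valid and invalid forms above can be stated compactly in standard first-order notation (a shorthand of ours, not the source's: D for “is a dog”, M for “is a mammal”, A for “is an animal”, C for “is a cat”; \models reads “entails”):

    \forall x\,(D(x) \to M(x)),\quad \forall x\,(M(x) \to A(x)) \;\models\; \forall x\,(D(x) \to A(x))

    \forall x\,(D(x) \to A(x)),\quad \forall x\,(C(x) \to A(x)) \;\not\models\; \forall x\,(D(x) \to C(x))

The first schema is valid whatever D, M and A happen to mean; the second is not, since any situation containing a dog that is not a cat makes its premises true and its conclusion false.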

3. Classification and examples of fallacies

There are many classifications of fallacies, each emphasizing different aspects and aiming to bring out certain similarities between fallacies. One classification that covers a wide range of fallacies is due to Ralph Johnson and Anthony Blair, authors known for their contributions to informal logic and the study of argumentation [3]. It is this classification that will be retained in this article.

This classification relaxes two criteria inherited from deductive logic: the truth of the premises, which concerns the link between statements and reality, and the validity of the argument, which concerns the logical links between statements. It replaces them, respectively, with a criterion of acceptability of the premises and with criteria of relevance and sufficiency, which measure the strength of the inferential link, that is, the extent to which the premises support the conclusion of an argument. The criterion of acceptability is a weakening of the criterion of truth in that an acceptable premise describes the world plausibly or approximately, regardless of whether it is literally true. For example, in most contexts, saying that there are 8 billion human beings would be acceptable, even though it is, strictly speaking, false (there are not exactly that many human beings on our planet).

Similarly, relevance and sufficiency weaken the criterion of validity: even when a set of premises is relevant and sufficient to establish a given conclusion, those premises need not make the conclusion necessary. For example, if, from the premise that 99% of doctors pay their taxes, we conclude that Dr. Lasanté paid their taxes, we draw a reasonable conclusion, the one made highly probable by the premise, even though it has not been established beyond all doubt.
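The gap between sufficiency and deductive validity can be put in one line (a sketch in our notation, treating Dr. Lasanté as a randomly chosen doctor and writing T for “paid their taxes”):

    \Pr(T_{\text{Lasanté}}) = 0.99, \qquad \Pr(\neg T_{\text{Lasanté}}) = 0.01 > 0

A deductively valid argument would force this residual probability to exactly 0; a merely sufficient argument only requires it to be small.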

According to the classification of Johnson and Blair [3], fallacies can thus be understood as argumentative maneuvers that do not respect one or more of these criteria.

Are the premises of the argument acceptable?

Among the fallacies, some are thus arguments that violate the criterion of acceptability of the premises. For example, false dilemmas are arguments in which a premise presents an alternative between two possibilities, one of which must obviously be rejected. The statement “Either you are with us [the Americans] or you are with the terrorists” is a famous example, uttered by former US President George W. Bush in the context of US military interventions in the aftermath of the attacks of September 11, 2001. Since it is obviously inconceivable to side with the terrorists, one would be forced to side with Bush and the American army. However, a premise that does not present the possibilities exhaustively can be deemed unacceptable. One can in fact imagine a third possibility, namely opposing the terrorists while also criticizing the intervention of the American army. The premise as formulated by Bush can thus be rejected as false, in violation of the criterion of acceptability or truth.
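Part of what makes false dilemmas persuasive is that their underlying form, the disjunctive syllogism, is deductively valid (our schematization, with p for “you are with us” and q for “you are with the terrorists”):

    p \lor q,\quad \neg q \;\models\; p

The inference itself is impeccable; the defect lies in the premise p \lor q, which fails the acceptability criterion whenever a third option is available, as in the Bush example.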

Are the premises relevant?

Other fallacies are rather violations of the criterion of relevance: that is, they present premises which may be acceptable, but whose connection with the conclusion to be drawn is not obvious. For example, the appeal to tradition is a fallacy which consists in defending an idea or a practice on the ground that it has been accepted for a long time. If, for example, one says “It has always been so, therefore it is acceptable for women to do more household chores than men,” one invokes a historical fact, but one that does not seem relevant to justifying the practice. The premise is acceptable (if one agrees that household chores have traditionally fallen on the shoulders of women). On the other hand, we could judge that our past practices were based on bad reasons and were problematic, for example if we judge that there has historically been discrimination against women. We could also think that the world has changed and that we should no longer act as before. In short, premises can be acceptable without being relevant.

Are the premises sufficient?

Finally, some fallacies are arguments that do not meet the criterion of sufficiency: the premises of the argument are not strong enough to support its conclusion. Fallacies that violate this criterion include the hasty generalization, the mistake of inferring that since a property applies to one or a few cases, it applies to all cases. For example, it would be tempting to conclude that the entire political class is corrupt from an enumeration of a few dishonest ministers. Yet a single counterexample, that is, a single honest political figure, suffices to invalidate the general conclusion.
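In the first-order shorthand used above (P for “is corrupt”, a_1, \dots, a_n the few dishonest ministers observed, b an honest politician):

    P(a_1), \dots, P(a_n) \;\not\models\; \forall x\, P(x) \qquad\text{whereas}\qquad \neg P(b) \;\models\; \neg\forall x\, P(x)

However long the list of instances, it never entails the universal conclusion, whereas a single counterexample deductively refutes it.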

4. Fallacies and cognitive biases

To shed light on what distinguishes biases from fallacies, it may be useful to compare some of their characteristics. This will make it possible to present two possible links between fallacies and biases.

The psychologist Rüdiger Pohl, in the introduction to the book Cognitive Illusions [4], suggests that cognitive biases are (1) deviations of our perception, judgment or memory from reality. To be considered biases, these deviations should be (2) systematic and occur in a predictable way, not just randomly. Biases should also be (3) involuntary, i.e. beyond our control, and, by the same token, (4) difficult, if not impossible, to avoid.

On the basis of this characterization, some important contrasts with fallacies become apparent: fallacies do not, in principle, concern the way we represent reality or process information (1), but are rather deviations from the rules of dialogue or the logic of argumentation. Nor do these deviations have to be, and in fact they are not, systematic (2). Fallacies can be committed involuntarily or voluntarily (3), for example with the aim of misleading an audience, which also means that they can be avoided (4).

The fact that fallacies and biases can be distinguished does not mean that no relationship exists between the two. We note here two possible interactions. First, there could be a causal link between certain fallacies committed involuntarily and certain (unconsciously held) biases, in the sense that the presence of the bias would cause the fallacy. For example, motivated reasoning is a cognitive tendency to search for reasons justifying the conclusions one wishes to believe: we direct our attention towards what invalidates the opposing position and towards what supports our own. This tendency has an emotional basis: reasoning in this way maintains cognitive consonance (a form of personal coherence) and reduces dissonance (the discomfort of holding contradictory beliefs). The confirmation bias is an example of such a tendency, and some fallacies, such as the caricature or straw man (distorting what a person says, for example by characterizing their position selectively) or the hasty generalization, would result from, or could be explained by, it [5].

We could also distinguish biases from fallacies by focusing on their role or the place they occupy in our cognition. There is no consensus on the function(s) of cognitive biases. They are sometimes considered (a) as simple errors or malfunctions of information-processing mechanisms, (b) as side effects of information-processing mechanisms (e.g. heuristics, which have evolved to simplify otherwise overly complex cognitive operations), or (c) as adaptations that allow rapid and often efficient processing of information. Fallacies, being mediated by language, are not generally seen as having such evolutionary functions, because their link with our cognition does not seem as direct as it is for biases. That said, some authors argue that the function of argumentation is not primarily knowledge or truth, but rather has to do with social interaction and communication [6]. Fallacies could thus be understood in light of this role of argumentation in our cognition. This hypothesis could explain why we tend to be better at evaluating the arguments of others than at producing arguments of our own, a tendency that can be exploited to correct biases and reasoning errors in groups rather than individually. We will return to this point in the last section.

This distinction between fallacies and biases according to their function in our cognition makes it possible to present a second possible interaction between them. Contrary to the causal relationship presented above, according to which biases influence fallacies, we can conceive of an interaction that goes instead in the other direction, that is, from fallacies towards biases. The philosopher Audrey Yap [7], for example, examines how the ad hominem fallacy can undermine a person's credibility, even if the fallacy is detected. In this fallacy, which violates the criterion of relevance, one attacks a person rather than attacking their argument, for example by invoking character traits that have no direct bearing on their point. When Quebec Premier François Legault calls opposition MP Gabriel Nadeau-Dubois “woke,” he does not say how Nadeau-Dubois's argument is flawed; he simply aims to attach a label to his opponent that evidently has a negative connotation for many. This is an example of the ad hominem fallacy.

What Yap seeks to show is that even if such an argumentative maneuver is explicitly deemed fallacious, and the intervention aimed at discrediting a person deemed irrelevant, the judgment of the credibility of the person thus attacked could nevertheless be affected because of implicit (or unconscious) biases, such as the continued influence effect, the repetition effect, stereotypes, or prejudices. Unlike the previous example, here it is the fallacy that would affect the expression of a bias. We thus have reason to believe not only that the fallacy and the bias are different, but also that the bias can persist even when the fallacy is identified, and, more generally, that biases and fallacies are not always independent.

5. How to deal with fallacies?

Now that we have an idea of what fallacies consist in, how to identify them and how to distinguish them from biases, we can ask ourselves how to act to avoid them or to limit their effects. Here we present some lines of thought.

One way to limit bias would simply be to use a metacognitive strategy: that is to say, calling on an awareness of the ways in which we form and revise our beliefs. Being aware of our biases and knowing their causes could help to better intervene to eliminate them, or at the very least mitigate their effects. The same may be said for fallacies: knowing them better would make it easier to identify and avoid them. Many professors no doubt hope that their students will be equipped to combat fallacies and biases by the end of their classes.

This strategy, however, assumes that teaching alone, that is, learning to recognize biases or fallacies in context, is enough to reduce their occurrence. This optimism is appealing, but it is an empirical claim that would need to be supported, and one we otherwise have reason to doubt. The bias blind spot shows, for example, how difficult it is for a person to act on their own biases, even with a good knowledge of the phenomenon of cognitive biases. To make matters worse, some studies have even suggested that being aware of a bias can have the opposite effect and actually amplify it [8, 9]. It is therefore useful to resort to strategies other than metacognition alone, and, in particular, to strategies that ensure that the detection of a bias or a fallacy, and the “debiasing” or correction of the bias, no longer depend (only) on us [10]. Here are two other strategies.

The first is to externalize certain cognitive operations, including our reasoning processes, for example by putting them in writing [11]. This externalization can debias certain cognitive operations by preventing presuppositions or prejudices from interfering with our reasoning without our knowledge, a bit like using pencil and paper can help overcome our limits in mental calculation. An illustration of this would be to use formal logic to formulate an argument.
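To make this last suggestion concrete, here is a minimal sketch of what externalizing an argument into formal logic can look like, using the Lean 4 proof assistant (our choice of tool for the illustration; the predicates are those of the syllogism from Section 2). Once the argument is written down, it is the checker, not our intuition, that verifies the inference:

    -- A minimal sketch: the valid syllogism from Section 2, externalized
    -- so that a proof checker, rather than intuition, verifies it.
    variable (Thing : Type) (Dog Mammal Animal : Thing → Prop)

    -- Premise 1: all dogs are mammals. Premise 2: all mammals are animals.
    -- Conclusion: all dogs are animals. Lean accepts the proof term below.
    example (h1 : ∀ x, Dog x → Mammal x) (h2 : ∀ x, Mammal x → Animal x) :
        ∀ x, Dog x → Animal x :=
      fun x hd => h2 x (h1 x hd)

The invalid form discussed in Section 2 (“all dogs are animals; all cats are animals; therefore all dogs are cats”) admits no such proof term: any attempted derivation is rejected by the type checker, which is precisely the benefit of having externalized the reasoning.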

Another principle to apply to avoid biases or fallacies could be to rely on interaction or socialization [12], which amounts to benefiting from the critical judgment of others to avoid making mistakes oneself. This strategy is similar to the use of peer review in science: before an article is published, it is read by qualified people who assess its quality and validity. Relying on other people's expertise in this way could be a central aspect of rationality [13]. The socialization strategy would be particularly effective when the problems to be solved have a correct or “objective” solution [14]. Exposing ourselves to criticism can be a way of increasing the cost of our mistakes, since if we make unsubstantiated claims which turn out to be false, our reputation will be undermined. Such a collective strategy would make it possible, among other things, to depersonalize debates, for example by minimizing the role of certain emotional responses in the evaluation of arguments.

That said, group tasks often take longer to complete, and certain conditions must be met for them to be beneficial. Among other things, group members must be minimally competent and willing to change their minds in light of relevant evidence. They should also aim at a common goal, such as reaching common ground on an issue. Finally, the group must be sufficiently heterogeneous to avoid “blind spots” and be able to resist conformism.

We can therefore see that, although biases and fallacies are important elements of our cognition, we are not doomed to fall prey to them at every turn. Means are at our disposal to circumvent them and reduce their effects. Reading this article, hopefully, is a step in that direction!

References

[1]  Copi, Irving. (1953/2019). Introduction to logic. Routledge.

[2]  Baillargeon, Normand. (2006). Petit cours d’autodéfense intellectuelle. Lux Éditeur.

[3]  Johnson, Ralph H., & J. Anthony Blair. (1977). Logical self-defense. McGraw-Hill Ryerson.

[4]  Pohl, Rüdiger F. (Ed.). (2017). Cognitive illusions: intriguing phenomena in judgement, thinking and memory. Routledge.

[5]  Correia, Vasco. (2011). Biases and fallacies: The role of motivated irrationality in fallacious reasoning, Cogency: Journal of Reasoning and Argumentation, 3(1), 107-126.

[6]  Mercier, Hugo, & Dan Sperber. (2011). Why do humans reason? Arguments for an argumentative theory, Behavioral and Brain Sciences, 34, 57-111.

[7]  Yap, Audrey. (2013). Ad hominem fallacies, bias, and testimony, Argumentation, 27(2), 97-109.

[8]  Sanna, Lawrence J., Norbert Schwarz, & Shevaun L. Stocker. (2002). When debiasing backfires: Accessible content and accessibility experiences in debiasing hindsight, Journal of Experimental Psychology: Learning, Memory, and Cognition, 28(3), 497-502.

[9]  Duguid, Michelle M., & Melissa C. Thomas-Hunt. (2015). Condoning stereotyping? How awareness of stereotyping prevalence impacts expression of stereotypes, Journal of Applied Psychology, 100(2), 343-359.

[10]  Beaulac, Guillaume, & Tim Kenyon. (2014). Critical thinking education and debiasing. Informal Logic, 34(4), 341-363.

[11]  Menary, Richard. (2007). Writing as thinking, Language Sciences, 29(5), 621-632.

[12]  Mercier, Hugo, & Dan Sperber. (2017/2021). L'énigme de la raison (A. Gerschenfeld, Trans.). Odile Jacob.

[13]  Levy, Neil. (2022). Bad beliefs: Why they happen to good people. Oxford University Press.

[14]  Laughlin, Patrick R. (2011). Group problem solving. Princeton University Press.

The author

Julien Ouellette-Michaud, MA, is a professor of philosophy at Collège de Maisonneuve in Montreal.

Cite this entry

Ouellette-Michaud, J. (2023). Fallacies and cognitive biases. In G. Béghin, E. Gagnon-St-Pierre, C. Gratton, & E. Muszynski (Eds.), Shortcuts: A Handy Guide to Cognitive Biases Vol. 5. Online: www.shortcogs.com.
