The Basic Idea
All flowers are purple. All trees are flowers. Therefore, all trees are purple. Reading this, you may object that the first two statements are false, and that the conclusion is false as well. However, in terms of inferential logic, the conclusion that “all trees are purple” is actually valid: it follows necessarily from the premises.
Inferences are steps in reasoning. They connect premises — propositions upon which an argument is based — with consequences.1 Humans make inferences based on our conceptual knowledge and schemas: cognitive frameworks that organize information and provide shortcuts for interpreting it. Inferential logic commonly proceeds in one of two ways:
- Deductive reasoning: drawing conclusions that follow necessarily from the premises (e.g. someone gives you a bag of coins and tells you it is full of pennies; based on this, you expect every coin you pull from the bag to be a penny).
- Inductive reasoning: generalizing from specific observations to a universal conclusion (e.g. you pull a penny from a bag of coins, followed by a second and third penny; you infer that all coins in the bag are pennies).
Aside from the theory associated with inferential logic, inferences are found in daily life, applied to perception, reading, statistics, and even artificial intelligence.
Conceptual knowledge: Knowledge that enables people to recognize objects and events, and to make inferences about their properties.
Inference: The process by which people create information that isn’t explicitly stated, connecting the available information together.
Fallacy: A form of illogical reasoning that results in an invalid inference. Fallacies are often influenced by human biases.
Pragmatic inference: Inferences that occur when reading or hearing a statement lead one to expect something that isn’t explicitly stated or implied by the statement, based on knowledge gained through experience. Human memory is constructive, based on what actually happened and additional factors like expectations and knowledge.
Schema: One’s knowledge about what is involved in a particular experience, shaped by prior experiences.
The concepts of inductive and deductive reasoning can be traced back to ancient philosophy, with Aristotle in the 300s BCE.2 Ancient Greek philosophers defined a number of syllogisms, which were logical arguments that consisted of three statements and applied deductive reasoning to draw a conclusion from two premises assumed to be true.3 For example:
- All humans are mortal.
- All Greeks are humans.
- All Greeks are mortal.
For syllogisms, “validity” refers to whether the conclusion follows logically from the premises, rather than whether each statement is known to be true.3 In this case, we can trace the inference as Greeks → humans → mortal. On that basis, the conclusion is valid, even though the truth of “all Greeks are humans” or “all humans are mortal” could be open for debate.
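The transitive chain above can be sketched in code. This is a minimal illustration, not a general logic engine: it models each “All X are Y” premise as a link between categories and checks whether a conclusion follows by chaining those links (all names are illustrative).

```python
# Illustrative sketch: modeling "All X are Y" premises as links between
# categories, then checking a conclusion by chaining the links.

def all_are(subset, superset, premises):
    """Record the premise 'All <subset> are <superset>'."""
    premises.setdefault(subset, set()).add(superset)

def follows(subset, superset, premises):
    """Check whether 'All <subset> are <superset>' follows from the
    recorded premises by chaining subset links transitively."""
    seen, frontier = set(), {subset}
    while frontier:
        current = frontier.pop()
        if current == superset:
            return True
        if current in seen:
            continue
        seen.add(current)
        frontier |= premises.get(current, set())
    return False

premises = {}
all_are("Greeks", "humans", premises)   # All Greeks are humans.
all_are("humans", "mortal", premises)   # All humans are mortal.

print(follows("Greeks", "mortal", premises))  # True: the conclusion is valid
```

Note that the check says nothing about whether the premises are true in the world; like validity itself, it only tests whether the conclusion follows from them.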
Since their development by the Ancient Greek philosophers, syllogisms have branched into different types.3 Categorical syllogisms, for example, are syllogisms in which the premises and the conclusion describe the relationship between two categories using statements that begin with “all”, “no”, or “some” (e.g. No geese are felines. Some birds are geese. Therefore, some birds are not felines). Conditional syllogisms, on the other hand, have two premises and a conclusion, where the first premise is an “if … then” statement. Conditional syllogisms are the most common form in everyday life (e.g. If I do well on this test, I will finish the course with a good grade).
Ultimately, inferences can be found in almost anything, ranging from inferring the winner of a sports match or the severity of a physical illness.4 Beyond inferential logic, inferences are applied to a variety of contexts and are drawn using a host of strategies, as we’ll explore below.
In the realm of cognitive psychology, inferences are incredibly relevant.3 They are necessary for perception, the conscious experience that results from interpreting stimulation from our sensory organs. Consider the retina: the proximal stimulus is the two-dimensional image a stimulus projects onto it, while the distal stimulus is the stimulus itself out in the world, most often a three-dimensional object. The brain uses information from both eyes, along with properties of the proximal stimulus, to make inferences about the relative depth of the distal stimulus. Additionally, humans tend to make unconscious inferences, meaning that some of our object perceptions result from unconscious assumptions we make about the environment.
As cognitive psychologists emphasize, humans use data about our environments, gathered through past experience, to form perceptions.3 The same idea appears in statistics, where inferences use mathematical principles to draw conclusions. Bayesian inference holds that our estimate of the probability of an outcome is determined by two factors:
- Prior probability, our initial belief about the probability of an outcome; and,
- Likelihood of outcome, the extent to which the available evidence is consistent with the outcome.
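The two factors combine multiplicatively: the posterior belief in each hypothesis is proportional to its prior probability times the likelihood of the observed evidence. A minimal sketch, using the bag-of-coins example from earlier (the hypothesis names and the 0.5 likelihood for the mixed bag are illustrative assumptions):

```python
# Minimal sketch of a Bayesian update: posterior ∝ prior × likelihood,
# normalized so the posteriors sum to 1 over all hypotheses.

def bayes_update(priors, likelihoods):
    """priors: hypothesis -> prior probability
    likelihoods: hypothesis -> P(evidence | hypothesis)"""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypotheses about the bag: every coin is a penny, or it is a mixed bag.
priors = {"all_pennies": 0.5, "mixed": 0.5}

# Evidence: we drew a penny. That is certain under "all_pennies",
# but only 50% likely (an assumed value) under "mixed".
likelihoods = {"all_pennies": 1.0, "mixed": 0.5}

posterior = bayes_update(priors, likelihoods)
print(posterior["all_pennies"])  # ≈ 0.667: the draw shifts belief toward "all pennies"
```

Each additional penny drawn would shift the posterior further toward the all-pennies hypothesis, mirroring the inductive reasoning described above.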
Inferences can happen automatically and unconsciously, as we’ve covered.3 One area where inferences are constantly used — even among children — is reading. A reader’s role is to create connections between parts of a story so that the narrative is coherent. Anaphoric inferences, for instance, connect objects or people in one sentence to objects or people in another sentence. Two other inferences necessary for successfully understanding narratives include:
- Instrument inferences, which are inferences about tools or methods that occur while reading a text or listening to a speech; and,
- Causal inferences, which conclude that events described in one clause or sentence were caused by events described in a previous sentence.
Inferences have also been built into artificial intelligence (AI) systems, with the role of automatically extending knowledge bases.5 These knowledge bases are sets of propositions that represent what the system knows about the world, allowing AI systems to draw conclusions relevant to the task at hand.
The difference between validity and truth can make it difficult to judge whether reasoning is “logical”. Not only can valid syllogisms result in false conclusions, but syllogisms can also be invalid even when the premises and the conclusion are all true.3 While truth refers to the way the world works as we know it, validity refers to logic. As humans, we are biased to see what we believe to be true as also logically valid, a tendency known as belief bias.
Kurt Gödel published two incompleteness theorems in 1931, widely interpreted as showing that it is impossible to find a complete and consistent set of axioms for mathematics.6 Axioms are statements taken to be true, serving as premises for further reasoning. Gödel’s first theorem states that no consistent system of axioms whose theorems can be listed by a mechanical procedure can prove all truths: there will always be statements that are true but unprovable within the system. Gödel’s second theorem extends the first, showing that such a system cannot demonstrate its own consistency, even if it is indeed consistent. Ultimately, these incompleteness theorems were among the first results to reveal the limitations of formal systems, including the mechanical procedures of logical inference.
Experience and the content effect
The Wason selection task is a well-known inferential logic puzzle that relies on deductive reasoning.3 Participants are shown four playing cards, each with a letter on one side (either A or D) and a number on the other side (either 4 or 7). The task is to determine which cards to turn over to test a rule similar to: “If there is an A on one side of the card, then there is a 4 on the other side.” The Wason selection task follows the falsification principle: the logical principle that, to test a rule, it is necessary to look for situations that would falsify it.
Cheng and Holyoak proposed that people think in terms of schemata, and had found in prior studies that training college students on logic did not improve their performance on the Wason selection task.3 Students still showed similar decisions regardless of logic training:
- 33% would only turn over the “A” card, which only confirms the rule, but does not disconfirm other options;
- 45% would turn over the “A” and “4” cards, trying to confirm both parts of the rule, but again, not negating anything;
- Only 4% would turn over the “A” and “7” cards, which is the correct answer, as this could both confirm and negate the rule.
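The falsification logic behind the correct answer can be made concrete. In this illustrative sketch, a card must be flipped exactly when its hidden side could falsify the rule “If A on one side, then 4 on the other” (the function name is my own):

```python
# Sketch of the falsification principle in the Wason selection task.
# Rule: "If a card shows A on one side, then it shows 4 on the other."

def must_flip(visible):
    """A card must be flipped only if its hidden side could falsify the rule."""
    if visible == "A":
        return True   # hidden side could be a 7, which would falsify the rule
    if visible == "7":
        return True   # hidden side could be an A, which would falsify the rule
    # "D" is irrelevant to the rule; "4" can only confirm, never falsify
    return False

cards = ["A", "D", "4", "7"]
print([card for card in cards if must_flip(card)])  # ['A', '7']
```

The popular but incorrect choices (“A” alone, or “A” and “4”) can only confirm the rule, which is why the falsification principle singles out “A” and “7”.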
While formal logic training did not change much, Griggs and Cox performed a study in 1982 showing that experience with, and the relevance of, the task can improve performance.3 The researchers changed the rule from “If there is an A on one side of the card, then there is a 4 on the other side” to “If a person is drinking beer, then that person is over 19”. The cards were changed accordingly: four cards, each with an image on one side (either a can of soda or a pint of beer) and a number on the other side (either 16 or 25). The researchers found a content effect: most college students correctly said that they would flip over the beer and “16” cards. Since the participants had more experience with the task’s situation, their conceptual knowledge shaped their inferences and deductive reasoning.
An experiment in 1983 showed how syllogisms can be tricky.3 The researcher provided participants with two syllogisms, both of which followed inferential logic:
- No democrats are conservatives. Some Americans are conservatives. Therefore some Americans are not democrats.
- No healthy things are cheap. Some vitamin pills are cheap. Therefore some vitamin pills are not healthy things.
When asked which of the two syllogisms followed logically, 89% of participants accepted the first one, about politics in America, while only 56% accepted the second one, about vitamins and health.3 However, both are formally equivalent and logically valid, emphasizing the importance of separating what we typically know to be true (e.g. vitamins are healthy) from what we recognize as logically valid. Syllogisms can expose the gaps in our logical reasoning.
Related TDL Content
What’s the opposite of thinking logically? How about thinking with our emotions? If you’re interested in exploring decision making characteristics that are the opposite of logical inferences, the affect heuristic is a type of mental shortcut in which we rely on our emotions to make decisions.
We constantly make inferences, and behavioral science research tells us that humans have a tendency to wrongly infer that actions are due to robust character traits (rather than situational factors). This is known as the fundamental attribution error and can play a role in our judgments, an important fact to recognize at the policy making level. Take a read through this interview with Tom Spiegler, co-founder and managing director at The Decision Lab, to learn about the inferences we make regarding politics and democracy.
- Inference. (2017, June 16). Encyclopedia Britannica. https://www.britannica.com/topic/inference-reason
- Gattei, S. (2009). Karl Popper’s Philosophy of Science: Rationality without Foundations.
- Goldstein, E. B. (2019). Cognitive Psychology: Connecting mind, research, and everyday experience. Cengage Learning.
- Rieskamp, J. (2008). The importance of learning when making inferences. Judgment and Decision Making, 3(3), 261-277.
- Wetzstein, G., Ozcan, A., Gigan, S., Fan, S., Englund, D., Soljačić, M., … & Psaltis, D. (2020). Inference in artificial intelligence with deep optics and photonics. Nature, 588(7836), 39-47.
- Smorynski, C. (1977). The incompleteness theorems. In Studies in Logic and the Foundations of Mathematics (Vol. 90, pp. 821-865). Elsevier.