The Basic Idea
Growing up, we’re all taught that lying is bad. Yet, most of us encounter lying on a daily basis. Whether it’s a small and harmless lie like saying we didn’t eat the last cookie (when we obviously did), or a more significant lie like breaching academic integrity policies, we’ve probably all been the liar and the person who was lied to at one point or another.
Now, consider this situation: while playing a game of poker, you make a large bet despite knowing that you have a poor hand of cards. You didn’t outright claim to have good cards, but your actions suggest it. Is this considered lying?
If you ask economists, game theorists, and poker players, they prefer to call this bluffing. Bluffing is commonly described as a strategic move in which someone deceives another person about their intentions or knowledge.1 Essentially, bluffing can be thought of as “strategic lying.” For example, you may act like you have a good hand of cards when you really have two 2’s and two 7’s. Although bluffing is a specific play in poker, the term also describes a more general pattern in human behavior.
The history of bluffing could not be told without exploring game theory and its beginnings. Mathematician John von Neumann was one of the first to examine the game of poker through a mathematical lens.2 Von Neumann was interested in poker because he thought that figuring out the game would be a path toward developing a new kind of mathematics. He wanted to form a general theory that could be applied to business strategy, diplomacy, and evolution, among other things.
Von Neumann’s work became what is now known as game theory, the study of mathematical models of strategic and rational decision making.3 He researched different mathematical economic theorems and integrated them in his 1928 paper On the Theory of Parlor Games.4 Von Neumann then worked with economist Oskar Morgenstern to publish Theory of Games and Economic Behavior in 1944, the groundbreaking text that created the field of game theory.2
One of the most important of Von Neumann’s findings is his famous minimax theorem, which shows that each player has a strategy minimizing their maximum possible loss.3 Von Neumann proved that this holds in every two-player game with the following criteria:
- The game is finite, such that the number of options at each move is finite and the game always ends in a finite number of moves.
- The game is zero-sum, such that one player’s gain is equal to the other’s loss.
- The game is of complete information, such that each player knows all the options available to them and their opponent, including the value of each possible outcome.
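The minimax logic behind these criteria can be made concrete with a toy example. The sketch below (an illustration with made-up payoff numbers, not Von Neumann’s proof) computes each player’s security level in a small zero-sum game when both are restricted to pure strategies:

```python
# Toy two-player, zero-sum game. Entries are the row player's payoffs;
# the column player's payoff is the negative of each entry.
payoffs = [
    [ 3, -2,  1],
    [ 1,  0, -1],
    [-4,  2,  0],
]

# Row player's security level: pick the row whose worst outcome is best (maximin).
maximin = max(min(row) for row in payoffs)

# Column player's security level: pick the column that caps the row player's
# payoff at the lowest value (minimax).
minimax = min(max(row[j] for row in payoffs) for j in range(len(payoffs[0])))

print(maximin, minimax)  # -1 1
```

Here the two security levels differ (-1 versus 1); Von Neumann’s theorem says that once players may randomize over their options (mixed strategies), the two values coincide at a single value of the game.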
When game theory was first introduced to the world, it amazed the general public.2 For academics, it promised an original and rigorous foundation for modern social science, most notably for economics. The 1950s saw the conceptualization of the prisoner’s dilemma, an example that showed why two completely rational individuals might not cooperate with each other, even though it is in their best interest to do so.5 Around the same time, mathematician John Forbes Nash Jr. developed the concept of a Nash Equilibrium, which extended game theory beyond Von Neumann and Morgenstern’s two-person, zero-sum games.6 As more concepts related to game theory were developed in the 1950s, game theory started being applied to philosophy and politics.
To better understand game theory’s relevance, let’s analyze the following incident. In 1984, the city of Chicago forbade a bar owner from setting up computerized poker and blackjack machines in the bar, claiming that the games relied on luck, rather than skill.7 The bar owner sued the city, and managerial economist Ehud Kalai testified as an expert witness on the owner’s behalf. Kalai showed that the machines were games that required strategy and bluffing, convincing the judge that the games indeed required a certain level of skill. After Kalai’s demonstration to the judge, it was deemed legal for the bar owner to set up the machines.
John von Neumann
Hungarian-American mathematician John von Neumann was one of the world’s foremost mathematicians by his mid-twenties.8 His work in applied mathematics has influenced quantum theory, economics, and defense planning. Although best known for pioneering game theory, Von Neumann also lectured on quantum theory at Princeton University and was one of the inventors of the digital computer, along with Alan Turing.
Bluffing is commonly discussed in the context of poker due to the card game’s analytical nature: it involves calculating or estimating probabilities, using expected gain as a decision criterion, and using mixed strategies in bluffing situations.1 After poker, the most common setting in which bluffing occurs is business interactions.9
However, bluffing extends beyond card games and business. Game theory courses are popular offerings at universities, whether offered by the computing,10 economics,11 math,12 or psychology departments.13 Bluffing has even been applied to sports: Kalai, the expert witness in the 1984 case mentioned above, has also used behavioral economics to help the Chicago Bears, a professional football team.7 Similar to poker, Kalai pointed out that teams do not want to be known for playing certain moves in certain situations. He suggested that if a situation would typically call for running the ball, the team should occasionally pass it to keep the opposing team on their toes.
Kalai’s advice to the Chicago Bears is an example of what game theorists call mixed strategy, a procedure for two-person, zero-sum games.7 Mixed strategy is based on the assumption that opponents can think just as strategically as one’s own side: randomizing football moves and increasing unpredictability will decrease the chances of the opposing team anticipating and blocking a strategy. Bluffing strategies, then, are sustained and systematic, ensuring just enough randomization to keep bluffs effective.
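Kalai’s run-or-pass advice can be sketched numerically. The example below uses hypothetical expected-yards payoffs (the numbers are invented for illustration, not taken from Kalai’s work) and solves the classic indifference condition: the offense mixes run and pass with the probability that leaves the defense no better off guessing one way or the other.

```python
# Hypothetical expected yards gained by the offense in a 2x2 zero-sum game.
# Rows: offense runs / passes; columns: defense expects run / expects pass.
run_vs_run, run_vs_pass = 2, 6
pass_vs_run, pass_vs_pass = 8, 3

# Pick the probability p of running that makes the defense indifferent
# between its two responses (the indifference condition):
#   p*run_vs_run + (1-p)*pass_vs_run == p*run_vs_pass + (1-p)*pass_vs_pass
p = (pass_vs_run - pass_vs_pass) / (
    (run_vs_pass - run_vs_run) + (pass_vs_run - pass_vs_pass)
)

# Expected yards under the optimal mix, regardless of what the defense does.
value = p * run_vs_run + (1 - p) * pass_vs_run

print(round(p, 3), round(value, 3))  # 0.556 4.667
```

With these numbers the offense should run about 56% of the time; any predictable deviation lets the defense key on one play and drive the expected gain below 4.67 yards.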
While game theory is impressive, it’s important to remember its limitations.7 For starters, people have been shown to be more risk-averse in high-pressure situations, decreasing their willingness to bluff for fear that the bluff will backfire. In cases of warfare and politics, randomization is especially tough. For example, during the 1967 Six-Day War, the Israeli military was faced with a challenge: they knew that some Egyptian convoys were using Israeli symbols on the roofs of trucks. This meant that Israeli pilots could not tell the true identities of the convoys and risked harming their own military if they chose to bomb at random.
The ethics of bluffing - especially as it pertains to conducting business - has also been debated, such as in this article by business magazine Forbes. While bluffing is an acceptable practice in the poker world, some hold that interactions in the real world should be held to higher ethical standards, including business transactions.14 Others believe that business is a game, just like poker, in which “normal” ethical standards do not apply.15 However, there are still concerns surrounding the extent to which bluffing is used by those in the corporate world.
Bluffing and negotiations
For buyer-supplier relationships in business, successful collaboration requires commitment and trust, both of which can be negatively impacted by bluffing.16 Yet, bluffing is still a common practice which can strain buyer-supplier relationships. One of the key ways that buyers and suppliers interact is through negotiations, so a group of management researchers wanted to assess bluffing in buyer-supplier negotiations. Specifically, they considered moral disengagement theory, which predicts that decision makers adhere to their moral standards as long as their self-sanctions are stronger than external incentives.
Bluffs were defined as deceptions that are acceptable to both parties during negotiation, while lies were defined as deceptions that are unacceptable to both parties.16 The researchers’ results distinguished bluffs and lies in negotiations as two separate constructs. Explicit liars had increased levels of moral disengagement, while there was no difference in levels of moral disengagement between bluffers and honest negotiators. On the receiving end, the targets of bluffs experienced higher degrees of self-directed anger for falling for the bluff, but were willing to engage in further negotiations. On the other hand, the targets of lies experienced higher degrees of anger directed at the liar and were not very willing to continue negotiations.
Considering the prevalence of bluffing in business transactions, this study’s findings emphasize the importance of strategy and critical thinking.16 When attempting to bluff in a negotiation, actors must ensure that their bluff will indeed be perceived as a bluff - rather than a lie - if they are found out. Those in managerial positions should familiarize themselves with the moral disengagement methods that can be used to justify increasingly aggressive negotiation styles or deny their consequences.
The prisoner’s dilemma and social identity
The prisoner’s dilemma is undoubtedly the best known example of game theory. The example consists of questioning two “prisoners” separately, and informing them of the following consequences:17
- If neither prisoner A nor prisoner B confesses, both will receive six months in jail;
- If one prisoner confesses and the other does not confess, the prisoner who confessed will go free while the other receives ten years in jail; or,
- If both prisoners confess, each will go to jail for eight years.
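The dominance argument can be checked mechanically. This minimal Python sketch uses the standard payoffs (six months each for mutual silence, eight years each for mutual confession, and freedom for a lone confessor whose partner gets ten years) and shows that confessing is the best reply to either choice the other prisoner makes:

```python
# Jail time (in years) for prisoner A, indexed by (A's choice, B's choice).
SILENT, CONFESS = "silent", "confess"
years_for_a = {
    (SILENT, SILENT): 0.5,   # six months each
    (SILENT, CONFESS): 10,   # A stays silent, B confesses
    (CONFESS, SILENT): 0,    # A confesses, B stays silent: A goes free
    (CONFESS, CONFESS): 8,   # both confess
}

def best_response(b_choice):
    """A's best reply (least jail time) to a fixed choice by B."""
    return min((SILENT, CONFESS), key=lambda a: years_for_a[(a, b_choice)])

# Confessing beats silence against either choice by B, so it is dominant -
# even though mutual silence (0.5 years each) beats mutual confession (8 each).
print(best_response(SILENT), best_response(CONFESS))  # confess confess
```

This is exactly the dilemma: the individually rational move leads both prisoners to an outcome worse than the one cooperation would have produced.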
Since its origin in 1950, the prisoner’s dilemma has been adapted in research to address issues such as cooperation, altruism, and decision making, among others.18 According to game theory, the “prisoners” face a dilemma in which confessing is the rational strategy to prevent being betrayed and ultimately receiving the maximum sentence. Yet, many people in both experiments and real-life situations end up cooperating.
In fact, most of the research on the prisoner’s dilemma has shown that allowing the prisoners to communicate with one another will significantly increase cooperation.18 To leverage this, a group of American researchers wanted to explore the role that social identity and norms play in cooperation. They sorted 86 undergraduate students into four groups, differing on the other “prisoner” they faced.
In the first group, the potentially cooperative “prisoner” was another person.18 For the other three groups, the participants were faced with three types of computers that varied in the degree of their human-like features. The “most human-like” computer displayed a picture of a person on the screen and communicated with participants through a voiceover. The “human-like” computer communicated with participants through a voiceover but had no picture on the screen, while the “least human-like” computer communicated with participants by displaying text on its screen.
The goal was to assess whether communicating with a computer partner would change participants’ feelings and the norms that influence cooperation choices.18 The researchers found that cooperation with a computer partner was half as frequent as that with a person, and that cooperation rates were notably higher when the computer was programmed to resemble the participant. For example, some computers used the participant’s native language. Surprisingly, participants were more likely to break their promises to cooperate with a computer, especially when the computer was more human-like.
These findings support the role of social identity in cooperation: participants might have felt a group identity with human-like computers, but they disliked it for its imperfections and thus took advantage of it.18 This can be reflective of intragroup downgrading of marginal group members (in this case, the perceived inferior computer) when there is competition. Overall, the findings show that communication is important for cooperation in the sense that cooperation depends on perceived group identity. Humans might not be as rational as game theory would expect them to be: if we were truly rational, factors like norms and social identity would not influence our decisions.
Related TDL Content
We have explored how some theorists have distinguished between a bluff and a lie. Game theorists agree that bluffing is a form of deception. Lies, too, are a form of deception. What if AI could assist with lie detection? Would this extend to bluffs? Take a look at this article and decide for yourself.
We touched on how the prisoner’s dilemma has been adapted to consider a variety of situations, and amidst the COVID-19 pandemic, it has also been used to explain the rational decision of wearing a face mask! If you’re curious about this application, this article offers some tangible insights.
- Friedman, L. (1971). Optimal bluffing strategies in poker. Management Science, 17(12), B764-B771.
- Harford, T. (2006, December 14). A beautiful theory. https://www.forbes.com/2006/12/10/business-game-theory-tech-cx_th_games06_1212harford.html?sh=78014e6f5e94
- Méro, L. (1998). John von Neumann’s Game Theory. In Moral Calculations.
- Von Neumann, J. (1928). Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100, 295-320.
- Kuhn, S. (2019, April 2). Prisoner’s Dilemma. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/cgi-bin/encyclopedia/archinfo.cgi?entry=prisoner-dilemma
- Osborne, M. J., & Rubinstein, A. (1994). A Course in Game Theory. MIT Press.
- Calvert, D. (2015, March 2). To bluff or not to bluff. Kellogg Insight. https://insight.kellogg.northwestern.edu/article/to-bluff-or-not-to-bluff
- Poundstone, W. (2021, February 4). John von Neumann. Encyclopedia Britannica. https://www.britannica.com/biography/John-von-Neumann/Princeton-1930-42
- Guidice, R. M., Alder, G. S., & Phelan, S. E. (2009). Competitive bluffing: An examination of a common practice and its relationship with performance. Journal of Business Ethics, 87(4), 535-553.
- CSC304H1: Algorithmic game theory and mechanism design. (2021). University of Toronto. https://artsci.calendar.utoronto.ca/course/csc304h1
- ECON 212: Introduction to game theory. (n.d.). University of Waterloo. https://my.cel.uwaterloo.ca/p/form/courses/search/course/sub/ECON/cat/212/topic/0
- Mathematics 3157A/B: Introduction to game theory. (2021). Western University. https://www.westerncalendar.uwo.ca/Courses.cfm?CourseAcadCalendarID=MAIN_024184_1&SelectedCalendar=Live&ArchiveID=
- Experimental approaches to social and strategic decision-making. (2021). Queen’s University. https://www.queensu.ca/psychology/sites/webpublish.queensu.ca.psycwww/files/files/Undergraduate/Course%20Syllabi/2020-2021/Winter/PSYC_398_Winter_2021.pdf
- Friedman, M. (1970, September 13). A Friedman doctrine -- The social responsibility of business is to increase its profits. New York Times. https://www.nytimes.com/1970/09/13/archives/a-friedman-doctrine-the-social-responsibility-of-business-is-to.html
- Carr, A. Z. (1968, January). Is business bluffing ethical? Harvard Business Review. https://hbr.org/1968/01/is-business-bluffing-ethical
- Kaufmann, L., Rottenburger, J., Carter, C. R., & Schlereth, C. (2017). Bluffs, lies, and consequences: A reconceptualization of bluffing in buyer-supplier negotiations. Journal of Supply Chain Management, 54(2), 49-70.
- Lave, L. B. (1962). An empirical approach to the prisoners’ dilemma game. The Quarterly Journal of Economics, 76(3), 424-436.
- Kiesler, S., Sproull, L., & Waters, K. (1996). A prisoner’s dilemma experiment on cooperation with people and human-like computers. Journal of Personality and Social Psychology, 70(1), 47-65.