Bluffing and negotiations
For buyer-supplier relationships in business, successful collaboration requires commitment and trust, both of which can be negatively impacted by bluffing.16 Yet bluffing remains a common practice that can strain these relationships. One of the key ways buyers and suppliers interact is through negotiations, so a group of management researchers set out to assess bluffing in buyer-supplier negotiations. Specifically, they drew on moral disengagement theory, which predicts that decision makers adhere to their moral standards as long as their self-sanctions are stronger than external incentives.
Bluffs were defined as deceptions that are acceptable to both parties during negotiation, while lies were defined as deceptions that are unacceptable to both parties.16 The researchers’ results distinguished bluffs and lies in negotiations as two separate constructs. Explicit liars showed increased levels of moral disengagement, while bluffers and honest negotiators did not differ. On the receiving end, targets of bluffs experienced higher degrees of self-directed anger for falling for the bluff, but remained willing to engage in further negotiations. Targets of lies, by contrast, experienced higher degrees of anger directed at the liar and were largely unwilling to continue negotiating.
Considering the prevalence of bluffing in business transactions, this study’s findings emphasize the importance of strategy and critical thinking.16 When attempting to bluff in a negotiation, actors must ensure that their bluff will indeed be perceived as a bluff – rather than a lie – if they are found out. Those in managerial positions should familiarize themselves with the moral disengagement mechanisms that can be used to justify increasingly aggressive negotiation styles or to deny their consequences.
The prisoner’s dilemma and social identity
The prisoner’s dilemma is undoubtedly the best-known example in game theory. The scenario consists of questioning two “prisoners” separately and informing them of the following consequences:17
- If neither prisoner A nor prisoner B confesses, both will receive six months in jail;
- If one prisoner confesses and the other does not, the prisoner who confessed will go free while the other receives ten years in jail; or,
- If both prisoners confess, each will go to jail for eight years.
Since its origin in 1950, the prisoner’s dilemma has been adapted in research to address issues such as cooperation, altruism, and decision making, among others.18 According to game theory, the “prisoners” face a dilemma in which confessing is the rational strategy: whatever the other prisoner does, confessing yields a lighter sentence, and it prevents being betrayed and receiving the maximum sentence. Yet, many people in both experiments and real-life situations end up cooperating by staying silent.
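To make the dominance argument concrete, here is a minimal sketch in Python (an illustration, not code from any cited study) that encodes the sentences listed above and checks each prisoner’s best response; shorter sentences are better.

```python
# Sentences, in years, from the scenario above; lower is better.
# Keys are (my_choice, other_choice).
SENTENCE = {
    ("silent", "silent"): 0.5,    # neither confesses: six months each
    ("confess", "silent"): 0.0,   # I confess, the other stays silent: I go free
    ("silent", "confess"): 10.0,  # I stay silent, the other confesses: ten years
    ("confess", "confess"): 8.0,  # both confess: eight years each
}

def best_response(other_choice: str) -> str:
    """Return the choice that minimizes my sentence, given the other's choice."""
    return min(("silent", "confess"), key=lambda me: SENTENCE[(me, other_choice)])

# Confessing is dominant: it is the best response to either choice
# (0 years < 0.5 years, and 8 years < 10 years).
for other in ("silent", "confess"):
    print(f"If the other prisoner plays {other!r}, my best response is {best_response(other)!r}")
```

Since confessing is each prisoner’s best response regardless of the other’s choice, both confessing is the equilibrium outcome (eight years each), even though mutual silence (six months each) would leave both better off.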
In fact, most of the research on the prisoner’s dilemma has shown that allowing the prisoners to communicate with one another significantly increases cooperation.18 To leverage this, a group of American researchers set out to explore the role that social identity and norms play in cooperation. They sorted 86 undergraduate students into four groups that differed in who – or what – the other “prisoner” was.
In the first group, the potentially cooperative “prisoner” was another person.18 In the other three groups, participants faced computers that varied in how human-like they appeared. The “most human-like” computer displayed a picture of a person on the screen and communicated with participants through a voiceover; the “human-like” computer communicated through a voiceover but displayed no picture; and the “least human-like” computer communicated by displaying text on its screen.
The goal was to assess whether communicating with a computer partner would change participants’ feelings and the norms that influence cooperation choices.18 The researchers found that cooperation with a computer partner was half as frequent as cooperation with a person, and that cooperation rates were notably higher when the computer was programmed to resemble the participant – for example, by using the participant’s native language. Surprisingly, participants were more likely to break their promises to cooperate with a computer, especially when the computer was more human-like.
These findings support the role of social identity in cooperation: participants may have felt a group identity with the human-like computers but disliked them for their imperfections and thus took advantage of them.18 This can reflect the intragroup downgrading of marginal group members (here, the computer perceived as inferior) when there is competition. Overall, the findings show that communication matters for cooperation insofar as cooperation depends on perceived group identity. Humans may not be as rational as game theory expects: if we were truly rational, factors like norms and social identity would not influence our decisions.