Drone Policy (2/3): Understanding The Issues
If you missed “Drone Policy (1/3): Reducing The Human Cost”, click here.
Moral Disengagement and Euphemistic Language
Preeminent psychology researcher Dr. Albert Bandura notes that the aforementioned psychological processes are far from the only forms of moral disengagement apparent in military operations. For Bandura, moral disengagement encompasses the personal, behavioral, and environmental processes by which individuals enable themselves to violate their own moral norms while still feeling comfortable with what they have done.
The effects of depersonalization
For example, most of us would feel guilt or remorse if we stole money out of someone’s hand. But it is much easier to justify pirating music online from “file sharing” sites, because we can more readily convince ourselves that nobody is really being harmed by our actions. The depersonalized nature of online interactions, the abstractness of the victim, and many other factors explain why the latter scenario seems more morally ambiguous than the personal, concrete act of snatching a purse or wallet.
Agent-less vs Agentic phrases
Another mechanism of moral disengagement found in both civilian and military contexts is the use of euphemistic language. In his book, “Moral Disengagement: How People Do Harm and Live with Themselves,” Bandura examines the literature on euphemistic language and how it is often used to “depersonalize doers from harmful activities” (2016). Research has shown that the acceptability of certain actions is influenced by what those actions are called: using sanitized, agent-less, and technical terms instead of clear-cut, agentic, and common phrases enables us to do things we might otherwise be uncomfortable doing. “File-sharing” seems more justifiable than stealing in the same way that “servicing the target,” “visiting a site,” and “coercive diplomacy” seem more justifiable than bombing.
The Euphemistic Language of Drone Policy
In addition to his concerns about the depersonalized and abstract nature of drone operations, Bandura worries that using agent-less jargon like “unmanned aerial vehicles” contributes to the lack of individual accountability reported in drone operations. However, the more troubling use of euphemistic language in drone operations, revealed in The Drone Papers, comes from the way targets on the ground are classified by military officials thousands of miles away:
“The documents show that the military designated people it killed in targeted strikes as EKIA — “enemy killed in action” — even if they were not the intended targets of the strike. Unless evidence posthumously emerged to prove the males killed were not terrorists or “unlawful enemy combatants,” EKIA remained their designation, according to the source. That process, he said, “is insane. But we’ve made ourselves comfortable with that. The intelligence community, JSOC, the CIA, and everybody that helps support and prop up these programs, they’re comfortable with that idea.”
Weighing up moral rightness vs operational success
From a moral and ethical standpoint, classifying potentially innocent victims as “enemies” by default is reprehensible; from an operational standpoint, however, it makes perfect sense. Operators would be much more hesitant to pull the trigger if they were fully aware of how often they had killed innocent civilians during their missions. Bandura and others note the stress drone operators report despite being removed from the front lines: “Having to turn one’s morality off and on, day in and day out, between lethal air strikes and prosocial home life makes it difficult to maintain a sense of moral integrity” (2016). Officials in the intelligence community may justify their use of these designations as a way to protect the already strained psyches of their drone teams.
Feedback Loop of Moral Disengagement in Drone Policy
An unintended consequence of these euphemisms is the implicit message conveyed down the chain of command: collateral damage isn’t a concern. Former drone operator and instructor Michael Haas claims he was punished for failing a student on a training mission in which the student insisted his targets were suspicious despite having no evidence to support that judgment:
“Short on operators, his superiors asked him to explain his decision. “I don’t want a person in that seat with that mentality with their hands on those triggers,” Haas says. “It’s a dangerous scenario for everyone to have someone with that bloodlust.” But the student’s detached outlook wasn’t as important as training new recruits. Haas was ultimately punished for failing the student and barred from teaching for 10 days.”
On some level, Haas’ superiors surely want to limit the number of civilians killed in their attacks, but the euphemistic language by which their policy objectives are gauged allows them to dismiss Haas’ concerns and carry on training a potentially dangerous recruit. When collateral damage is a hidden statistic, there’s no reason for concern when looking at the stat sheet: sanitizing the language of war continually enables fatal mistakes to be overlooked or go unpunished.
It is problematic that individuals within drone bureaucracies morally disengage while on the job and maintain the policy status quo, but the bigger problem is that the policies and systems of drone warfare internally manufacture these kinds of moral disengagement where they might not otherwise arise.
Liked this? Read part 3.
About the Author
Jared Celniker
Jared is a PhD student in social psychology and a National Science Foundation Graduate Research Fellow at the University of California, Irvine. He studies political and moral decision-making and believes that psychological insights can help improve political discourse and policymaking.