We live in a world where we know how to lie. With advances in AI, it is very likely that we will soon live in a world where we know how to detect truth. The potential scope of this technology is vast; the question is how we should use it.
Some people are naturally good liars, and others are naturally good lie detectors. Natural lie detectors can often sense lies intuitively, observing fluctuations in pupil dilation, blushing, and a variety of micro-expressions and body movements that reveal what is going on in someone else's head. For the vast majority of us who are not trained deceivers, when we lie, or lie by omission, our bodies tend to give us away.
For most of us, however, second-guessing often overtakes intuition about whether someone is lying. Even if we are aware of the factors that may indicate a lie, we cannot simultaneously observe and process them in real time, leaving us, ultimately, to guess whether we are hearing the truth.
Now suppose we did not have to be good lie detectors, because the data needed to tell whether someone was lying was readily available. Suppose that, with this data, we could determine with near-certainty the veracity of someone's claims.
The Future of AI Lie Detection
Imagine anyone could collect more than just wearable data showing someone’s (or their own) heartbeat, but continuous data on facial expressions from video footage, too. Imagine you could use that data, with a bit of training, to analyze conversations and interactions from your daily life — replaying ones you found suspicious with a more watchful gaze. Furthermore, those around you could do the same: imagine a friend, or company, could use your past data to reliably differentiate between your truths and untruths, matters of import and things about which you could not care less.
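The kind of replay imagined above could begin with something as simple as scanning wearable heart-rate data for moments that deviate from a personal baseline. A minimal sketch, using hypothetical readings and an arbitrary threshold (nothing here reflects a real product or validated method):

```python
# Illustrative only: flag moments in a conversation where heart rate
# deviates sharply from this person's baseline. The readings and the
# threshold are hypothetical.
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.0):
    """Return (timestamp, bpm) pairs more than `threshold` standard
    deviations from the mean heart rate across the conversation."""
    rates = [bpm for _, bpm in samples]
    mu, sigma = mean(rates), stdev(rates)
    return [(t, bpm) for t, bpm in samples
            if abs(bpm - mu) / sigma > threshold]

# Hypothetical wearable readings: (seconds into conversation, bpm)
conversation = [(0, 72), (10, 74), (20, 71), (30, 73),
                (40, 75), (50, 104), (60, 72), (70, 74)]

print(flag_anomalies(conversation))  # → [(50, 104)]
```

Even this toy example shows the interpretive gap the essay worries about: a spike at the 50-second mark signals arousal, not deception; whether it gets read as a lie depends entirely on the person replaying the data.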
This means a whole new toolkit for investigators, for advertisers, for the cautious, for the paranoid, for vigilantes, for anyone with internet access. Each of us will have to know and understand how to manage and navigate this new data-driven public record of our responses.
The issue for the coming years is not whether lying will be erased (of course it will not) but how these new tools should be wielded in the pursuit of truth. And given the many ways these technologies can be misread and misused, in what contexts should they be made available, or promoted?
The Truth About Knowing the Truth
Movies often quip about the desire to have a window into someone else's brain: to feel assured that what they say describes what they feel, that what they feel describes what they will do, and that what they will do demonstrates what everything means for them. Of course, we all know the world is not so neat, and many of us fall prey to searching for advice online instead. What happens when such advice is further entrenched in a wave of newly available, but poorly understood, data?
What will happen, for example, when this new data is used in the hiring process, with candidates weeded out by software dedicated to assessing whether, and about what, they have lied during an interview? What will happen when the same process is used for school selection, jury selection, and other varieties of interviews, or when the results are passed along to potential employers? As the number of such scenarios grows, the question we have to ask is: when is our heartbeat private information?
Is knowledge of our internal reactions itself private, simply because until now only a small segment of perceptive people could tell what was happening? Communities often organize around the paths of least resistance, creating a new divide between those who understand and can navigate this new digital record, and those who cannot.
Imagine therapists actively recording cognitive dissonance, news shows identifying in real time whether a guest believes what they are saying, companies reframing interviews with live facial analysis, and border agents running rapid security questioning. The expanding scope of sensors is pushing us away from post-truth toward an age of post-lying, or rather, an end to our comfort with the ways in which we currently lie. As with everything, the benefits will not be felt equally.
We might even be able to imagine the evolution of lie detection moving towards brain-computer interfaces — where one’s right to privacy must then be discussed in light of when we can consider our thoughts private.
In courtrooms, if we can reliably tell the difference between reactions during a lie and during the truth, do witnesses have a right to keep that information private? Should all testimony be given in absolute anonymity? Researchers at the University of Maryland developed DARE, the Deception Analysis and Reasoning Engine, which they expect to be only a few years away from near-perfect deception identification.
How, then, should we think about the Fifth Amendment of the US Constitution and the right not to incriminate oneself? With the advent of these technologies, perhaps the very nature of the courtroom should change. Witnesses are not given a polygraph on the stand for good reason: it is unreliable. But there may be little stopping someone with a portable analytics system from reading a witness's vitals or analyzing a video feed from a distance, then publishing the results for the court of public opinion. How should our past behavior be recorded and understood?