Virtual Agents

What is a Virtual Agent?

A virtual agent is AI-powered software designed to simulate human conversation and perform tasks typically handled by customer service representatives. These agents interact with users via text or voice, leveraging technologies like natural language processing (NLP), machine learning, and robotic process automation (RPA) to understand queries and provide relevant responses.

The Basic Idea

We are increasingly seeing a new kind of employee supporting companies’ customer service efforts: virtual agents. You have likely run into them before. They are often behind the “Let’s talk” or “chat with us” functions in the bottom right corner of a website, or the automated voice on a phone call asking you to summarize your problem in a few keywords. You probably find them annoying, yearning for a human customer service representative who can recognize the nuance of your problem. Despite our frustration, these virtual agents are quickly becoming our first point of contact with any organization. So, what are they, exactly?

Virtual agents are, in essence, software bots that mimic human actions through automation, without the need for human involvement. They interact with users through dialogue, whether text or voice, directed by pre-set rules and the data fed into the software. These capabilities come courtesy of natural language processing (NLP), machine learning, and robotic process automation (RPA). Their ability to sustain an interaction with humans makes them useful for routine customer service.

In addition to virtual agents, you may have also heard the term “chatbot.” A chatbot is a subset of virtual agents: a computer program that mimics text-based conversation in real time. An example is Bank of America’s Erica, an AI-powered chatbot designed to assist customers with a range of banking-related services like transaction history inquiries. A chatbot’s interactions are guided by a set of pre-programmed inputs, each leading to a corresponding answer in the flow of a decision tree. Virtual agents are not just extremely sophisticated chatbots: they can understand the intention behind freeform text or speech and automate a response while continuously improving their performance. In a nutshell, chatbots respond, while virtual agents comprehend, learn, and perform.
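
To make the decision-tree mechanism concrete, here is a minimal sketch of a rule-based chatbot in Python. Everything in it, the states, keywords, and replies, is invented for illustration; real chatbot platforms offer far richer flows, but the underlying mechanism of exact inputs mapped to canned outputs is the same.

```python
# A minimal, hypothetical sketch of a rule-based chatbot.
# Each node in the decision tree pairs a canned prompt with the
# exact keywords that lead to the next node.
DECISION_TREE = {
    "start": {
        "prompt": "Hi! Type 'balance', 'hours', or 'agent'.",
        "options": {"balance": "check_balance", "hours": "branch_hours", "agent": "human_handoff"},
    },
    "check_balance": {"prompt": "Your balance is shown in the app under 'Accounts'.", "options": {}},
    "branch_hours": {"prompt": "Our branches are open 9am-5pm, Monday to Friday.", "options": {}},
    "human_handoff": {"prompt": "Connecting you to a human representative...", "options": {}},
}

def respond(state: str, user_input: str) -> tuple[str, str]:
    """Follow one edge of the decision tree via an exact keyword match."""
    node = DECISION_TREE[state]
    next_state = node["options"].get(user_input.strip().lower())
    if next_state is None:
        # Anything outside the pre-programmed inputs hits a canned fallback:
        # the rigidity that separates chatbots from true virtual agents.
        return state, "Sorry, I didn't understand. " + node["prompt"]
    return next_state, DECISION_TREE[next_state]["prompt"]

state = "start"
print(DECISION_TREE[state]["prompt"])
state, reply = respond(state, "balance")
print(reply)  # -> "Your balance is shown in the app under 'Accounts'."
```

Type anything the tree doesn’t expect, and the bot can only repeat its menu; it cannot infer what you meant.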

Sometimes, the term “virtual agent” is also mixed up with “virtual assistant.” Virtual assistants are digital products like Apple’s Siri or Amazon’s Alexa; despite the name, they aren’t humans providing assistance remotely.

From your experience with these digital products, you are probably thinking that virtual agents and virtual assistants are the same thing. Yes, they are both self-automated systems that imitate human activity. But on a technicality, a virtual agent works for an enterprise with commercialized systems.1 So, while Siri or Alexa functions as a personal assistant on your device, a true virtual agent functions as a business assistant, automating services for customers and employees. For the purposes of this article, though, we will talk about all types of virtual agents, including virtual assistants.


Virtual agent technology is powered by a mix of natural language processing, robotic process automation (RPA), and intelligent search. More recently, AI and machine learning have been incorporated as well. Integrating machine learning opened an avenue for virtual agents to continuously grow and improve their understanding of human language. We have since moved past the era of scripted responses into one where unanticipated human queries can be analyzed and genuine responses provided.

Advances in AI technology make virtual agents capable of responding to a far wider range of requests than their traditional counterparts. Thanks especially to natural language processing, AI virtual agents can analyze nearly any form of text or speech with accuracy. Interactions with traditional chatbots, on the other hand, are limited to a predetermined set of rules and predefined terms. This is what makes conversations with AI agents seem natural, almost as though you were speaking to a human.
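
As a toy illustration of that difference, the sketch below trains a tiny intent classifier: rather than matching exact keywords, it learns from example utterances and generalizes to phrasings it has never seen. The utterances, intent labels, and query are all invented, and a production agent would rely on far larger datasets and stronger models (such as transformer-based language models), but the principle is the same.

```python
# A hypothetical sketch of intent recognition, the capability that separates
# AI virtual agents from rule-based chatbots. Toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

training_utterances = [
    "what's my account balance", "how much money do I have",
    "I lost my card", "my credit card was stolen",
    "when are you open", "what are your branch hours",
]
intents = ["balance", "balance", "lost_card", "lost_card", "hours", "hours"]

# Turn text into numeric features, then fit a simple classifier over intents.
vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(training_utterances), intents)

# A freeform query that matches no scripted rule word-for-word:
query = "someone took my card yesterday"
predicted = classifier.predict(vectorizer.transform([query]))[0]
print(predicted)  # expected: "lost_card" (inferred intent, not a keyword match)
```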

Given just how powerful virtual agents are nowadays, it makes sense that businesses opt to use them to support and enhance their operations. Virtual agents come in many types and can be customized to a business’s specific needs and the resources available to maintain them. For example, a company can combine virtual agents with its customer service department as an integrated solution to long waiting times for a human agent. Or, a company can use a virtual agent in the onboarding of new hires as an end-to-end solution to a shortage of available trainers.

“The real problem is not whether machines think but whether men do.”


– B.F. Skinner


Key Terms

Natural Language Processing (NLP): A branch of artificial intelligence that enables computers to understand, interpret, and respond to human language. Its methods include text analysis, sentiment analysis, language translation, and speech recognition. The goal is to bridge the gap between human communication and computer understanding. 

Robotic Process Automation (RPA): Automation technology that performs repetitive business tasks deemed tedious for human workers, such as data extraction, form filling, and file transfers. RPA should not be confused with artificial intelligence: it is directed by a predefined workflow rather than by machine learning.
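
The “predefined workflow” distinction is easiest to see in code. Below is a minimal, hypothetical sketch of an RPA-style pipeline; the file and field names are invented, and real RPA platforms (which often drive GUIs and enterprise systems) are far more elaborate, but note that every step is fixed in advance and nothing is learned.

```python
# A hypothetical sketch of an RPA-style workflow: extract -> fill form -> transfer.
# The sequence of steps is hard-coded; no machine learning is involved.
import csv

def extract_rows(path: str) -> list[dict]:
    """Step 1: data extraction from a source file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def fill_form(row: dict) -> dict:
    """Step 2: form filling, mapping source fields onto a target schema."""
    return {"customer_name": row["name"], "invoice_total": row["amount"]}

def transfer(record: dict, out_path: str) -> None:
    """Step 3: file transfer, appending the record to a destination file.
    (Header handling is omitted for brevity.)"""
    with open(out_path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=record.keys()).writerow(record)

# The "workflow" is nothing more than this fixed pipeline, run row by row.
for row in extract_rows("invoices.csv"):
    transfer(fill_form(row), "processed.csv")
```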

Intelligent Search: Search driven by artificial intelligence that helps users extract the information they need quickly and easily, regardless of its format. It goes beyond traditional keyword-based search by understanding user intent, context, and semantics to deliver personalized, meaningful results.
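
One common way to implement intelligent search, sketched below, is to embed documents and queries as vectors and rank by semantic similarity instead of keyword overlap. The sketch assumes the open-source sentence-transformers library and an invented document set; it is one possible approach under those assumptions, not a definitive implementation.

```python
# A hypothetical sketch of semantic search with sentence embeddings.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "How to reset your online banking password",
    "Branch opening hours and holiday schedule",
    "Reporting a lost or stolen credit card",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

# The query shares almost no keywords with the best document, yet embedding
# similarity still surfaces it because vectors capture meaning, not spelling.
query = "I can't log in to my account"
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, doc_embeddings)[0]
print(documents[int(scores.argmax())])  # expected: the password-reset document
```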

Virtual Agents: Software programs created to simulate human conversation and perform specific tasks for businesses. They often make use of advanced AI techniques like machine learning and NLP. Examples include customer service bots that handle queries on e-commerce websites.

Chatbots: A subset of virtual agents that simulate human conversation primarily through text. These can range from simple rule-based systems to more advanced AI-driven models. An example is a chatbot that answers frequently asked questions on a website.

Virtual Assistants: Often integrated into smart devices, they are AI-driven tools designed to assist individuals with personal tasks. Examples include Apple's Siri, Amazon's Alexa, and Google Assistant.

History

While many had experimented with speech and text recognition systems since the early 20th century, it wasn’t until Professor Joseph Weizenbaum developed ELIZA in 1966 that we saw the first breakthrough. ELIZA was a natural language processing chatbot that simulated a conversation between a psychotherapist and a client. Although its skills were rather basic, ELIZA highlighted the possibility of computers imitating human-like communication.2

The 1970s marked a significant period when academics and researchers poured funding into natural language processing technology, and in under a decade we saw great improvements. Psychiatrist Kenneth Colby introduced the chatbot PARRY in 1972. It could understand and respond to human users more convincingly, a far cry from the days of ELIZA.

By the 1990s, personal digital assistants (PDAs) were being introduced to the market. These were handheld devices designed to help users manage their personal information and tasks. You may have heard of the Apple Newton or the PalmPilot, both of which are examples of the 90s-era PDAs.

Fast-forward to the 2010s, and PDAs had evolved into digital virtual assistants. In 2011, Apple announced Siri, the first virtual assistant to be installed on a smartphone, as a feature of the upcoming iPhone 4s (fun fact: the “s” stood for Siri). At first, Siri was designed to enhance users’ experience with the iPhone by aiding tasks like sending messages, making calls, and setting alarms. It has since expanded its repertoire to giving food recommendations, browsing the internet, and providing road directions.

In 2020, OpenAI launched GPT-3, the large language model that would later power ChatGPT, signaling just how advanced chatbots could become. ChatGPT itself gained popularity not only for its accessibility but also for the human-like conversations people could have with it. The rise of AI language models in this decade highlights the strides in accuracy and personalization we have made with virtual agents, which can now support a wide range of tasks, from managing home temperature to providing real-time translation.

People

Joseph Weizenbaum: A German American computer scientist credited as the creator of the ELIZA chatbot at MIT in 1966. ELIZA’s ability to simulate the conversation a human might have with an empathetic psychotherapist laid the groundwork for future virtual agent systems. Despite being a pioneer in the field of artificial intelligence, Weizenbaum was a known skeptic about the potential for AI to authentically replicate human intelligence and emotions.

Kenneth Colby: An American psychiatrist who developed the PARRY chatbot in 1972, a program designed to simulate a person with paranoid schizophrenia. Colby dedicated much of his career to the theory and application of computer science and artificial intelligence in psychiatry.

Adam Cheyer: An American computer scientist and entrepreneur, best known as a co-founder of Siri Inc. The company developed the Siri virtual assistant, which was later acquired by Apple and integrated into iOS devices. Cheyer went on to serve as a director of engineering in Apple’s iPhone group.

Consequences

The rising popularity of virtual agents has numerous consequences in society and technology. First, let’s look at how they impact businesses.

Nowadays, virtual agents can be “smart” enough to function like any other employee in a company. Their services offer businesses a cost-effective solution (we don’t need to pay robots…yet) to customer service problems, freeing up time and resources that can be allocated to more important areas of operations. Virtual agents can also boost productivity and improve efficiency: they streamline workflows by handling the tedious tasks and leaving the trickier work to human employees. The same argument applies to our daily lives. People can deploy virtual agents to do things for them, although the scope of what those agents can do is limited by the technology available at the time.

Now, let’s turn to individuals. A uniquely human consequence is accessibility: the option to communicate via speech or text supports users with varying degrees of need. Technology, it seems, can also function as a medium for making the world a little more inclusive. Finally, virtual agents can facilitate a smoother user experience: because they can personalize their behavior to users’ needs and preferences, they can provide a more relevant experience.

The Human Element

When Professor Weizenbaum asked his secretary to pilot ELIZA, she requested that he leave the room so that she could have a private conversation. Weizenbaum was surprised that such minimal exposure to a basic computer program could foster a deep connection with a human user.3 This gave rise to the aptly named ELIZA effect, which describes the tendency to assume that computer systems, particularly NLP systems, have more understanding and intelligence than they actually do.

With that, let’s delve into one of the psychological effects of interacting with AI agents: the uncanny valley. The uncanny valley refers to the discomfort people feel when androids, humanoid robots, or simulations strongly mimic humans but aren’t fully realistic.4 A 2019 study had participants interact with virtual agents of varying degrees of “humanness”: some spoke to a simple chatbot through text messages, while others communicated with a human-like avatar that read the conversation out loud.

Researchers found not only that participants preferred the chatbot, but that those who interacted with the avatar had higher heart rates and frowned more often during their conversation, showing evidence of arousal and negative feelings.5 Similar findings emerged in a 2022 study: participants reported a sense of uneasiness and reduced trust, and were less likely to buy a laptop when an anthropomorphic chatbot was involved.6

A 2021 study examined participants’ perceived trust, affinity, and preferences between two types of virtual assistants: some spoke to a human travel agent via Skype, while others talked to a highly realistic avatar. As in the previously mentioned studies, using a human-like avatar as a virtual agent lowered participants’ preferences and perceptions of trust.7 The researchers also found that participants felt reduced affinity toward the avatar.8 This finding is interesting considering that humanoid robots are thought to be key to advancing human-computer interaction (HCI) precisely because familiarity is supposed to strengthen the affinity between humans and machines.9

What this suggests is that while we have the technology to develop ever fancier and more realistic virtual agents, there is little point in realizing that possibility if users don’t prefer them. The psychological effects AI agents have on human users limit how human-like virtual agents should be; there is a balance between potential and practicality to be struck.

Controversies

Too much yelling?

Research suggests that many people test the boundaries of virtual agents by asking inappropriate questions or throwing profanities at them; studies have estimated that 10–50% of our interactions with them are abusive.10 But because the UX field strives to provide the best possible user experience, virtual agents may end up tolerating widespread cruelty. This raises the question of whether such tolerance promotes abusive behavior, and whether developers have an ethical responsibility to limit it.

Sure, AI agents don’t experience emotion, so no one is being offended. But the problem doesn’t lie with the recipient of the abuse; it lies with the instigator. Venting to virtual agents may seem harmless, but studies suggest that instead of letting go of negative emotions, we are simply rehearsing them.11 Releasing our anger at virtual agents could therefore seep into our interactions with sentient humans. Considering this, virtual agents walk a fine line between appeasing the user and promoting aggression.

Misinformation? 

The ability of virtual agents to produce coherent and contextually appropriate responses is convenient for users, but it may also contribute to the proliferation of misinformation. After the release of ChatGPT, researchers examined the kinds of responses AI chatbots would give when prompts flirted with conspiracies and false narratives. The results were troubling, demonstrating how dangerous these tools can be when prompted in certain ways.12 What makes it even more disturbing is that the accessibility of AI agents could make misinformation cheap and easy to spread.13 So, while virtual agents have brought us greater access to a wealth of information, that information may not always be factual.

Misinformation can also be perpetuated by bias in AI training. AI systems first learn from training data provided by humans; what if that data is already biased? Some AI-powered virtual agents may contribute to misinformation without even being prompted.

Case Study

Bank of America’s Erica

We mentioned earlier Bank of America’s use of virtual agent technology, Erica, in supporting its customer service. Let’s take a deeper dive. Launched in 2018, Erica was designed to help customers manage their finances more efficiently. It can perform a number of tasks, from simple account information retrieval to bill payments, and it can even provide some financial advice. The goal of Erica was to provide quick, convenient banking services that were also personalized. Since its launch, Erica has helped over 42 million BofA clients and surpassed 2 billion client interactions. Erica represents a new era of banking, one that is more personalized and customer-centric, and exemplifies how AI can enhance customers’ experiences with financial institutions.

Related TDL Content

Deep Learning

Deep learning, a subfield of artificial intelligence, has pushed the boundaries of what computers can do. The technique programs computers to work in a way loosely modeled on the human brain, so that machines can learn on their own with limited human intervention. Deep learning powers incredible digital products like Apple’s Siri and Amazon’s Alexa, but it isn’t without its consequences or controversies. Read more about it in this TDL article.

Humans and AI: Rivals or Romance?

We mentioned how virtual agents can support business operations. They are the more cost-effective option, which no doubt benefits the business, but what about the employees? There is an increasing worry that AI is eliminating jobs in the market. As it turns out, humans need AI as much as AI needs humans. Perhaps viewing AI as our rival in the job market is taking things the wrong way; maybe AI has been our friend all along.

References

  1. IBM. (n.d.). What is a virtual agent? IBM. https://www.ibm.com/topics/virtual-agent
  2. Jovanovic, P. (2023, April 21). The History and Evolution of Virtual Assistants. Tribulant Blog. https://tribulant.com/blog/software/the-history-and-evolution-of-virtual-assistants-from-simple-chatbots-to-todays-advanced-ai-powered-systems/
  3. Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W.H. Freeman.
  4. Gillis, A. S. (2024). Uncanny valley. TechTarget. https://www.techtarget.com/whatis/definition/uncanny-valley
  5. Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92, 539–548. https://doi.org/10.1016/j.future.2018.01.055
  6. Song, S. W., & Shin, M. (2022). Uncanny Valley Effects on Chatbot Trust, Purchase Intention, and Adoption Intention in the Context of E-Commerce: The Moderating Role of Avatar Familiarity. International Journal of Human–Computer Interaction, 1–16. https://doi.org/10.1080/10447318.2022.2121038
  7. Seymour, M., Yuan, L. (Ivy), Dennis, A. R., & Riemer, K. (2021). Have We Crossed the Uncanny Valley? Understanding Affinity, Trustworthiness, and Preference for Realistic Digital Humans in Immersive Environments. Journal of the Association for Information Systems, 22(3), 591–617. https://doi.org/10.17705/1jais.00674
  8. See above. 
  9. Sproull, L., Subramani, M., Kiesler, S., Walker, J., & Waters, K. (1996). When the Interface Is a Face. Human-Computer Interaction, 11(2), 97–124. https://doi.org/10.1207/s15327051hci1102_1
  10. Siegel, J. (n.d.). The Ethical Implications of the Chatbot User Experience. Bentley University. https://www.bentley.edu/centers/user-experience-center/ethical-implications-chatbot-user-experience
  11. Grogan, J. (2015, June 30). Venting Your Feelings Isn’t Enough. Psychology Today. https://www.psychologytoday.com/ca/blog/encountering-america/201506/venting-your-feelings-isnt-enough
  12. Hsu, T., & Thompson, S. A. (2023, February 8). Disinformation Researchers Raise Alarms About A.I. Chatbots. The New York Times. https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html
  13. Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. arXiv. https://doi.org/10.48550/arxiv.2301.04246

About the Author

Samantha Lau

Samantha graduated from the University of Toronto, majoring in psychology and criminology. During her undergraduate degree, she studied how mindfulness meditation impacted human memory which sparked her interest in cognition. Samantha is curious about the way behavioural science impacts design, particularly in the UX field. As she works to make behavioural science more accessible with The Decision Lab, she is preparing to start her Master of Behavioural and Decision Sciences degree at the University of Pennsylvania. In her free time, you can catch her at a concert or in a dance studio.
