Sunday, December 22, 2024
BusinessDay

The risks of using AI to interpret human emotions

What do people really feel?

This has never been an easy thing for companies to determine. For one thing, emotions are inherently difficult to read. For another, there’s often a disconnect between what people say they feel and what they actually feel.

A lot of companies use focus groups and surveys to understand how people feel. Now, emotional artificial intelligence technology can help businesses capture emotional reactions in real time by decoding facial expressions, analyzing voice patterns, monitoring eye movements and measuring neurological immersion levels, for example. The ultimate outcome is a much better understanding of their customers, and even their employees.

Because of the subjective nature of emotions, emotional AI is especially prone to bias. One study found that emotional analysis technology assigns more negative emotions to people of certain ethnicities than to others. AI is often also not sophisticated enough to understand cultural differences in expressing and reading emotions, making it harder to draw accurate conclusions. A smile might mean one thing in Germany and another in Japan. If left unaddressed, conscious or unconscious emotional bias can perpetuate stereotypes and assumptions at an unprecedented scale.

Based on our research and experience working with global clients, we see businesses using emotional AI technology in four ways:

— UNDERSTANDING HOW EMOTIONALLY ENGAGED EMPLOYEES ACTUALLY ARE: When AI is used to gauge employee emotions, it can have serious effects on how work is allocated. Employees often think they’re in the right role, but upon trying new projects might find their skills are better aligned elsewhere. Some companies are already allowing employees to try different roles once a month to see what jobs they like most. Here’s where bias in AI could reinforce existing stereotypes. In the U.S., where 89% of civil engineers and 81% of first-line police and detective supervisors are male, an algorithm that has been conditioned to analyze male features might struggle to read emotional responses and engagement levels among female recruits. This could lead to flawed role allocation and training decisions.

— IMPROVING THE ABILITY TO CREATE PRODUCTS THAT ADAPT TO CONSUMER EMOTIONS: With emotion tracking, product developers can learn which features elicit the most excitement and engagement in users. Take, for example, Affectiva’s Auto AI platform, which can recognize emotions like joy and anger and adapt a vehicle’s in-cabin environment accordingly. Cameras and microphones can pick up on driver drowsiness, and the system may lower the temperature or jolt the seat belt in response. A smart assistant might change its tone in response to a frustrated passenger. (A rough sketch of this kind of rule-based adaptation appears after this list of examples.) But a biased adaptive in-cabin environment could mean that some passengers are misunderstood. Elderly people might be more likely to be wrongly identified as having driver fatigue (the older the face, the less accurately expressions tend to be decoded). This could lead to higher insurance premiums for older people, as the data would suggest that, despite many prompts to rest, the driver pressed on.

— IMPROVING TOOLS TO MEASURE CUSTOMER SATISFACTION: Companies like the Boston startup Cogito are giving businesses the tools to help their employees interact better with customers. Its algorithms can not only identify “compassion fatigue” in customer service agents but can also guide agents on how to respond to callers via an app. An upset customer might, for example, call to complain about a product. Recording and analyzing the conversation, Cogito’s platform would then suggest that the agent slow down, or prompt them on when to display empathy. A biased algorithm, perhaps skewed by an accent or a deeper voice, might result in some customers being treated better than others, pushing those bearing the brunt of bad treatment away from the brand. A male caller could be subject to less empathy than a female caller, reinforcing societal perceptions of men as “emotionally strong.” Or a female caller might be viewed as a less tough negotiator, resulting in less compensation being offered. The agents themselves may not hold these biases, but, clouded by the misconception that the algorithms are highly accurate, they may follow the software’s advice blindly. In this way, biases spread.

— TRANSFORMING THE LEARNING EXPERIENCE: Emotional insights could be used to augment the learning experience across all ages. They could allow teachers to design lessons that spur maximum engagement, putting key information at engagement peaks and switching content at troughs. They could also offer insights into individual students, helping to identify who needs more attention. China is already introducing emotion-detection systems into classrooms to track students’ focus. But if biases exist, wrongly flagging some students as disengaged could result in learning experiences tailored toward certain groups rather than others, given differences in learning styles. Incorrect engagement readings could affect learning outcomes all the way into the workplace, meaning that even in work training programs, only a fraction of employees might enjoy full professional development.
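
As a rough illustration of the kind of rule-based adaptation described in the in-cabin example above, the sketch below maps hypothetical drowsiness and frustration scores to cabin actions. The class, function, thresholds and score ranges are assumptions made for illustration; they are not Affectiva’s actual API or any vendor’s product logic.

```python
# Hypothetical sketch: mapping in-cabin emotion/drowsiness scores to actions.
# All names, thresholds and score ranges are illustrative assumptions,
# not Affectiva's (or any other vendor's) actual API.
from dataclasses import dataclass


@dataclass
class CabinReading:
    drowsiness: float   # 0.0 (alert) .. 1.0 (asleep), assumed scale
    frustration: float  # 0.0 (calm)  .. 1.0 (angry), assumed scale


def choose_actions(reading: CabinReading) -> list[str]:
    """Return a list of cabin adjustments for a single sensor reading."""
    actions = []
    if reading.drowsiness > 0.7:
        # Drowsy driver: cool the cabin and give a physical alert.
        actions.append("lower_cabin_temperature")
        actions.append("jolt_seat_belt")
    if reading.frustration > 0.6:
        # Frustrated occupant: soften the assistant's voice prompts.
        actions.append("switch_assistant_to_calm_tone")
    return actions


if __name__ == "__main__":
    print(choose_actions(CabinReading(drowsiness=0.8, frustration=0.2)))
    # e.g. ['lower_cabin_temperature', 'jolt_seat_belt']
```

The bias concern raised above lives in the scores feeding such rules: if drowsiness is systematically overestimated for older faces, the same simple logic produces systematically different treatment.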

Some technologies are better than others at tracking certain emotions, so combining these technologies could help to mitigate bias. A Nielsen study testing the accuracy of neuroscience technologies such as facial coding, biometrics and electroencephalography found that, used alone, their accuracy levels were 9%, 27% and 62% respectively. When combined, accuracy shot up to 77%. Testing the results with a survey brought this up to 84%. Such combinations therefore serve as a check on the accuracy of results.
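
One simple way to picture such a combination is to treat each technology’s output as one signal and blend them, using the survey as a cross-check. The sketch below does this with a weighted average; the scores, weights and survey figure are invented placeholders rather than Nielsen’s data, and a real system would calibrate them empirically.

```python
# Illustrative sketch of combining per-modality emotion scores.
# Scores, weights and the survey figure are invented placeholders,
# not Nielsen's results.

def combine_modalities(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of emotion-intensity scores from several modalities."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight


# Hypothetical per-modality "positive emotion" scores for one ad exposure.
scores = {"facial_coding": 0.42, "biometrics": 0.55, "eeg": 0.61}

# Hypothetical weights, e.g. reflecting how much each signal is trusted.
weights = {"facial_coding": 0.2, "biometrics": 0.3, "eeg": 0.5}

combined = combine_modalities(scores, weights)
print(f"combined score: {combined:.2f}")

# A follow-up survey score can then serve as a sanity check on the ensemble.
survey_score = 0.58
print(f"gap vs. survey: {abs(combined - survey_score):.2f}")
```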

Having diverse teams creating emotional AI algorithms will be crucial to keeping bias at bay and fully capturing the complexity of emotions. This means not just gender and ethnic diversity, but also diversity in socioeconomic status and views, guarding against everything from xenophobia to homophobia to ageism. The more diverse the inputs and data points, the more likely it is that we’ll be able to develop AI that’s fair and unbiased.
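
A basic way to make such bias visible during development is to compare a model’s error rate across demographic groups on a labeled evaluation set, as in the sketch below. The records, group labels and review threshold are made-up assumptions for illustration; a real audit would use a properly sampled dataset and more careful fairness metrics.

```python
# Hypothetical bias audit: compare an emotion model's error rate by group.
# The records, group labels and flagging threshold are illustrative only.
from collections import defaultdict

# Each record holds the person's demographic group, the labeled emotion,
# and the model's prediction. In practice this would come from a labeled
# evaluation set, not be hard-coded.
records = [
    {"group": "A", "label": "happy",   "predicted": "happy"},
    {"group": "A", "label": "neutral", "predicted": "neutral"},
    {"group": "B", "label": "happy",   "predicted": "angry"},
    {"group": "B", "label": "neutral", "predicted": "sad"},
    {"group": "B", "label": "happy",   "predicted": "happy"},
]

errors = defaultdict(int)
totals = defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    if r["predicted"] != r["label"]:
        errors[r["group"]] += 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    flag = "  <-- review" if rate > 0.25 else ""
    print(f"group {group}: error rate {rate:.0%}{flag}")
```

A large gap between groups, as in this toy data, is the kind of signal that should send a team back to its training data before the model gets near customers or employees.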

Companies will also need to be vigilant about not perpetuating historical biases when training emotional AI. While historical data might be used as a basis to train AI on different emotional states, real-time data will be needed for context. Take smiles, for example. One study showed that of the 19 different types of smile, only six happen when people are having a good time. We also smile when we are in pain, embarrassed and uncomfortable — distinctions that can be drawn only with context.

Emotional AI will be a powerful tool, forcing businesses to reconsider their relationships with consumers and employees, and redefining products as we know them. As businesses make forays into the world of emotional intelligence, preventing biases from seeping in will be essential. Failure to act will leave certain groups systematically more misunderstood than ever, a far cry from the promises offered by emotional AI.

Mark Purdy is managing director with Accenture Research, where Omaro Maseli is a senior analyst. John Zealley is senior managing director and customer insight and growth global lead at Accenture.
