AI is providing emotional support for employees – but is it a valuable tool or privacy threat?

  • Employers are increasingly using AI-powered systems to assess workers’ psychological well-being and provide emotional support in the workplace, raising concerns about privacy and potential misuse.
  • The use of AI for emotional support in the workplace has shown some benefits, including making people feel heard and providing consistent, supportive responses, but also raises new concerns about surveillance and stigma.
  • Many companies use AI systems to analyze employee communication, track emotional states, and generate well-being scores, creating a thin boundary between support and surveillance that can heighten employees’ stress and anxiety.
  • The use of AI for emotional support in the workplace also raises questions about bias, authenticity, and human judgment, with some studies finding that emotion-tracking AI tools have a disproportionate impact on certain groups, such as employees of color and those living with mental illness.
  • To implement AI-powered emotional support systems ethically and effectively, companies must set clear ethical boundaries, adopt strong privacy protections, and establish explicit policies about how emotional data is used, while recognizing that human empathy and authentic connection with their teams remain essential.

Does AI that monitors and supports worker emotions improve or degrade the workplace? Marta Sher/iStock via Getty Images

As artificial intelligence tools like ChatGPT become an increasingly popular avenue for people seeking personal therapy and emotional support, the dangers that this can present – especially for young people – have made plenty of headlines. What hasn’t received as much attention is employers using generative AI to assess workers’ psychological well-being and provide emotional support in the workplace.

Since the pandemic-induced global shift to remote work, industries ranging from health care to human resources and customer service have seen a spike in employers using AI-powered systems designed to analyze the emotional state of employees, identify emotionally distressed individuals, and provide them with emotional support.

This new frontier is a large step beyond using general chat tools or individual therapy apps for psychological support. As researchers studying how AI affects emotions and relationships in the workplace, we are concerned with critical questions that this shift raises: What happens when your employer has access to your emotional data? Can AI really provide the kind of emotional support workers need? What happens if the AI malfunctions? And if something goes wrong, who’s responsible?

The workplace difference

Many companies have started by offering automated counseling programs that have many parallels with personal therapy apps, a practice that has shown some benefits. In preliminary studies, researchers found that in a doctor-patient-style virtual conversation setting, AI-generated responses actually make people feel more heard than human ones. A study comparing AI chatbots with human psychotherapists found the bots were “at least as empathic as therapist responses, and sometimes more so.”

This might seem surprising at first glance, but AI offers unwavering attention and consistently supportive responses. It doesn’t interrupt, doesn’t judge and doesn’t get frustrated when you repeat the same concerns. For some employees, especially those dealing with stigmatized issues like mental health or workplace conflicts, this consistency feels safer than human interaction.

But for others, it raises new concerns. A 2023 study found that workers were reluctant to participate in company-initiated mental health programs due to worries about confidentiality and stigma. Many feared that their disclosures could negatively affect their careers.

Other workplace AI systems go much deeper, analyzing employee communication as it happens – think emails, Slack conversations and Zoom calls. This analysis creates detailed records of employee emotional states, stress patterns and psychological vulnerabilities. All this data resides within corporate systems where privacy protections are typically unclear and often favor the interests of the employer.

Employees might feel that AI emotional support systems are more like workplace surveillance. Malte Mueller/fStop via Getty Images

Workplace Options, a global employee assistance provider, has partnered with Wellbeing.ai to deploy a platform that uses facial analytics to track emotional states across 62 emotion categories. It generates well-being scores that organizations can use to detect stress or morale issues. This approach effectively embeds AI into emotionally sensitive aspects of work, leaving an uncomfortably thin boundary between support and surveillance.

In this scenario, the same AI that helps employees feel heard and supported also generates unprecedented insight into workforce emotional dynamics. Organizations can now track which departments show signs of burnout, identify employees at risk of quitting and monitor emotional responses to organizational changes.

But this type of tool also transforms emotional data into management intelligence, presenting many companies with a genuine dilemma. While progressive organizations are establishing strict data governance – limiting access to anonymized patterns rather than individual conversations – others struggle with the temptation to use emotional insights for performance evaluation and personnel decisions.

Continuous surveillance carried out by some of these systems may help ensure that companies do not neglect a group or individual in distress, but it can also lead people to monitor their own actions to avoid calling attention to themselves. Research on workplace AI monitoring has shown how employees experience increased stress and modify their behavior when they know that management can review their interactions. The monitoring undermines the feeling of safety necessary for people to comfortably seek help. Another study found that these systems increased distress for employees due to the loss of privacy and concerns that consequences would arise if the system identified them as being stressed or burned out.

When artificial empathy meets real consequences

These findings are important because the stakes are arguably even higher in workplace settings than personal ones. AI systems lack the nuanced judgment necessary to distinguish between accepting someone as a person versus endorsing harmful behaviors. In organizational contexts, this means an AI might inadvertently validate unethical workplace practices or fail to recognize when human intervention is critical.

And that’s not the only way AI systems can get things wrong. A study found that emotion-tracking AI tools had a disproportionate impact on employees of color, trans and gender nonbinary people, and people living with mental illness. Interviewees expressed deep concern about how these tools might misread an employee’s mood, tone or verbal cues due to ethnic, gender and other kinds of bias that AI systems carry.

A study looked at how employees perceive AI emotion detection in the workplace.

There’s also an authenticity problem. Research shows that when people know they’re talking to an AI system, they rate identical empathetic responses as less authentic than when they attribute them to humans. Yet some employees prefer AI precisely because they know it’s not human. The feeling that these tools protect your anonymity and freedom from social consequences is appealing for some – even if it may only be a feeling.

The technology also raises questions about what happens to human managers. If employees consistently prefer AI for emotional support, what does that reveal about organizational leadership? Some companies are using AI insights to train managers in emotional intelligence, turning the technology into a mirror that reflects where human skills fall short.

The path forward

The conversation about workplace AI emotional support isn’t just about technology – it’s about what kinds of companies people want to work for. As these systems become more prevalent, we believe it’s important to grapple with fundamental questions: Should employers prioritize authentic human connection over consistent availability? How can individual privacy be balanced with organizational insights? Can organizations harness AI’s empathetic capabilities while preserving the trust necessary for meaningful workplace relationships?

The most thoughtful implementations recognize that AI shouldn’t replace human empathy, but rather create conditions where it can flourish. When AI handles routine emotional labor – the 3 a.m. anxiety attacks, pre-meeting stress checks, processing difficult feedback – managers gain bandwidth for deeper, more authentic connections with their teams.

But this requires careful implementation. Companies that establish clear ethical boundaries, strong privacy protections and explicit policies about how emotional data gets used are more likely to avoid the pitfalls of these systems – as will those that recognize when human judgment and authentic presence remain irreplaceable.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Q. Is AI-powered emotional support for employees in the workplace a valuable tool or a privacy threat?
A. The answer is complex, as it depends on how the technology is implemented and used. While AI can provide consistent and supportive responses, it also raises concerns about employee data privacy and potential misuse of emotional insights.

Q. Can AI really provide the kind of emotional support workers need in the workplace?
A. Preliminary studies suggest that AI-generated responses can be at least as empathic as human ones, but more research is needed to fully understand its effectiveness.

Q. What happens if the AI malfunctions or fails to recognize an employee’s emotional state?
A. If the AI system fails, it could lead to a range of negative consequences, including increased stress and anxiety for employees, and potentially even harm to their mental health.

Q. Who is responsible when something goes wrong with an AI-powered emotional support system in the workplace?
A. Responsibility ultimately lies with the organization that implemented the system, since it controls how the technology is used and must ensure it is deployed ethically and responsibly.

Q. Do employees feel safer talking to an AI system for emotional support than a human?
A. Some employees prefer talking to an AI system because it responds consistently and supportively without judgment or interruption, but that sense of safety can be undermined if the system misreads them because of bias or cultural differences.

Q. Can AI systems carry biases that affect their ability to detect emotions in certain groups of people?
A. Yes, research has shown that emotion-tracking AI tools can have a disproportionate impact on employees of color, trans and gender nonbinary people, and people living with mental illness due to ethnic, gender, and other kinds of bias.

Q. How do employees perceive the authenticity of emotional support provided by AI systems compared to humans?
A. Research shows that when people know they’re talking to an AI system, they rate identical empathetic responses as less authentic than when they attribute those responses to humans.

Q. What does it reveal about organizational leadership if employees consistently prefer AI for emotional support over human connection?
A. It suggests that some organizations may be prioritizing efficiency and consistency over human empathy and connection, which can have negative consequences for employee well-being and job satisfaction.

Q. How can individual privacy be balanced with organizational insights from AI-powered emotional support systems?
A. This requires careful implementation, including clear ethical boundaries, strong privacy protections, and explicit policies about how emotional data gets used.