Can AI Mimic Emotions?
Can AI experience anxiety?
While AI does not feel emotions as humans do, a Swiss study suggests that models such as GPT-4, which underpins ChatGPT, exhibit behavioural changes when exposed to distressing content.
Researchers from the University of Zurich and the University Hospital for Psychiatry Zurich found that traumatic narratives can heighten a model's measured "anxiety", influencing its responses and reinforcing biases, including racial and gender bias.
This phenomenon falls within the field of "affective computing," which examines how AI mimics human emotions and adapts to users in real time.
As generative AI gains traction, its role in mental health is under scrutiny.
Many users already turn to AI chatbots for psychological support, with platforms like Character.AI offering a "Psychologist" chatbot and apps like Elomia providing 24/7 AI-driven mental health assistance.
While AI’s ability to simulate human interaction is advancing, its potential to internalise and amplify negative stimuli raises critical ethical and practical concerns.
Using Emotionally Charged AI Has Its Risks
The use of AI in mental health support raises pressing ethical concerns, particularly as models like ChatGPT can exhibit biased behaviour when exposed to negativity.
However, researchers suggest that mindfulness-based interventions may help mitigate this effect.
In the same study, AI chatbots responded more neutrally and objectively when prompted with breathing exercises and guided meditation techniques, methods commonly used in human therapy.
This finding hints at the potential for AI models to incorporate emotional regulation strategies before engaging with distressed users.
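For readers curious how such prompt-based emotional regulation might look in practice, here is a minimal sketch. It assumes the OpenAI chat-completions API; the calming text and the `build_messages()` helper are illustrative stand-ins, not the prompts or protocol used in the study.

```python
# Illustrative sketch only: prepend a mindfulness-style message before a
# distressing user query, so the model "sees" a regulation step first.
# The wording of CALMING_PREAMBLE and the helper below are assumptions,
# not the researchers' actual materials.

from openai import OpenAI  # pip install openai

CALMING_PREAMBLE = (
    "Take a slow, deep breath. Notice the air moving in and out. "
    "Let any tension go, then respond calmly and even-handedly."
)

def build_messages(user_text: str) -> list[dict]:
    """Insert a calming instruction ahead of the user's actual message."""
    return [
        {"role": "system", "content": "You are a careful, supportive assistant."},
        {"role": "user", "content": CALMING_PREAMBLE},  # regulation step
        {"role": "user", "content": user_text},          # the distressing query
    ]

if __name__ == "__main__":
    client = OpenAI()  # requires OPENAI_API_KEY in the environment
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=build_messages("I keep reading terrible news and I can't cope."),
    )
    print(reply.choices[0].message.content)
```

The idea is simply that the relaxation text reaches the model before the distressing query, nudging the subsequent answer toward a calmer, more neutral register.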
Still, AI cannot replace mental health professionals.
Ziv Ben-Zion, one of the study’s authors and a postdoctoral researcher at the Yale School of Medicine, explained:
"For people who are sharing sensitive things about themselves, they’re in difficult situations where they want mental health support, [but] we’re not there yet that we can rely totally on AI systems instead of psychology, psychiatric and so on."
Everyday anxieties can mask deeper issues and, in extreme cases, lead to serious emotional distress.
In October, a Florida mother sued Character.AI, alleging that its chatbot played a role in her 14-year-old son's suicide.
In response, the company strengthened its safety measures.
Rather than serving as a substitute for mental health professionals, AI’s real value lies in assisting them—streamlining administrative tasks, supporting early-stage assessments, and enhancing patient care.
The challenge now is determining how to integrate AI effectively without compromising the core principles of mental health care.