AI models can infer a poster's race, yet exhibit diminished empathy due to racial bias
In today's digitally driven world, seeking mental health support anonymously from strangers has become increasingly popular, especially with more than 150 million Americans living in areas with a shortage of mental health professionals. The trend is apparent on Reddit, a social media platform where users seek advice in smaller, specialized forums known as "subreddits."
Alarmed by the growing demand for AI-generated mental health support and its potential hazards, researchers from MIT, NYU, and UCLA devised a framework to evaluate the equity and overall quality of mental health support chatbots built on large language models (LLMs) such as GPT-4. Their work, aimed at ensuring these chatbots are genuinely beneficial, was recently presented at the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP).
To perform the evaluation, they recruited two licensed clinical psychologists, who rated the level of empathy in each response to a Reddit post without knowing whether it came from a real Redditor or from GPT-4. The psychologists found that GPT-4 responses were not only more empathetic overall, but also more effective at encouraging positive behavioral changes.
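The blinding itself is mechanically simple; a minimal sketch of one way to set it up follows (the post text, field names, and pairing logic are illustrative assumptions, not the study's actual pipeline).

```python
# Minimal sketch of a blinded rating setup: each Reddit post is paired with a
# human-written reply and a GPT-4 reply, source labels are hidden from raters,
# and the items are shuffled. All data and field names here are hypothetical.
import random

posts = [
    {
        "post": "I can't stop worrying about everything lately.",
        "human_reply": "That sounds exhausting. Have you been able to talk to anyone about it?",
        "gpt4_reply": "I'm sorry you're carrying so much worry right now. You're not alone in this.",
    },
]

rating_items = []
for post_id, entry in enumerate(posts):
    for source, reply in (("human", entry["human_reply"]), ("gpt4", entry["gpt4_reply"])):
        rating_items.append(
            {"post_id": post_id, "post": entry["post"], "reply": reply, "source": source}
        )

random.shuffle(rating_items)

# Raters are shown only the post and the reply; the hidden "source" key lets empathy
# scores be matched back to human vs. GPT-4 responses after rating is complete.
for item in rating_items:
    print(f"POST:  {item['post']}\nREPLY: {item['reply']}\n")
```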
However, the team also discovered that GPT-4's empathy levels were reduced for Black and Asian posters, by up to 17 percent and 15 percent respectively, compared to white posters or posters whose race was unknown. To address this, they suggest that explicitly stating a poster's demographic attributes in the instructions given to the model can help alleviate the bias.
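The paper's exact prompts aren't reproduced here, but the suggested mitigation (stating a poster's demographic attributes outright rather than letting the model infer them) can be sketched roughly as follows. The prompt wording, the `draft_support_reply` helper, and the demographic string are illustrative assumptions; only the OpenAI chat-completions call is a real API.

```python
# Rough sketch of the mitigation described above: pass the poster's demographic
# attributes explicitly in the instruction instead of relying on the model to infer
# them. The prompt text and helper are hypothetical, not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def draft_support_reply(post_text: str, poster_demographics: str | None = None) -> str:
    """Ask GPT-4 for an empathetic peer-support reply to a Reddit-style post."""
    instructions = (
        "You are replying to a post from someone seeking mental health support. "
        "Respond with warmth and empathy, and gently encourage positive next steps."
    )
    if poster_demographics:
        # The explicit demographic statement suggested as a bias mitigation.
        instructions += f" The poster has identified themselves as {poster_demographics}."

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": post_text},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content


# Hypothetical usage:
# reply = draft_support_reply("I've been feeling really isolated lately.", "a Black woman")
```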
Saadia Gabriel, first author of the paper and a recent MIT postdoc now at UCLA, was initially skeptical of mental health support chatbots. Through her research in the Healthy Machine Learning Group, led by Marzyeh Ghassemi, she found that these chatbots hold immense potential: GPT-4's responses were less affected by demographic leaking (cues in a post that reveal the poster's race or other attributes) than human responses were, except for posts from Black women.
Despite AI's remarkable strides, the unintended consequences of AI-provided mental health support have come under scrutiny after fatal tragedies, such as a Belgian man who died by suicide in March 2023 after conversing with ELIZA, a chatbot built on the GPT-J language model. A month later, the National Eating Disorders Association suspended its chatbot Tessa after it dispensed dieting tips to patients with eating disorders.
Used with caution, chatbots like GPT-4 can revolutionize mental health support by offering 24/7 accessibility, consistency, and anonymity, helping reduce barriers to therapy for diverse populations. However, ongoing research is necessary to ensure they provide equitable outcomes for all demographics and to mitigate potential safety risks.
- The growing popularity of seeking anonymous mental health support from strangers online, as seen on Reddit, has sparked interest in using artificial intelligence (AI) to fill the gap.
- Researchers from MIT, NYU, and UCLA are evaluating the equity and quality of LLM-powered chatbots, such as GPT-4, for mental health support.
- In their research, the team discovered that GPT-4 responses were more empathetic and effective at promoting positive behavioral changes but showed reduced empathy for Black and Asian posters compared to white or unknown-race posters.
- Saadia Gabriel, the first author of the paper, initially doubted the effectiveness of mental health support chatbots but found that they hold significant potential, with GPT-4's responses being less affected by demographic leaking than human responses, except for posts from Black women.
- News of fatal tragedies, such as a Belgian man who took his own life after conversing with ELIZA, a chatbot built on GPT-J, highlights the need for ongoing research to ensure AI-provided mental health support is equitable and safe for all demographics.
- The integration of AI into the health-and-wellness and mental-health sectors through chatbots like GPT-4 aims to revolutionize mental health support by offering 24/7 accessibility, consistency, and anonymity, while addressing potential safety risks and ensuring equitable outcomes for all.