Multimodal emotion recognition for empathic virtual agents in mental health interventions
Multimedia content not available due to copyright or restricted access. Contact the institution for more information.
| Tag | Ind1 | Ind2 | Value |
|---|---|---|---|
| LDR | | | 00000cab a2200000 4500 |
| 001 | | | MAP20260002712 |
| 003 | | | MAP |
| 005 | | | 20260211190523.0 |
| 008 | | | 260205e20251208esp\|\|\|p \|0\|\|\|b\|eng d |
| 040 | | | $aMAP$bspa$dMAP |
| 084 | | | $a922.134 |
| 100 | | | $0MAPA20260002125$aHuerta Espinoza, Marcelo Alejandro |
| 245 | 1 | 0 | $aMultimodal emotion recognition for empathic virtual agents in mental health interventions$cMarcelo Alejandro Huerta Espinoza, Ansel Y. Rodríguez González and Juan Martinez Miranda |
| 520 | | | $aDepression and anxiety disorders affect millions of individuals globally and are commonly addressed through psychological interventions. A growing technological approach to support such treatments involves the use of embodied conversational agents that employ motivational interviewing, a method that promotes behavioral change through empathic engagement. Despite its critical role in therapeutic efficacy, empathy remains a significant challenge for virtual agents to emulate. Emotion Recognition (ER) technologies offer a potential solution by enabling agents to perceive and respond appropriately to users' emotional states. Given the inherently multimodal nature of human emotion, unimodal ER approaches often fall short in accurately interpreting affective cues. In this work, we propose a multimodal emotion recognition model that integrates verbal and non-verbal signals (text and video) using a Cross-Modal Attention fusion strategy. Trained and evaluated on the IEMOCAP dataset, our approach leverages Ekman's taxonomy of basic emotions and demonstrates superior performance over unimodal baselines across key metrics such as accuracy and F1-score. By prioritizing text as the main modality and dynamically incorporating complementary visual cues, the model proves effective in complex emotion classification tasks. The proposed model is designed for integration into an existing conversational agent aimed at supporting individuals experiencing emotional and psychological distress. Future work will involve embedding the model in the conversational agent platform for emotionally distressed users, aiming to assess its real-world impact on engagement, user experience, and perceived empathy. |
| 650 | | 4 | $0MAPA20080611200$aInteligencia artificial |
| 650 | | 4 | $0MAPA20110010515$aSalud mental |
| 650 | | 4 | $0MAPA20210005503$aMedicina bioelectrónica |
| 650 | | 4 | $0MAPA20080550400$aDepresión |
| 650 | | 4 | $0MAPA20260002156$aAnsiedad |
| 700 | 1 | | $0MAPA20260002132$aRodriguez Gonzalez, Ansel Y. |
| 700 | 1 | | $0MAPA20260002149$aMartinez Miranda, Juan |
| 710 | 2 | | $0MAPA20260002095$aIBERAMIA, Sociedad Iberoamericana de Inteligencia Artificial |
| 773 | 0 | | $wMAP20200034445$g08/12/2025 Volume 28 Number 76 - December 2025 , p. 28 - 39$x1988-3064$tRevista Iberoamericana de Inteligencia Artificial$d : IBERAMIA, Sociedad Iberoamericana de Inteligencia Artificial , 2018- |
| 856 | | | $uhttps://journal.iberamia.org/index.php/intartif/article/view/2508 |
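The abstract describes a Cross-Modal Attention fusion strategy in which text is the main modality and video contributes complementary cues. The sketch below illustrates that general idea only; it is not the authors' implementation. All dimensions (768-d text tokens, 512-d video frames, a shared 128-d space) and projection matrices are hypothetical, and the fusion is a minimal single-head attention where text tokens act as queries over video frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical projections into a shared 128-d space.
W_q = rng.standard_normal((768, 128)) / np.sqrt(768)  # text -> queries
W_k = rng.standard_normal((512, 128)) / np.sqrt(512)  # video -> keys
W_v = rng.standard_normal((512, 128)) / np.sqrt(512)  # video -> values

def cross_modal_attention(text_feats, video_feats):
    """Text tokens (queries) attend over video frames (keys/values),
    then the attended visual context is concatenated to the text features."""
    Q = text_feats @ W_q                                  # (n_text, 128)
    K = video_feats @ W_k                                 # (n_video, 128)
    V = video_feats @ W_v                                 # (n_video, 128)
    weights = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))     # (n_text, n_video)
    attended = weights @ V                                # video info aligned to text
    return np.concatenate([Q, attended], axis=-1)         # fused (n_text, 256)

text = rng.standard_normal((10, 768))    # 10 text tokens
video = rng.standard_normal((30, 512))   # 30 video frames
fused = cross_modal_attention(text, video)
print(fused.shape)  # (10, 256)
```

In a full model the fused representation would feed a classifier over Ekman's basic emotion categories; keeping text as the query side matches the abstract's choice of text as the primary modality.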