Empathy, bias, and data responsibility: evaluating AI chatbots for gender-based violence support

dc.contributor.author: Sanz Urquijo, Borja
dc.contributor.author: López Belloso, María
dc.contributor.author: Izaguirre Choperena, Ainhoa
dc.date.accessioned: 2025-09-26T10:31:08Z
dc.date.available: 2025-09-26T10:31:08Z
dc.date.issued: 2025-07-30
dc.date.updated: 2025-09-26T10:31:08Z
dc.description.abstract: Artificial Intelligence (AI) chatbots are increasingly deployed as support tools in sensitive domains such as gender-based violence (GBV). This study evaluates the performance of three conversational AI models in providing first-line assistance to women affected by GBV: a general-purpose Large Language Model (ChatGPT), an open-source model (LLaMA), and a specialized chatbot (AinoAid). Drawing on findings from the European IMPROVE project, the research uses a mixed-methods design combining qualitative narrative interviews with 30 survivors in Spain and quantitative natural language processing metrics. Chatbots were assessed through scenario-based simulations across the GBV cycle, with prompts designed via the Systematic Context Construction and Behavior Specification method to ensure ethical and empathetic alignment. Results reveal significant differences in emotional resonance, response quality, and gender bias handling, with ChatGPT showing the most empathetic engagement and AinoAid offering contextually precise guidance. However, all models lacked intersectional sensitivity and proactive attention to privacy. These findings highlight the importance of trauma-informed design and qualitative grounding in developing responsible AI for GBV support.
dc.description.sponsorship: This research was funded by the European Union, HORIZON Europe Innovation Actions, Grant Agreement number 101074010
dc.identifier.citation: Sanz Urquijo, B., López Belloso, M., & Izaguirre-Choperena, A. (2025). Empathy, bias, and data responsibility: evaluating AI chatbots for gender-based violence support. Frontiers in Political Science, 7. https://doi.org/10.3389/FPOS.2025.1631881
dc.identifier.doi: 10.3389/FPOS.2025.1631881
dc.identifier.eissn: 2673-3145
dc.identifier.uri: https://hdl.handle.net/20.500.14454/3757
dc.language.iso: eng
dc.publisher: Frontiers Media SA
dc.rights: © 2025 Sanz Urquijo, López Belloso and Izaguirre-Choperena
dc.subject.other: AI biases
dc.subject.other: Artificial intelligence (AI)
dc.subject.other: Chatbots
dc.subject.other: Gender-based violence (GBV)
dc.subject.other: IMPROVE European project
dc.subject.other: Model evaluation
dc.subject.other: Prompt design
dc.subject.other: Quality of empathic responses
dc.title: Empathy, bias, and data responsibility: evaluating AI chatbots for gender-based violence support
dc.type: journal article
dcterms.accessRights: open access
oaire.citation.title: Frontiers in Political Science
oaire.citation.volume: 7
oaire.licenseCondition: https://creativecommons.org/licenses/by/4.0/
oaire.version: VoR
Files
Original bundle
Name: sanz_empathy_2025.pdf
Size: 1.05 MB
Format: Adobe Portable Document Format