Research project
Barriers and solutions to responsible applications of AI in mental health across cultures
- Start date: 28 February 2026
- End date: 27 February 2027
- Principal investigator: Skylar Wan
- Co-investigators: Dr Stuart W Flint, University of Leeds, UK; Dr Peikai Li, University of Leeds, UK; Dr Rebecca E Hudson Breen, University of Alberta, Canada; Dr Phoenix KH Mo, The Chinese University of Hong Kong; Dr Nii-Boye Emmanuel Quarshie, University of Ghana; Dr Memory Muturiki, University of Cape Town, South Africa; Dr David Lim, University of Technology Sydney, Australia; Dr Kris Mikel-Hong, Renmin University of China; Dr Xiaoya Xun, Shenzhen University, China & University of Cambridge, United Kingdom; Ms Ruiying Zhao, Vrije Universiteit Amsterdam, The Netherlands.
Description
Mental health conditions affect 970 million people worldwide, yet treatment access remains severely limited. WHO data shows 71% of individuals with psychosis receive no professional support. This gap is most severe in low-income countries, perpetuating poverty cycles and representing a pressing health equity challenge.
AI could help address mental health workforce shortages. However, without proper guidelines, AI can cause harm. Patients raise concerns about privacy, safety, and accountability in AI-based care. Current AI tools reflect their training populations, drawn mainly from high-income, English-speaking countries. When deployed elsewhere, they lack cultural sensitivity and fail to meet diverse expectations for responsible AI.
While general ethical guidelines for AI exist, none are specific to mental health, and none provide the comparative analysis essential for frameworks that are globally applicable yet locally relevant. No research has systematically developed operational AI guidelines for mental health across diverse cultures. Without cultural tailoring, we risk widening mental health disparities.
This project moves beyond general principles to generate evidence-based, culturally sensitive operational frameworks for responsible AI deployment in mental health support. Rather than imposing definitions, we build frameworks through direct engagement with service users, caregivers, clinicians, policymakers, and tech developers across economic and cultural contexts.
Research overview
This project aims to develop evidence-based, culturally sensitive operational guidelines for responsible AI deployment in mental health care. WUN’s global network enables engagement with diverse populations and healthcare systems across multiple continents, ensuring the resulting guidelines reflect varied cultural contexts rather than imposing Western-centric standards. This framework will enable healthcare systems worldwide to harness AI capabilities while maintaining ethical standards and cultural sensitivity.
Using mixed methods, we will:
- Conduct a systematic review and meta-analysis identifying barriers to responsible AI implementation across cultures
- Engage diverse stakeholders (service users, caregivers, clinicians, policymakers, and technology developers) across seven countries (Ghana, South Africa, China, the UK, the Netherlands, Australia, and Canada), spanning five continents, to explore culture-specific expectations
- Synthesise findings into culturally adapted operational guidelines
Funding
This project is funded by the Worldwide Universities Network (WUN).