Do androids dream of lived experience? A call for human connection in collaborative research amidst the growth of AI
Date Submitted: Mar 30, 2026
Open Peer Review Period: Mar 31, 2026 - May 26, 2026
Professionals, leaders, and institutions in healthcare and health research are rapidly adopting and integrating AI systems and chatbots into their regular work, but this poses risks for patients in the context of patient and public involvement and engagement (PPIE). AI offers economical solutions for overstretched health systems and burned-out staff, already shows strengths in speeding up time-consuming and detail-oriented research practices, and can provide unique accessibility accommodations. However, AI can also be used to create personas and virtual PPIE panels that speak wholly or partially in place of human patients with lived experience of conditions, thus minimising, distorting, or erasing their voices in collaborative research processes. AI poses risks through several distorting factors, including hallucinations, overconfidence, sycophancy, bias, sexism, and racism. Staley and Barron have argued that learning is the greatest outcome of PPIE. However, if researchers, professionals, and staff use AI chatbots in conjunction with or in lieu of human collaborators, the amount of learning that takes place is greatly reduced, according to AI expert and cultural critic Ethan Mollick. In conclusion, we provide a checklist to guide professionals and researchers in ethical and responsible uses of AI that preserve the voices and roles of patients, members of the public, and people with lived experience.
