Pressing need for ethical and regulatory oversight of therapeutic voice AI, SFU expert urges
Reprinted with permission from SFU News
by Robyn Stubbs
As voice artificial intelligence (AI) speeds toward use in clinical settings, a researcher from Simon Fraser University (SFU) is highlighting the urgent need for ethical, legal, and social oversight, especially in therapeutic care.
Voice AI analyzes vocal patterns to detect signs of physical, cognitive, and mental health conditions, drawing on qualities such as pitch, jitter, fluency, and the specific words people use. Some tech companies have even dubbed it "the new blood" of healthcare because of its potential to act as a biomarker, but SFU health sciences researcher Zoha Khawaja urges caution.
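To make the idea of vocal biomarkers concrete, the sketch below shows how two of the qualities mentioned above, pitch and jitter, can be measured from a recording. It assumes the open-source parselmouth library (a Python interface to the Praat phonetics toolkit) and a hypothetical local file, voice_sample.wav; the analysis parameters are common illustrative defaults, not details from Khawaja's study or any commercial product.

```python
# Minimal sketch of acoustic-feature extraction of the kind voice AI relies on.
# Assumes: pip install praat-parselmouth, and a hypothetical voice_sample.wav.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("voice_sample.wav")  # hypothetical local recording

# Pitch: the fundamental frequency contour of the voice, summarized as a mean.
pitch = snd.to_pitch()
mean_pitch_hz = call(pitch, "Get mean", 0, 0, "Hertz")

# Jitter: cycle-to-cycle variation in vocal-fold periods, computed from
# glottal pulses detected in the waveform (75-500 Hz is a typical speech range).
pulses = call(snd, "To PointProcess (periodic, cc)", 75, 500)
local_jitter = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)

print(f"Mean pitch: {mean_pitch_hz:.1f} Hz, local jitter: {local_jitter:.4f}")
```

A real system would feed dozens of such features into a trained model rather than interpret any single number; this fragment illustrates only the measurement step.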
In her study, Khawaja explores the potential and the perils of voice-based AI apps in the mental health field.
Khawaja's study used structured, multi-round surveys to gather insights from 13 stakeholders, including clinicians, ethicists, and patients. While 77 per cent of participants supported using voice AI to improve patient outcomes, 92 per cent agreed that governance models should be established by healthcare or governmental organizations to oversee its integration.
"Voice AI holds real promise as an objective tool in the mental health field, which has always relied on subjective diagnostics like self-reporting and interviews," says Khawaja. "But the entrepreneurial speed of the tech is outpacing regulatory oversight in a high-stakes environment like healthcare."
Some companies already offer apps that analyze short voice samples to assess mental fitness. However, Khawaja warns that these tools often operate in a "wellness gray zone," avoiding classification as medical devices and sidestepping privacy protections.
"There's a real risk of therapeutic misconception, where people may believe these apps are providing clinical diagnoses or treatment when in fact they're not," Khawaja explains. "That's particularly dangerous for vulnerable users who may not have access to traditional care."
Key concerns raised by participants included algorithmic bias, lack of transparency, erosion of human connection in care, and unclear accountability. The study advocates for a "digital compassionate care" approach, where AI tools support, not replace, human relationships in therapy.
"Patients might feel safer talking to a chatbot than a person," Khawaja says. "But that can lead to overreliance and isolation. These tools should strengthen the clinician-patient bond, not undermine it."
She also recommends a shared-responsibility model among developers, regulators, and healthcare providers to prevent "ethics dumping," the unfair shifting of ethical burdens onto clinicians. Notably, 83 per cent of participants agreed that healthcare practitioners should be held accountable for adverse events resulting from the use of voice AI tools.
"But clinicians are already overburdened," Khawaja says. "Expecting them to bear the ultimate responsibility for these technologies is unrealistic."
Clinical trials to validate voice as a biomarker are currently underway in the U.S., where regulatory sandboxes (controlled environments for testing new technologies) are being proposed to anticipate ethical challenges and inform policy before voice AI enters clinical practice.