
Documenting the Larger Dilemma About Using AI to Support Mental Health

Iris Telehealth, a leading provider of transformative behavioral health services for health systems and community healthcare organizations, has officially published the results from its 2025 AI & Mental Health Emergencies Survey.

The survey gathered the opinions of more than 1,000 consumers, examining public perceptions of AI in identifying and responding to high-risk mental health cases, including trust in the technology, expected follow-up actions, and the role humans should play in high-stakes decisions.

As for the results, they reveal that while respondents recognize AI’s potential to expedite the detection of behavioral health crises, they continue to oppose any model in which AI makes final care decisions without human involvement.

Taking a slightly deeper look at the published results, we begin with how privacy concerns proved to be less of a barrier than expected. This conclusion follows from the fact that nearly half of consumers (49%) said they would allow AI to monitor facial expressions, voice tone, and typing patterns if it meant earlier detection of mental health risk.

Next up, we must dig into how human oversight continues to hold significant importance despite AI’s growing adoption. A staggering 73% claimed that human providers should make the final call in AI-flagged emergencies, whereas no more than 8% would trust AI to act independently.

We still have a couple of findings left to unpack: 21% view AI as innovative and potentially life-saving, but most respondents emphasize caution, citing concerns about false positives (30%) and overreliance on technology over human connection (23%).

Rounding out the highlights, the two most preferred responses in the event a crisis is detected are notifying a trusted friend or family member (28%) and receiving a call from a counselor within 30 minutes (27%).

Among other things, it ought to be mentioned that, from a demographic standpoint, men were found to be more open to AI detection (56% would use automatic monitoring vs. 41% of women). Women, on the other hand, were more likely to insist that providers make the final decision (78% vs. 68%).

Furthermore, nearly one-third of millennials and Gen Z feel “very comfortable” with AI monitoring, compared to just 5% of boomers.

If we focus on the income demographic, we would learn that lower-income consumers are more receptive (61% of those earning $25k or less would use AI monitoring) than high earners (44%).

“Our findings should serve as a call to action for healthcare leaders,” said Andy Flanagan, CEO of Iris Telehealth. “Consumers are willing to accept certain privacy trade-offs if it means faster, more effective intervention, but they are equally clear that human oversight must remain central to care. AI can and should accelerate detection, but trust, accountability, and clinical judgement should remain in human hands — for now, at least.”