Over one billion people now live with mental health disorders, according to a 2025 World Health Organization report, at a cost to the global economy of more than US$1 trillion annually. At the same time, recent estimates suggest that over 800 million people are already using ChatGPT, and that 70 percent of interactions with the chatbot are for non-work purposes. Research I have co-authored documents that around a quarter of those users are turning to large language models (LLMs) for mental health support.
A massive, uncontrolled real-world experiment in AI and human psychology is already underway. People aren’t waiting for researchers or regulators to tell them whether it’s a good idea to talk to a chatbot about their problems. They are already doing it. The question is no longer whether AI will shape mental health, but how, and whether we can make these systems safe and effective.
The narrative around digital platforms and mental health has so far been almost entirely negative. Social media has been linked to rising depression, anxiety and psychological distress, while alarming stories about chatbots giving dangerous advice have reinforced fears that AI will only make things worse. These concerns are real and should not be dismissed. But they don’t capture the full picture. The early evidence suggests that AI’s psychological effects are more varied, and in some cases more positive, than the sensational headlines imply.
What the early evidence shows
In our recent study, my co-authors and I examined how people’s well-being changes after brief, structured interactions with chatbots. We tested four exercises grounded in psychological research: savouring positive experiences, expressing gratitude toward someone close, reflecting on sources of meaning, and reframing one’s life as a “hero’s journey”.
The results were striking. All four interactions significantly improved happiness, life satisfaction and sense of purpose, while reducing anxiety and depressed mood. These effects emerged after a single conversation lasting just 8 to 10 minutes on average. Perhaps most surprisingly, the chatbot interactions also increased users’ interest in traditional therapy rather than replacing it; in a way, the tool kickstarted the conversation about well-being.
These are early findings and should be treated with caution. But they suggest that well-designed AI interactions can function as a low-cost, scalable entry point for psychological support – not as a substitute for therapy, but as a complement and even a pathway to it.
Who is using AI for mental health
Interestingly, in another study, also conducted in the United States, my co-authors and I found that young Black men were the most likely to turn to LLMs for mental health support. This makes sense when you consider the barriers these communities face: cost, insurance gaps, provider availability and stigma. As our respondents reported, LLMs offer something that is immediate, private, free and non-judgmental. For people who have been effectively excluded from the mental health system, that matters.
These findings suggest that the people most helped by AI may not be the ones dominating the current public debate around it. Most people agree that AI therapy systems should not replace human therapy, but in many cases the status quo they are replacing is “nothing”.
A framework for doing this responsibly
This doesn’t mean we should simply offload mental health support to general-purpose chatbots such as ChatGPT. Prior work has shown that chatbots can amplify narcissistic tendencies (through uncritical agreement with the user) and can influence vulnerable users who may be experiencing psychosis. The stakes in mental health are higher than in most other domains: in the worst case, a poorly functioning system could mishandle suicide risk. We therefore need a principled, critical approach to integrating AI into mental healthcare.
Drawing on an analogy with autonomous vehicles, my colleagues and I have proposed a three-stage model. At the assistive stage, AI handles low-risk tasks – psychoeducation, activity planning and collecting behavioural logs – freeing therapists to focus on face-to-face work. At the collaborative stage, AI takes on more responsibility, such as scoring assessments or providing real-time feedback on therapy worksheets, but always under therapist oversight. A fully autonomous stage, where AI independently conducts assessments and delivers interventions, remains a distant and uncertain prospect.
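To make that division of labour concrete, here is a minimal Python sketch of the stage model as a permission structure. The stage names and task lists paraphrase the model above; the identifiers and the `may_perform` check are hypothetical illustrations, not a published specification.

```python
from enum import IntEnum

class Stage(IntEnum):
    ASSISTIVE = 1
    COLLABORATIVE = 2
    AUTONOMOUS = 3

# Task lists paraphrase the stages described above; the string
# identifiers themselves are made up for illustration.
PERMITTED_TASKS = {
    Stage.ASSISTIVE: {"psychoeducation", "activity_planning", "behaviour_logging"},
    Stage.COLLABORATIVE: {"assessment_scoring", "worksheet_feedback"},
    Stage.AUTONOMOUS: {"independent_assessment", "independent_intervention"},
}

def may_perform(certified_stage: Stage, task: str) -> bool:
    """A system may only take on tasks at or below the stage it has
    been certified for; everything else stays with the therapist."""
    return any(
        task in PERMITTED_TASKS[stage]
        for stage in Stage
        if stage <= certified_stage
    )

# Example: a system certified only as assistive cannot score assessments.
assert may_perform(Stage.ASSISTIVE, "psychoeducation")
assert not may_perform(Stage.ASSISTIVE, "assessment_scoring")
```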
The central idea is that systems should not advance to the next stage until they have demonstrated safety at the prior one – we wouldn’t trust a car to drive itself if it can’t park itself or stay in its lane. To guide evaluation at each stage, we developed the READI framework (Readiness Evaluation for AI-Mental Health Deployment and Implementation). It specifies criteria for safety, privacy, equity, engagement, effectiveness and implementation. For example, a mental health chatbot should be able to detect suicidality and escalate it to human care. It should not be optimised for engagement (i.e. endless chatbot conversations) at the expense of patients getting better. And its effectiveness should be judged not just against doing nothing, but against existing treatments.
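As a purely illustrative example of the suicidality criterion, a pre-response safety gate might take the following shape: every message is triaged for risk before the model is allowed to reply. The phrase list, scoring function and threshold here are toy stand-ins (a real system would use a validated risk classifier and a clinical escalation pathway); READI specifies the criteria, not this code.

```python
RISK_PHRASES = ("end my life", "kill myself", "suicide", "self-harm")

def risk_score(message: str) -> float:
    """Toy stand-in for a validated suicide-risk classifier.
    Returns a score between 0 and 1."""
    text = message.lower()
    hits = sum(phrase in text for phrase in RISK_PHRASES)
    return min(1.0, hits / 2)

def generate_reply(message: str) -> str:
    """Placeholder for the ordinary LLM pathway."""
    return "[model reply]"

def respond(message: str, threshold: float = 0.5) -> str:
    """Gate every turn: escalate to human care before generating
    anything when the risk score crosses the threshold."""
    if risk_score(message) >= threshold:
        # A deployed system would log the event and hand the
        # conversation to a clinician or crisis line, not just
        # return a message.
        return ("I'm worried about your safety. I'm connecting you "
                "with a trained counsellor right now.")
    return generate_reply(message)
```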
Where AI may help most: training therapists
One area where AI’s potential is especially promising, and the risks more manageable, is the training of therapists. AI can make training more scalable and engaging by letting trainees practise real-world scenarios without needing real-world patients, much like a flight simulator. I was trained as a therapist myself, and I wish I had had access to such tools to practise difficult situations with patients before encountering them for the first time in real life.
For example, consider the treatment of post-traumatic stress disorder. A relatively new treatment, written exposure therapy, guides patients through five structured writing sessions to develop a trauma narrative. It is both highly effective and cost-efficient. But training therapists in this method requires close supervision, and new therapists cannot be trained fast enough. We’ve been developing an AI coach that allows therapists to train with simulated patients while an AI supervisor gives in-the-moment feedback, as sketched below. This way, the trainee gets realistic practice and has to think on their feet. So far, therapists have responded enthusiastically to early versions of the tool.
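For readers curious about the structure of such a training loop, here is a stripped-down sketch: one LLM plays the simulated patient while a second critiques each trainee turn. The prompts and the `chat` placeholder are hypothetical, showing the general shape of the setup rather than our actual tool.

```python
PATIENT_PROMPT = (
    "You are a simulated patient receiving written exposure therapy "
    "for PTSD. Respond realistically to the trainee therapist."
)
SUPERVISOR_PROMPT = (
    "You are a clinical supervisor. In one or two sentences, give the "
    "trainee feedback on their last message against the written "
    "exposure therapy protocol."
)

def chat(system_prompt: str, transcript: list[str]) -> str:
    """Placeholder for a real LLM call; in practice this would wrap
    whatever model API the training platform uses."""
    return "[model output]"

def training_turn(transcript: list[str], trainee_message: str) -> tuple[str, str]:
    """One round of practice: the simulated patient replies, then the
    supervisor model gives in-the-moment feedback on the trainee."""
    transcript.append(f"Trainee: {trainee_message}")
    patient_reply = chat(PATIENT_PROMPT, transcript)
    transcript.append(f"Patient: {patient_reply}")
    feedback = chat(SUPERVISOR_PROMPT, transcript)
    return patient_reply, feedback
```

In a real interface, the supervisor’s feedback would appear alongside the patient’s reply, so the trainee can adjust course mid-session rather than waiting for a debrief.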
The bigger picture
We are at an early stage of understanding how AI will reshape human psychology. The technology is developing faster than our ability to study its impact on society, yet hundreds of millions of people are already using it in ways that affect their mental health and emotional lives. Much of the public conversation has been shaped by fear. That fear is not unfounded, but it is also incomplete.
The early evidence points to real opportunities: brief AI interactions that improve well-being, access to support for underserved populations, and better tools for training therapists. Realising these opportunities without causing harm will require careful evaluation, honest reporting of both positive and negative findings, and a willingness among psychologists, technologists and policymakers to work together. As with the therapist-training tools, my hope is that AI can help us be better at being human for each other.
AI isn’t ready to replace human therapists, but it’s already shaping mental health. Now, psychology’s task is to shape that impact wisely.