
AI Chatbots and Mental Health: Navigating the Pros and Cons of Digital Support

As the demand for mental health services continues to soar, the limitations of traditional healthcare systems have become increasingly apparent. In response to this pressing need, the emergence of AI-powered mental health chatbots has opened new avenues for mental health care.

You may be familiar with several notable AI bots in the mental health field. To name a few: Replika is a social AI bot; Woebot Health and Wysa provide emotional support based on the principles of cognitive behavioural therapy (CBT); Limbic Access specialises in mental health triage; and Vivibot is designed to teach positive psychology skills to reduce anxiety in young people undergoing cancer treatment.

Mental health chatbots are AI-powered software programs that offer accessible and convenient support through smartphones for individuals dealing with mental health challenges. Using natural language processing and machine learning, these chatbots engage in human-like conversations and adapt their responses based on user input to provide personalised support.

The support provided can take various forms: direct assistance to users, such as emotional support, psychoeducation, and coping strategies, as well as improvements to services through better diagnosis, triage, and tracking of mental health symptoms. While many programs primarily address common issues such as depression and anxiety, they also extend to other conditions, including autism, substance misuse, dementia, and suicidality. [2]

This article will outline the general pros and cons of the use of chatbots in the mental health field.

First, some key advantages that AI chatbots can provide include:

1) Breaking stigma around mental health issues and help-seeking:

Stigma surrounding mental illness and mental health services is a significant barrier that discourages individuals from seeking help and openly discussing their struggles. However, AI chatbots can make a real difference. They are non-judgemental, unbiased, and keep everything confidential, creating a safe space where you can freely share your thoughts and emotions without fear of judgement. In fact, one study found that veterans were more comfortable talking about their trauma to a virtual agent than to a healthcare professional because of the stigma they faced. [4] This suggests that AI chatbots have the potential to make it easier to seek help from professionals and get the support you need.

2) Increasing accessibility to mental health support:

AI chatbots increase accessibility to mental health support by providing it whenever and wherever it is needed. Unlike traditional therapy sessions, which require scheduling and appointments, AI bots are available 24/7. They can be conveniently accessed through smartphones and seamlessly integrated with various apps, allowing for easy collaboration with health professionals and even syncing with your health data. Additionally, the cost-effectiveness of AI bots is a game-changer: while traditional therapy can be prohibitively expensive and inaccessible for many, AI bots often come at a fraction of the cost, if not free, providing an affordable alternative for accessing essential mental health support.

3) Reducing burden on professionals and services:

AI mental health bots can also reduce the burden on mental health services by handling routine inquiries, scaling support, providing continuous availability, assisting in screening, triage, and diagnosis, and offering insights from data analysis. This optimises resource allocation, reduces wait times, and allows professionals to focus on complex cases, improving the efficiency of mental health services. It is important to note that the developers of these programs emphasise that they are not designed to replace the current standard of care, but rather to complement traditional mental health services and to support people facing long waits for mental health care.

It is also important to highlight common concerns and challenges of AI chatbots, which include:

1) Lacking emotional awareness and empathetic response:

AI mental health bots lack the nuanced emotional awareness and empathetic responses of human professionals. While they can provide valuable support, their pre-programmed nature limits their ability to fully understand and empathise with complex human emotions and experiences. Although there is generally a positive outlook on the use of AI in mental health, some professionals, particularly physicians, hold the belief that AI could never match the empathetic care provided by a human healthcare professional. [5]

2) Addressing ethical and legal concerns:

AI chatbots bring about significant legal and ethical concerns that demand careful attention. [6] Who would be responsible if a bot made a mistake in diagnosing a user or misinterpreted their distress? Additionally, there are concerns about how AI chatbots can effectively manage risk, especially in rural areas where access to necessary services may be limited. For example, what should be done if a therapy bot detects signs of elevated self-harm risk, but appropriate support is not available nearby? Another important consideration is ensuring fairness and avoiding bias in the training of machine learning models, as they can sometimes favour specific genders or ethnicities. These ethical concerns highlight the need for transparency, accountability, and thoughtful regulation in the development and use of AI chatbots.

3) Developing risk of dependency:

AI chatbots can be incredibly helpful in providing mental health support, but it is important to be mindful of the potential risk of dependency they can pose. While they offer valuable guidance, it is essential not to rely on them solely for validation and support. Building genuine human connections and developing interpersonal skills play a vital role in our overall well-being, and AI chatbots cannot fully replace the importance of those connections. Additionally, becoming overly dependent on chatbots can have negative effects on our well-being: one study found that some users formed emotional attachments to social bots and even felt a sense of responsibility towards them, treating them as if they had needs of their own. [7] It is therefore important to strike a balance between using AI chatbots as a valuable tool and fostering meaningful connections in real life.

In a nutshell, AI chatbots have made their way into the mental health support scene, offering a glimmer of hope. They bring convenience, privacy, and quick responses to those seeking help. However, it is crucial to be mindful of their limitations: these chatbots may lack the human touch that we need, and there is a risk of fostering dependency, as well as the potential for serious bias and mistakes with ethical and legal consequences. To make the most of their potential while ensuring a friendly and comprehensive care experience, it is essential to integrate AI chatbots with traditional mental health services and uphold strong ethical standards.


References

1. Vaidyam AN, Wisniewski H, Halamka JD, Kashavan MS, Torous JB. Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape. Can J Psychiatry. 2019 Jul;64(7):456–64.
2. Abd-Alrazaq AA, Alajlani M, Alalwan AA, Bewick BM, Gardner P, Househ M. An overview of the features of chatbots in mental health: A scoping review. Int J Med Inform. 2019 Dec.
3. Boucher EM, Harake NR, Ward HE, Stoeckl SE, Vargas J, Minkel J, et al. Artificially intelligent chatbots in digital mental health interventions: a review. Expert Review of Medical Devices. 2021 Dec;18(sup1):37–49.
4. Lucas GM, Rizzo A, Gratch J, Scherer S, Stratou G, Boberg J, et al. Reporting Mental Health Symptoms: Breaking Down Barriers to Care with Virtual Human Interviewers. Frontiers in Robotics and AI. 2017;4.
5. Doraiswamy PM, Blease C, Bodner K. Artificial intelligence and the future of psychiatry: Insights from a global physician survey. Artif Intell Med. 2020 Jan;102:101753.
6. Henz P. Ethical and legal responsibility for Artificial Intelligence. Discov Artif Intell. 2021 Sep 22;1(1):2.
7. Laestadius L, Bishop A, Gonzalez M, Illenčík D, Campos-Castillo C. Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media & Society. 2022 Dec 22.


