Artificial Intelligence in Digital Mental Health – Navigating the New Frontier

Over the last 20 years, digital mental health has been an ever-changing landscape, but never before has the field changed as quickly as it has with the proliferation of Artificial Intelligence (AI). The rapid development of these technologies has seemingly outpaced any solid framework to guide regulators, health practitioners, consumers and their support people in evaluating whether these technologies are evidence-based and safe to use.

AI is already integrated into wearable devices and smartphones, which use biometric and GPS data to detect behaviours and signs associated with mental illness (Ehiabhi & Wang, 2023). AI-based algorithms have been trialled to develop more personalised and adaptive digital treatment programs, and a range of speech and language analysis approaches can detect distress levels and mental health issues (Graham et al., 2019). Chatbots can lend an immediately accessible, empathic ear and even offer tips and coping strategies, reducing the demand on human therapists. When applied responsibly and safely, AI has the potential to facilitate earlier detection of mental health issues, monitor mental state more accurately, improve the accuracy of diagnoses, enhance therapeutic outcomes, increase accessibility and cost-effectiveness, and deliver digital mental health services in a more timely way.

However, there are also areas of risk associated with AI, and ethical and legal considerations for using this technology in digital mental health care. Concerns include potential biases, a lack of transparency around algorithms, data privacy issues in AI training models, and safety and liability issues with AI use in clinical settings (Reddy, Allan, Coghlan & Cooper, 2020). Without clear, current guidelines on using AI in digital mental health, it is vital that health practitioners thoroughly investigate, evaluate and oversee the use of any digital mental health intervention utilising AI. This means understanding how the application works, investigating the research, and checking that the application is safe and trustworthy. Some of the issues to consider when deciding whether an AI-based digital mental health application is safe and ethical to use are discussed below:

  • Does the digital mental health intervention actually help people? Is the consumers’ wellbeing the primary goal? Is there evidence that the digital mental health intervention might improve mental health outcomes for the consumer? Is this evidence vague or lacking?

  • Does the digital mental health intervention cause harm or have the potential to cause harm? Digital mental health interventions utilising AI are only as good as the data used to train them and the accuracy of the training process, which may rely on fabricated or biased data. Does the application only use peer-reviewed sources for its data? AI applications are unable to understand context, think critically or make judgements, which can lead to errors. AI therefore has the potential to spread misinformation, provide unsafe information, or even offer no response to a risky situation that requires clinician follow-up (Coeckelbergh, 2020). AI has also been shown to pose a risk of social and cultural bias, given that inputted data is not always representative of the audience using the application. In this way, biases can be perpetuated and their impact even amplified through the use of AI, further alienating marginalised groups (Centre for Democracy and Technology, 2018).

  • Does the AI application promote equitable access to mental health care? Is there a large fee involved, making it difficult to access for those who need it?

  • Another vital consideration is informed consent. Is the consumer made aware upfront that they are using AI? Consumers need to know if they are receiving psychotherapy from an AI-driven chatbot or generative AI. Service users are usually vulnerable, and it is important that this type of interaction is recognised as non-human. Not doing so may lead consumers to inadvertently humanise the chatbot due to its human-like language capabilities, potentially leading to manipulation, harm or dependence (Fiske et al., 2019; Seiferth et al., 2023). Often, the application of AI is invisible. Additionally, if the application changes the way that it collects, uses or shares consumer data, are consumers notified so they can provide informed consent again?

  • The most publicised issues with AI applications stem from large-scale data privacy and security breaches. Given that consumers enter personal and sensitive information into these applications, it is vital to determine whether this data is secure. Are users fully informed about what data is collected and how their data will be used and/or shared with third parties? Is the data being used for another purpose? How is the data kept secure?

  • Do you trust the developer of the application to do what they say they will? Is there accountability and transparency around the actions taken by the organisation?

National Safety and Quality Digital Mental Health Standards

For more comprehensive information about using a digital mental health service safely with your consumers, take a look at the National Safety and Quality Digital Mental Health Standards. These standards were developed to improve the quality of digital mental health services and to protect end users from harm. Their website provides resources and tip sheets to help clinicians and consumers evaluate a digital mental health service.

Digital Mental Health Service Directories

Are you looking for a particular service? Evaluating whether a program is evidence-based for consumers to use can be time-consuming. Fortunately, there are some sources that maintain regularly updated lists of digital mental health services from trusted Australian service providers, and these are a great place to start:

For more resources and training in digital mental health visit emhprac.org.au.

Listen to the latest episode of our Digital Mental Health Musings podcast to hear more about navigating the legal and ethical dilemmas around AI and Digital Mental Health from renowned socio-legal researcher Dr Piers Gooding.


References

Centre for Democracy and Technology. (2018). Digital decisions. https://cdt.org/issue/privacy-data/digital-decisions/

Coeckelbergh, M. (2020). AI ethics. MIT Press. https://ebookcentral.proquest.com/lib/qut/detail.action?docID=6142275

Ehiabhi, J., & Wang, H. (2023). A systematic review of machine learning models in mental health analysis based on multi-channel multi-modal biometric signals. BioMedInformatics, 3(1), 193–219. https://doi.org/10.3390/biomedinformatics3010014

Fiske, A., Henningsen, P., & Buyx, A. (2019). Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research, 21, e13216.

Graham, S., Depp, C., Lee, E. E., Nebeker, C., Tu, X., Kim, H. C., & Jeste, D. V. (2019). Artificial intelligence for mental health and mental illnesses: An overview. Current Psychiatry Reports, 21(11), 116. https://doi.org/10.1007/s11920-019-1094-0

Reddy, S., Allan, S., Coghlan, S., & Cooper, P. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3), 491–497. https://doi.org/10.1093/jamia/ocz192

Seiferth, C., Vogel, L., Aas, B., Brandhorst, I., Carlbring, P., Conzelmann, A., … & Löchner, J. (2023). How to e-mental health: A guideline for researchers and practitioners using digital technology in the context of mental health. Nature Mental Health, 1(8), 542–554. https://doi.org/10.1038/s44220-023-00085-1