World events such as the COVID-19 pandemic, along with extremely rapid developments in artificial intelligence, have greatly increased interest in virtual therapy and ‘therapy bots’. But the ethics of these remain obscure, ill-defined and under-examined. This post takes a three-minute look at the ethics of AI-powered psychotherapy.

Clients are largely unaware of the ethical issues of virtual therapy

During the COVID-19 pandemic, some of my clients searched online for virtual therapists and experimented with them. Anecdotally, they found some positives. For example, such therapists could offer supportive words that helped them manage their mood or feelings, or provide guidance and material for mindfulness or a positive mental attitude. But the overwhelming view was “these are not really therapy”. Fair enough!

Even at this early stage, it got me asking some of the more obvious ethical questions that will inevitably come up. Such as: how did you feel about entering your personal story into a computer? And about how it is stored and used? How do you feel about an AI driving the therapy through processes that even the developer can’t see or understand? As a generalisation, my clients had not thought about such things. The need for support and to alleviate distress tends to override any motivation to check out the technology first.

Ethical issues of AI-powered psychotherapy: a starter list

That got me thinking: what are the issues here? If you’re thinking of getting involved in virtual therapy as a developer, therapist or client, there are many! A longer than usual train journey gave me time to scribble a few down. Here’s a list. I hope to expand on some of these in future, maybe in conversation with you all!

Confidentiality

Has the app provider clearly set out the limits of confidentiality? Or when it will disclose the client’s speech or apparent situation to a third party? How good is the AI at identifying and flagging risk? In what circumstances would a human intervene and manage any risk to the client or others? Is there even a human involved?

Privacy

The questions here are similar to those for a human therapist. How is the client’s data being used? Does data processing conform to local laws? Is the privacy notice, if there is one, readable and understandable to the client? Do data security measures adequately protect sensitive information from unauthorised access or breaches?

Informed Consent

Clients should be fully informed about the use of AI in their therapy and understand its implications. Informed consent should address how AI algorithms will be used, the limitations of AI compared to human therapists, and the potential risks and benefits. Is this achievable when developers cannot always know why the outputs that emerge from the ‘black box’ do so?

Quality of Care

While AI can provide valuable insights and support, it cannot replace or replicate the empathy, intuition, and interpersonal skills of human therapists. But quality issues are wide and varied, and way beyond the scope of this post.

Bias

AI algorithms can inadvertently perpetuate biases present in the data that are used to train them. Developers must actively mitigate bias in AI systems to ensure fair and equitable treatment for all patients, regardless of factors like race, gender, or socioeconomic status. Do we know yet how we are building bias into virtual therapy apps or how to mitigate it?

Transparency and Accountability

AI algorithms can be opaque, making it challenging or impossible to understand how they arrive at their outputs or conclusions. Developers should strive for transparency in AI systems to enable clinicians and patients to understand the basis of AI recommendations and interventions.

Dependency and Autonomy

Patients could become overly reliant on AI systems, diminishing their autonomy and agency in decision-making. It will be important to maintain a balance between using AI as a tool to support or deliver therapy and empowering patients to take an active role in their treatment and their lives.

Transition between human and virtual therapists

Has the therapy provider addressed concerns about continuity of care if patients accustomed to interacting with AI systems need or want to transition to human therapists, or vice versa? Strategies to ensure seamless transitions and dovetailed treatment are essential.

Regulation and Oversight

How robust are regulation and oversight in ensuring patient safety and ethical practice? Have regulatory and professional bodies published guidelines and standards for the development, deployment, and monitoring of AI systems in psychotherapy? Do they have any jurisdiction if a client is unhappy with the therapy service?

The ethics of AI-powered psychotherapy are challenging

Many of these issues are already complex when humans are delivering therapy. The development of virtual therapy simply adds to them because there cannot be end-to-end human oversight of the automated processes or the content generated.

Addressing these ethical implications requires a high level of collaboration between clinicians, researchers, ethicists, policymakers, and AI developers. Meta AI has published a document called “Responsible Use Guide: Resources and best practices for responsible development of products built with large language models”. (You’re probably aware that Meta is the company that owns Facebook, Instagram and WhatsApp.) I have no idea how the developer and health communities view such documents. But flick through it and you will see the level of tech-based complexity we are dealing with here. And it makes no mention yet of specific mental health or therapy use cases.

Overlaying the tech issues with multi-layered ethical concerns is a little mind-boggling. It makes me wonder why anyone would ever venture into making a virtual therapy app! But virtual therapy is here to stay, and we will need to develop clinical and technological frameworks and guidelines that promote the responsible, risk-based use of AI in psychotherapy, with client well-being front and centre. It seems we are just setting out on what may prove to be a long and tortuous journey!

Header image by Delta Works.