The risks of therapy

Francisco Pérez Bes, Deputy of the Spanish Data Protection Agency and expert in Digital Law: "Reality leads us to ask ourselves if we are not delegating too quickly to AI a profoundly human function"

The promise of an always available (and apparently empathetic) artificial intelligence has found in mental health one of its most promising, but also most problematic, fields of expansion.

Based on a recent study by Brown University, the European Union has warned of citizens' growing tendency to use commercial AI systems as substitutes for —or at least complements to— traditional psychological therapy.

Although technology's potential in a social sphere so badly in need of reinforcement is evident, this reality leads us to ask whether we are delegating a profoundly human function to AI too quickly.

The cited study merely puts figures and methodology to a known fact: these systems are not designed, either technically or ethically, to assume the role of therapists. Yet millions of people use them as if they were, hoping to find quick, cheap emotional support there.

Beyond the clear risk of using inadequate tools in an area as sensitive as mental health, the public must be aware that these artificial intelligence systems do not "understand" human suffering, nor do they apply therapeutic techniques in any strict sense. They simply generate plausible responses from statistical patterns.

Perhaps the success of this kind of practice reflects a human tendency: the more sophisticated the answer from these novel tools seems, the greater the illusion of professional competence, and therefore the credibility of electronic "advice" that in no case rests on a scientific basis or professional supervision.

And although the difference may seem semantic, in contexts of psychological vulnerability the impact is substantive. The potential harm is clear: a slightly biased response, an inappropriate validation, or an omission in a crisis situation can have real negative consequences for people.

Even more concerning is the void of responsibility. While mental health professionals operate under deontological codes, collegiate supervision, and potential legal liability, AI systems move in a regulatory gray area. Indeed, the lack of clarity about who bears legal responsibility for the results of using these platforms is not only a legal problem, but also an ethical one.

Notwithstanding the foregoing, outright rejecting the use of AI in this field would be a strategic error. The global mental health crisis in which we find ourselves demands scalable solutions. And in this environment, technology can play a relevant role in the future of psychology, in the sense of offering us preliminary support, early risk detection, or complementary accompaniment.

But at this moment of technological evolution, there is also no doubt that this integration must take place under strict technical and transparency conditions, and with supervision.

At heart, the researchers' warning is a call for prudence: we are not dealing with digital therapists, but with linguistic simulators of therapy. Confusing the two is not only a conceptual matter: it is a risk to individual and collective health, one that must be included in the health strategies of any state.

And, while there are numerous questions about this topic, one of the main ones is not whether artificial intelligence will be part of the mental health ecosystem —that seems inevitable—, but under what rules, with what guarantees, and with what degree of human supervision.

One of the report's conclusions is particularly representative: "there is a real opportunity for AI to play a role in fighting the mental health crisis that our society faces, but it is of utmost importance that we take the time to criticize and evaluate our systems every step of the way to avoid causing more harm than good."

In other words, we must make citizens aware that general-purpose technologies of this kind are unsuited to improving their mental well-being, while at the same time ensuring that AI is incorporated appropriately into these professional disciplines and services as a genuine improvement, one that helps address new human and social challenges in this complex field.

About the signatory:

Francisco Pérez Bes is Deputy of the Spanish Data Protection Agency. He was previously a partner in the Digital Law area of Ecix Group and is the former Secretary General of the National Cybersecurity Institute (INCIBE).