The anthropomorphic services and the new ethical risks of AI

Francisco Pérez Bes, deputy of the Spanish Data Protection Agency and expert in Digital Law, analyzes in Demócrata China's draft provisional measures for the administration of humanized interactive services based on AI

In late 2025, the Cyberspace Administration of China submitted for public consultation a document with a revealing title: provisional measures for the administration of humanized interactive services based on artificial intelligence. The text went practically unnoticed in Europe. However, I believe it anticipates some of the most uncomfortable and urgent ethical debates that we will have to face in a very near future.

The stated objective of these measures is to promote the “adequate” development of so-called anthropomorphic interactive services: artificial intelligence systems with a human appearance and advanced emotional-interaction capabilities. But what is truly relevant is not the social context in which the text is framed, but its approach. Unlike the purely technological, and reactive, view that usually dominates Western debate, the Chinese regulator approaches AI through a logic of systemic, social, and psychological impact.

And that is precisely the key.

The guiding principles of the document make it possible to identify quite clearly the dilemmas that will emerge when these services begin to be deployed, or at least offered, on the market.

The emotional detachment of AI

One of the central axes of the proposal is the prohibition on designing systems that can confuse or deceive the user through excessive anthropomorphism. In simple terms: the aim is to guarantee that a reasonably informed person can know, at all times, whether they are interacting with a machine or with another human being.

This approach acquires special relevance in light of the rapid evolution of robotics, synthetic materials, and android mobility. It is reasonable to think that, in the near future, distinguishing between people and human-looking machines will become increasingly difficult. And not all groups will be equally equipped to do so.

The most disruptive measure in the text is, without a doubt, the obligation to “break the user's immersion” after a maximum interaction time, set at two hours. This forced reminder acts as a kind of reality anchor, designed to prevent citizens from replacing their human social relationships with emotional loyalty to algorithms with a face and a voice.
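
To make the mechanism concrete, here is a minimal sketch of how a provider might enforce such a reality anchor. The two-hour limit comes from the draft; the session model, names, and reminder wording are hypothetical.

```python
from datetime import datetime, timedelta

# Maximum continuous interaction before immersion must be broken (per the draft).
MAX_SESSION = timedelta(hours=2)

class Session:
    def __init__(self) -> None:
        self.started_at = datetime.now()

    def should_break_immersion(self) -> bool:
        # True once the user has been interacting continuously for two hours.
        return datetime.now() - self.started_at >= MAX_SESSION

def next_reply(session: Session, generated_text: str) -> str:
    # Prepend a forced "you are talking to a machine" reminder when the limit is hit.
    if session.should_break_immersion():
        session.started_at = datetime.now()  # reset the clock after the reminder
        return ("Reminder: you are interacting with an AI system, not a person.\n\n"
                + generated_text)
    return generated_text
```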

From an ethical perspective, the risk being mitigated is not a minor one: the progressive erosion of social cohesion and the weakening of interpersonal bonds.

The proactive responsibility of the provider

Unlike Western regulatory approaches, which are generally reactive, the proposal imposes on manufacturers and suppliers an active responsibility for user well-being. It is not just a matter of complying with rules, but of anticipating harm.

Thus, if a system detects patterns of addiction, social isolation, or emotional dependence, the company would be legally obliged to introduce limitations or restrictions in the service. The AI provider thereby takes on, de facto, a role of ethical oversight and public health protection, with all the implications and controversies that this entails.
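
As an illustration only: the draft names the duty, not the mechanism, so the telemetry signals, thresholds, and restriction names in this sketch are entirely invented.

```python
from dataclasses import dataclass

@dataclass
class UsageSignals:
    avg_daily_hours: float    # average daily interaction time
    consecutive_days: int     # days in a row with at least one session
    night_sessions_week: int  # sessions started between 00:00 and 05:00 this week

def required_restrictions(signals: UsageSignals) -> list[str]:
    """Map detected dependence patterns to the limitations the provider must activate."""
    restrictions = []
    if signals.avg_daily_hours > 4:
        restrictions.append("daily_time_cap")
    if signals.consecutive_days > 30:
        restrictions.append("usage_cooldown")
    if signals.night_sessions_week > 3:
        restrictions.append("night_curfew")
    return restrictions
```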

The threshold of the massive scale

Another especially significant element is the establishment of a critical user threshold. Any service that exceeds one million users, or one hundred thousand monthly active users, must undergo a series of evaluations aimed at verifying that there is no risk of manipulation or conditioning of the population, something that, in certain circumstances, could even be considered a threat to national security.
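
Expressed as a sketch, the trigger is a simple check on two counters; the figures come from the draft, while the function and variable names are hypothetical.

```python
# Thresholds stated in the draft measures.
TOTAL_USERS_THRESHOLD = 1_000_000
MONTHLY_ACTIVE_THRESHOLD = 100_000

def must_undergo_assessment(total_users: int, monthly_active_users: int) -> bool:
    # Exceeding either threshold triggers the mandatory evaluations.
    return (total_users > TOTAL_USERS_THRESHOLD
            or monthly_active_users > MONTHLY_ACTIVE_THRESHOLD)
```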

The concern is not so much technological as cultural: to prevent AI systems with emotional persuasion capability from massively influencing public opinion or social values without prior supervision.

Reinforced protection of minors

In the case of services aimed at minors, the text introduces a categorical prohibition: the use of AI that imitates authority figures or family members is not allowed. The objective is to prevent forms of “cognitive kidnapping” capable of conditioning values, behaviors, or biases from an early age through the emotional design of the system.

An uncomfortable look from Europe

From a state-stability perspective (or national security, depending on who you ask), the Chinese strategy can be interpreted as a preventive success. By controlling the “personality” of AI, the State reduces the risk that virtual assistants will emerge capable of exerting ideological influence through empathy and subtle persuasion.

From the point of view of innovation, however, these restrictions may create competitive disadvantages in areas such as emotional AI or entertainment, where user retention and the realism of interaction are key factors. It is the usual time-to-market argument: whoever innovates without limits reaches the market sooner, even at the cost of assuming compliance risks.

But perhaps the most uncomfortable thing for the West is the fundamental ethical debate it puts on the table. While in Europe and the United States we continue to focus the discussion on data privacy or algorithmic transparency, China poses a deeper question: do we have the right to fall in love with, depend emotionally on, or delegate part of our social life to an artificial intelligence?

The more machines resemble us, the harder it will be to keep dodging that question. And the more urgent it will be to answer it.