The Europeans agree to prohibit sexual "deepfakes" generated with AI

The reform of the European regulation seeks to curb non-consensual intimate content and child sexual abuse material generated with artificial intelligence


The Twenty-Seven remain immersed in processing the simplification packages known in EU jargon as “Omnibus”. This Friday, the Council set its position on the reform of the rules on Artificial Intelligence. During this process, the Member States took the opportunity to introduce into the legislative dossiers a measure not previously contemplated: curbing the abusive output of generative tools such as Grok.

Specifically, the meeting of ambassadors agreed to add a new provision to the Artificial Intelligence Act that would prohibit the generation of non-consensual sexual and intimate content, as well as child sexual abuse material. In addition, the text the Council will take into the trilogue negotiations modifies the deadlines for the delayed application of the high-risk rules. Member States want these dates to be December 2, 2027 for standalone high-risk AI systems and August 2, 2028 for high-risk AI systems embedded in products.

The Cypriot presidency defends the urgency of the reform

The Cypriot presidency of the Council describes the matter as a “highest priority” and welcomes the fact that the “Member States share the sense of urgency”. “The streamlining of AI rules is essential to guarantee the digital sovereignty of the EU,” stated the Deputy Minister for European Affairs of Cyprus, Marilena Raouna. Her delegation notes that the proposal will provide greater legal certainty, make the rules more proportionate, and guarantee more harmonized application across all Member States.

In addition, the Council established an obligation for providers to register their Artificial Intelligence systems in the European Union database for high-risk systems in cases where those systems are considered exempt from high-risk classification. It also proposed postponing the deadline for the competent authorities to establish regulatory sandboxes until December 2 of next year.

The Member States also want the European Commission to provide guidance to help economic operators of these systems comply with the high-risk requirements of the AI law in a way that minimizes the compliance burden.

The Digital Omnibus to simplify regulation

In November, the Community Executive presented the “Digital Omnibus”, which comprises two proposals: one aimed at simplifying the digital legislative framework and the other at reducing burdens in the application of AI regulation.

Now negotiations will begin with the working groups of the European Parliament. Cypriot sources state that they are “ready to work with our co-legislators in our joint efforts to support our businesses, facilitate innovation, and build a more competitive Europe”.

After several debates, the European Parliament is in favor of banning the use of these tools to create sexual content. The socialist MEP Laura Ballarín welcomed, in remarks to Demócrata, the fact that the European Union "is beginning to respond legally to phenomena such as the dissemination of manipulated images that disproportionately affect women."

In this discussion, the People's Party and the Socialists appear to agree, as MEP Rosa Estarás acknowledges: “The creation and dissemination of intimate images without consent, including AI-generated sexual deepfakes, already constitute a crime. The question is not whether the law exists, but whether it is applied with determination.”

Brussels investigates X for its Grok system

In January, Brussels took a new step in its digital offensive against X. This time, the Community Executive launched a new investigation into the tech giant owned by Elon Musk over its artificial intelligence service Grok, focused on the management of its recommendation systems and content generation.

The procedure will analyze whether the company has adequately examined and resolved the risks related to the dissemination of illegal content in the European Union. Among the elements that will be studied is the spread of manipulated sexually explicit images, as well as any other content that may constitute child sexual abuse material.

The department of executive vice-president Henna Virkkunen emphasized that non-consensual sexual falsifications of women and minors constitute a violent and unacceptable form of degradation.

“With this investigation we will determine whether X complied with its legal obligations under the Digital Services Act (DSA) or whether it treated the rights of European citizens, including those of women and children, as collateral damage of its service,” said the head of the digital portfolio in the Community Executive.

Changes in digital literacy, biases, and content labeling

With the new regulation in place, the loosely defined AI-literacy obligation that fell on operators will become an obligation for the Commission and the Member States.

On bias detection, which falls under the AI Act in force since the summer of 2024, compliance with data protection law will be eased: providers and users of all AI systems and models will be allowed to process special categories of personal data to detect and correct biases, provided adequate safeguards are applied. This exception applies only when bias detection cannot be achieved effectively by processing other data.


Furthermore, a transitional one-year grace period will be introduced for compliance with the “watermarking” obligation (marking or detection of AI-generated content) for generative AI systems placed on the market before these obligations became applicable.
