In the field of artificial intelligence, the recent initiative promoted in the United States to guide the future legislative activity of Congress (titled "AI National Policy Framework") places the protection of minors squarely at the center of the debate.
Tellingly, of the seven blocks that make up the framework, the one on the protection of minors is the first and the most extensive.
This should not be seen as a minor or merely symbolic decision. On the contrary, it reflects sound political intuition: today, the first test of legitimacy for any technological regulation is its capacity to protect the most vulnerable groups in an increasingly hard-to-govern digital environment. Indeed, the digital education of future citizens should, in my opinion, become a matter of national security or, at the very least, a strategic priority from a public health perspective.
The block on child protection sets out a series of recommendations and mandates for the legislator, with an approach that combines three elements: platform responsibility, parental empowerment, and the development of age verification mechanisms. On paper, the formula seems balanced.
AI services are required to incorporate measures to reduce evident risks, such as deepfakes, sexual exploitation, or the inducement of self-harm, while the role of parents as the primary managers of their children's digital environment is reinforced.
Proactive adult responsibility is, in fact, a specific point within the document. The framework calls on the United States Congress, in its future legislative activity, to oblige platforms to provide parents and guardians with robust tools for managing and configuring privacy settings, usage time, exposure to inappropriate content, and other controls applicable to user accounts.
In addition, "reasonable" age verification requirements are introduced from a commercial point of view and must be respectful of privacy, which evidences a concern for not generating intrusive or disproportionate solutions.
However, the apparent clarity of these principles faces a much more complex technical and legal reality.
The first major challenge lies in the very definition of "child protection" in environments based on artificial intelligence. Unlike traditional platforms, AI systems not only distribute content but also generate it dynamically, which enormously complicates ex ante control and shifts the problem toward far more sophisticated risk mitigation models.
In this context, requiring platforms to reduce risks without precisely defining the applicable technical standards creates an inevitable tension between regulatory compliance and operational viability.
A second critical element is age verification (age assurance). While Europe continues to advance in promoting these technologies (led, in large part, by the Spanish Data Protection Agency), the US proposal opts for other formulas, such as parental declaration, avoiding more intrusive systems.
However, this approach raises obvious questions about its real effectiveness. Experience shows that the most robust verification mechanisms, those based on biometrics or document checks, open a complex debate on privacy and proportionality. This dilemma, beyond being technical, is profoundly political: what level of intrusion are we willing to accept in order to protect minors in digital environments?
Likewise, the mandate to Congress that future regulation avoid ambiguous content standards or excessively open liability regimes is especially relevant. This caution largely responds to the concern about a possible judicialization of the digital ecosystem, which could lead to undesired effects: chiefly, a rise in multimillion-dollar lawsuits that provoke self-censorship by platforms and, ultimately, limit innovation.
Nevertheless, this same prudence can translate into a certain normative indeterminacy that hinders the effective application of the proposed measures.
This initiative also offers an interesting point of contrast. While the European Union has opted for a more structured and preventive approach (such as the one embodied in its regulatory framework for artificial intelligence), the American model seems to lean toward a combination of general principles and responsibility distributed among the various actors. The key question will be which of these models proves more effective in practice at protecting minors without compromising other essential values, such as innovation or freedom of expression.
Ultimately, the protection of minors against the risks of artificial intelligence cannot be addressed solely from a regulatory logic. It also requires an educational, technological, and social approach that recognizes the complexity of the phenomenon. Rules are necessary, but not sufficient. The real challenge lies in building a digital ecosystem where the safety of minors does not depend exclusively on technical or legal barriers, but on a true digital culture based on shared responsibility among platforms, institutions, and society.
Because, at bottom, the question is not how to protect minors from artificial intelligence, but what kind of digital environment we are willing to build for the generations that will grow up within it.
About the author:
Francisco Pérez Bes is deputy of the Spanish Data Protection Agency. He was previously a partner in the Digital Law area of Ecix Group and is a former Secretary General of the National Cybersecurity Institute (INCIBE).