The regulatory moment in artificial intelligence has changed. We are no longer in the phase of grand principles, where abstract values and generic objectives were debated, but in the difficult phase of ensuring that regulation is coherent, technically viable, and compatible with European competitiveness. Achieving this requires advancing in parallel on three deeply connected fronts: understanding, implementation, and evaluation.
Understanding in order to comply
As the saying goes, 'a word to the wise is enough', and in the case of AI, literacy is an essential condition for any regulatory framework to function. Beyond the ambiguity of Article 4 of the European AI Regulation, literacy must be an operational requirement, proportional and demonstrable, especially in organizations that design, deploy, or supervise systems with real impact. Otherwise, we run the risk of creating rhetorical obligations, highly visible in public discourse but weak in practice.
For many SMEs and medium-sized companies, this specificity offers more legal certainty than an open-ended obligation. Given that Article 4 focuses on training the people responsible for an organization's artificial intelligence systems, we consider it necessary to establish a clear and verifiable training baseline, which at Adigital we propose structuring into three itineraries: management, technical, and operational.
Added to this is a set of organizational culture areas (regulation, human oversight, risk management, data governance, transparency, and cybersecurity) that allows organizations to demonstrate institutional maturity and preparedness. Only then will literacy cease to be an abstract concept.
"Literacy must be an operational, proportional, and demonstrable requirement, especially in organizations that design, deploy, or oversee systems with real impact"
Clear standards for real execution
Second, it is not enough to understand the regulation; a code of good practices and clear standards are also essential. This is where the standardization work of CEN-CENELEC (the European organizations responsible for developing harmonized technical standards) comes into play, one of the major bottlenecks of recent years.
It is pointless to demand complex obligations for high-risk systems if common standards, technical specifications, and implementation guides are not sufficiently developed.
The AI Omnibus promises to consolidate this framework, but progress on the various standards remains slow. The codes of good practice published by the European AI Office are necessary and very useful, as are the guides published by AESIA (the Spanish supervisory agency); but they are not sufficient.
Only an approach based on implementable standards, integrable into GRC (governance, risk, and compliance) tools and accessible to both SMEs and large organizations, will enable effective compliance on critical matters. This is key in sectors such as health, transport, and finance, where the risks are not theoretical but real.
The sandbox as a laboratory
Third, evaluation, where regulatory sandboxes play a prominent role: controlled environments that enable rigorous public-private collaboration while giving organizations regulatory peace of mind. The pioneering experience of the Spanish sandbox (2025-2026) must become a structural governance instrument, one that transcends AI and extends to future strategic technologies.
It is important to distinguish between two sandbox models. The first, purely regulatory, serves to interpret rules and clarify their scope. The second, regulatory-technological, goes much further: it allows testing documentation, methodologies, evaluation, traceability, and supervision in controlled or real environments. This latter approach may be more complex, but it is what Europe needs to turn regulation into a competitive advantage capable of guiding safe and responsible innovation.
This roadmap only works if each actor assumes their role. Institutions must translate principles into concrete policies: funded training, promotion of standards, and coordinated governance at the European scale. Companies cannot wait for the framework to be perfect, as their future competitiveness and productivity will depend on adapting their business models to the regulation's requirements. Finally, citizens, the ultimate recipients of these systems, also have an active role: their informed understanding and demands are the basis of these technologies' social legitimacy.
"Institutions must translate principles into concrete policies: funded training, promotion of standards, and coordinated governance on a European scale"
From theory to practice
Europe was a pioneer in legislating on artificial intelligence. Now is the time to demonstrate that it works in practice. Implementing smart regulation is not only an obligation; it is also an instrument to strengthen European competitiveness, foster safe innovation, and ensure that AI systems serve societies responsibly. AI regulation is no longer decided on principles; it is decided in implementation.