

In February 2026, the Spanish Data Protection Agency (AEPD) published guidelines on agentic artificial intelligence, a technology that is gradually being incorporated into companies and public administrations. The guidelines matter because agentic AI is not just a new tool, but a different way of carrying out personal data processing.
The guidelines do not aim to resolve specific cases, but to provide a framework for understanding what changes when processing relies on AI agents.
The guidelines describe AI agents as systems that use language models to achieve objectives, adapting to their environment and acting according to circumstances.
Unlike simpler systems, these do not merely respond to requests. They can organize tasks, break them down into stages, access different sources of information, and execute actions within digital systems.
In other words, these are not passive tools, but systems capable of actively intervening in organizational processes.
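The behavior described above can be sketched as a simple agent loop: a goal is broken into steps, and each step triggers an action against another system. This is a minimal illustration, not the AEPD's or any vendor's implementation; all function and tool names are hypothetical, and a stand-in planner replaces the language model.

```python
# Minimal sketch of an "agentic" loop (illustrative only; names are hypothetical).
# Unlike a passive chatbot, the agent decomposes a goal into steps and then
# executes actions (tool calls) that touch other systems.

def plan(goal):
    # A real agent would ask a language model to break the goal into steps;
    # here we fake the decomposition to keep the example self-contained.
    return [f"step {i} of '{goal}'" for i in range(1, 4)]

def execute(step, tools):
    # Each step may read data sources or change state through a tool.
    return tools["record"](step)

def run_agent(goal, tools):
    return [execute(step, tools) for step in plan(goal)]

# An audit trail of every action is exactly the kind of traceability
# measure data-protection guidance tends to expect.
audit_trail = []
tools = {"record": lambda s: (audit_trail.append(s), s)[1]}

run_agent("summarize customer records", tools)
print(len(audit_trail))  # 3
```

The point of the sketch is structural: because the loop can call tools that read or modify personal data, each tool binding is itself a data-processing decision.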
One of the key points of the document is that the use of agentic AI can change how data processing is structured.
When these systems are integrated, they may alter:
For this reason, their implementation requires a review of regulatory compliance, even in existing processing activities.
Additionally, agents may access both internal information and external sources, which may involve the use of personal data that was not initially foreseen.
The guidelines emphasize that risks do not arise solely from language models, but from the interaction between multiple components. It is precisely this complexity that creates new vulnerabilities.
Among the most relevant:
The AEPD distinguishes between two key levels:
Both may contain personal data and require differentiated handling.
One of the most interesting aspects of the guidelines is their approach to risk. The AEPD highlights that agentic AI changes the nature of processing and therefore requires a specific analysis.
As a guiding tool, it refers to the so-called "rule of 2", a formulation popularized in agentic AI security, which states that three elements should not occur simultaneously in the same agent:
- processing untrustworthy inputs (for example, content from the open web);
- access to sensitive systems or private data;
- the ability to change state or communicate externally.
Although this is a simplification, it serves as a warning to identify particularly risky configurations.
However, the AEPD itself notes that the analysis must go further, incorporating factors such as data quality, the presence of bias, and compliance with the principle of data minimization.
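As a thought experiment, the "rule of 2" can be expressed as a simple configuration check that flags agents combining all three commonly cited risk factors (untrusted input, access to personal data, outward-acting capability). This is an illustrative sketch, not a compliance tool; the field names are hypothetical.

```python
# Illustrative "rule of 2" check: flag agent configurations that combine
# all three risk factors at once. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class AgentConfig:
    processes_untrusted_input: bool  # e.g. reads web pages or inbound email
    accesses_personal_data: bool     # e.g. queries an HR or customer database
    can_act_externally: bool         # e.g. sends messages or writes to systems

def violates_rule_of_two(cfg: AgentConfig) -> bool:
    factors = (cfg.processes_untrusted_input,
               cfg.accesses_personal_data,
               cfg.can_act_externally)
    # At most two of the three factors may be present at the same time.
    return sum(factors) >= 3

risky = AgentConfig(True, True, True)
contained = AgentConfig(True, True, False)  # no outward-acting capability
print(violates_rule_of_two(risky), violates_rule_of_two(contained))  # True False
```

As the guidelines themselves note, passing such a check is not sufficient: it catches one especially dangerous configuration, while issues such as data quality, bias, and minimization require separate analysis.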
The use of these systems requires careful attention to several issues:
Rather than merely identifying risks, the guidelines propose a broad set of measures. Among the most relevant:
All of this is based on a key idea: data protection must be integrated by design.
Agentic AI introduces real risks, derived from its autonomy, its integration capacity, and its high technical complexity.
However, it can also become a powerful ally in strengthening data protection, provided that its implementation is carried out properly.
The key lies in the approach: it is not just about using the technology, but about understanding it, controlling it, and defining the limits within which it should operate.
Because in this new scenario, the greatest risk is not artificial intelligence… but using it without understanding it.