Agentic AI and Data Protection

The Spanish Data Protection Agency (AEPD) has published a 2026 guide on agentic AI. Discover what it is, its risks, the legal obligations involved, and its impact on the processing of personal data.
Pasqual Guerrero
April 7, 2026

How Agentic AI Affects Data Processing

In February 2026, the Spanish Data Protection Agency (AEPD) published guidelines on agentic artificial intelligence, a technology that is gradually being incorporated into companies and public administrations. Its importance lies in the fact that it is not just a new tool, but a different way of carrying out personal data processing.

The guidelines do not aim to resolve specific cases, but to provide a framework for understanding what changes when processing relies on AI agents.

What Is an AI Agent?

The guidelines describe AI agents as systems that use language models to achieve objectives, adapting to their environment and acting according to circumstances.

Unlike simpler systems, these do not merely respond to requests. They can organize tasks, break them down into stages, access different sources of information, and execute actions within digital systems.

In other words, these are not passive tools, but systems capable of actively intervening in organizational processes.

Impact on Data Processing

One of the key points of the document is that the use of agentic AI can change how data processing is structured.

When these systems are integrated, they may alter:

  • the way operations are carried out
  • the actual scope of the processing
  • the actors involved
  • and the associated risks

For this reason, their implementation requires a review of regulatory compliance, even in existing processing activities.

Additionally, agents may access both internal information and external sources, which may involve the use of personal data that was not initially foreseen.

Key Vulnerabilities

The guidelines emphasize that risks do not arise solely from language models, but from the interaction between multiple components. It is precisely this complexity that creates new vulnerabilities.

Among the most relevant:

  • Interaction with the environment: The agent may exchange information with external services, which can lead to loss of control over data if appropriate safeguards are not in place.
  • Integration of multiple services: The combined use of tools, APIs, and external models creates processing chains that are difficult to control.
  • System memory: Memory is one of the most sensitive elements. It not only allows information to be stored to improve performance, but can also accumulate personal data persistently. The AEPD distinguishes between two key levels:
    • functional memory (necessary for operation)
    • management memory (logs and records)
    Both may contain personal data and require differentiated handling.
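As an illustration only, the two levels could be kept in separate compartments so that each follows its own retention policy. The class names and retention periods below are hypothetical, not prescribed by the guidelines:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class MemoryEntry:
    content: str
    created: datetime

@dataclass
class AgentMemory:
    """Hypothetical compartmentalized store: functional memory and
    management memory are kept apart so each can follow its own
    retention and access rules."""
    functional: list[MemoryEntry] = field(default_factory=list)  # needed for operation
    management: list[MemoryEntry] = field(default_factory=list)  # logs and records

    def purge(self, functional_ttl: timedelta, management_ttl: timedelta) -> None:
        """Apply differentiated retention to each compartment."""
        now = datetime.now()
        self.functional = [e for e in self.functional
                           if now - e.created < functional_ttl]
        self.management = [e for e in self.management
                           if now - e.created < management_ttl]
```

The point of the sketch is simply that "differentiated handling" becomes enforceable when the two memories are separate objects rather than one undifferentiated store.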

  • Autonomy in decision-making: The agent may decide how to act, what information to use, and what actions to execute. This raises particularly relevant issues when:
    • decisions have significant effects
    • inappropriate data is used
    • or there is no effective human oversight

Risk Assessment

One of the most interesting aspects of the guidelines is their approach to risk. The AEPD highlights that agentic AI changes the nature of processing and therefore requires a specific analysis.

As a guiding tool, it refers to the so-called “rule of 2”, which warns that the following three elements should never be present simultaneously:

  • access to uncontrolled information
  • access to sensitive data without restrictions
  • the ability to execute automatic actions with real-world effects

Although this is a simplification, it serves as a warning to identify particularly risky configurations.
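Reduced to its simplest form, the rule can be sketched as a configuration check: flag any setup in which all three risk factors coincide. The function and flag names below are our own shorthand, not AEPD terminology:

```python
def violates_rule_of_two(uncontrolled_information: bool,
                         unrestricted_sensitive_data: bool,
                         autonomous_real_world_actions: bool) -> bool:
    """Return True when all three risk factors are present at once,
    i.e. the configuration the guidelines single out as risky."""
    return (uncontrolled_information
            and unrestricted_sensitive_data
            and autonomous_real_world_actions)

# An agent that reads uncontrolled sources and holds sensitive data,
# but cannot execute real-world actions, stays within the rule:
print(violates_rule_of_two(True, True, False))  # False

# Adding autonomous actions with real-world effects trips the rule:
print(violates_rule_of_two(True, True, True))   # True
```

In practice the rule means giving up at least one of the three capabilities (or gating it behind human approval) before deployment.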

However, the AEPD itself notes that the analysis must go further, incorporating factors such as data quality, the presence of bias, and compliance with the principle of data minimization.

Data Protection Obligations

The use of these systems requires careful attention to several issues:

  • Determination of responsibilities: the number of actors increases and data flows become more complex
  • Transparency: it may be difficult to explain how decisions are made in multi-stage systems
  • Data minimization: there is a risk of accessing more data than necessary “by default”
  • Data subject rights: exercising them may become more complex if information is distributed across memories, logs, and external services

Recommended Measures

Rather than merely identifying risks, the guidelines propose a broad set of measures. Among the most relevant:

  • establishing specific governance for agentic systems
  • involving the Data Protection Officer (DPO) in their oversight
  • defining strict policies for data access and classification
  • compartmentalizing memory
  • ensuring traceability of decisions and actions
  • implementing meaningful human oversight
  • controlling the degree of system autonomy
  • and maintaining ongoing evaluations of system performance

All of this is based on a key idea: data protection must be integrated by design.
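One possible (hypothetical) way to combine two of these measures, traceability and meaningful human oversight, is to route every agent action through a log-and-approve gate. The class and field names below are assumptions for illustration, not part of the guidelines:

```python
from dataclasses import dataclass, field

@dataclass
class ActionGate:
    """Hypothetical gate: every attempted action is recorded
    (traceability), and actions flagged as significant require
    explicit human approval before they execute (human oversight)."""
    audit_log: list[str] = field(default_factory=list)

    def execute(self, action: str, significant: bool,
                human_approved: bool = False) -> bool:
        if significant and not human_approved:
            self.audit_log.append(f"BLOCKED (awaiting approval): {action}")
            return False
        self.audit_log.append(f"EXECUTED: {action}")
        return True
```

Because the log records blocked attempts as well as executed actions, it doubles as evidence of the oversight mechanism itself, which is what "by design" implies here.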

Conclusion: A Risky Technology… and an Opportunity

Agentic AI introduces real risks, derived from its autonomy, its integration capacity, and its high technical complexity.

However, it can also become a powerful ally in strengthening data protection, provided that its implementation is carried out properly.

The key lies in the approach: it is not just about using the technology, but about understanding it, controlling it, and defining the limits within which it should operate.

Because in this new scenario, the greatest risk is not artificial intelligence… but using it without understanding it.
