
The EU AI Act has a gap around autonomous AI agents

May 5, 2026

The Berlaymont building of the European Commission in Brussels, with EU flags in front of the glass facade.

A new Tech Policy Press article and an accompanying SSRN paper argue that the EU AI Act applies to AI agents in principle, but that its key rules fit poorly with open-ended systems that act on their own.

What this is about

Tech Policy Press published the analysis "The EU AI Act is Not Ready for Agents" on May 5, 2026. The core point: the EU AI Act covers AI agents in principle, but it was written before capable agents became widely visible in software development, office work and everyday personal tasks.

This is not an abstract legal debate. Agents read emails, operate web interfaces, write code, book services and can trigger real-world actions. That is where regulation gets harder: a chatbot gives an answer. An agent does something.

What AI agents actually do

An AI agent connects a model with tools, goals and context. Instead of only producing text, it can plan steps, call external systems, change files or prepare transactions. Depending on the setup, it works on a task for minutes or hours and makes interim decisions that a human does not confirm one by one.
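To make that concrete, here is a minimal, self-contained Python sketch of the pattern: a runtime executes tool calls step by step, making interim decisions without a human confirming each one. All names here (`TOOLS`, `run_agent`, the two toy tools) are illustrative assumptions, not taken from the article or any real agent framework.

```python
# Minimal illustration of the agent pattern described above: a runtime
# executes tool calls one after another, without a human confirming
# each step. All names are hypothetical examples, not from the article
# or any specific framework.

def search_kb(query: str) -> str:
    """Toy tool: look up an internal knowledge-base article."""
    return f"kb result for {query!r}"

def draft_order(item: str) -> str:
    """Toy tool: prepare (not place) an order for later approval."""
    return f"draft order: {item}"

TOOLS = {"search_kb": search_kb, "draft_order": draft_order}

def run_agent(plan: list[tuple[str, str]]) -> list[str]:
    """Execute a pre-planned (tool, argument) sequence. A real agent
    would ask a model to choose the next step at each iteration."""
    transcript = []
    for tool_name, argument in plan:
        result = TOOLS[tool_name](argument)  # no per-step human check
        transcript.append(f"{tool_name}({argument!r}) -> {result}")
    return transcript

print("\n".join(run_agent([("search_kb", "refund policy"),
                           ("draft_order", "replacement part")])))
```

The regulatory difficulty sits in that loop: each tool call is an action in the world, not just a piece of generated text.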

The analysis names five problem areas: performance, misuse, privacy, equity and oversight. One difficult point is that classic testing terms such as "accuracy" do not map cleanly to tasks where there is no single correct answer. In a housing or benefits decision, speed, fairness, fraud detection and traceability matter at the same time.

Why it matters

The EU AI Act is the world’s most important comprehensive AI law. It sets requirements for high-risk systems, general-purpose AI models, documentation, risk management and human oversight. But when capable agents reach the mass market only after a law has been drafted, grey zones appear.

For companies, this means compliance cannot stop at model cards and privacy notices. Any organization deploying agents needs technical logs, restricted permissions, recovery points and clear stop rules. For citizens, the issue is that a system may not only provide wrong information; it may act on their behalf.
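One way to read "restricted permissions" in practice is an allowlist that every tool call must pass before it runs, with every attempt logged. The sketch below is a hypothetical illustration; the action names and the policy itself are assumptions, not requirements from the Act.

```python
# Hypothetical least-privilege gate: an agent's tool calls pass
# through an allowlist before execution, and every attempt is written
# to an append-only log. Action names and policy are illustrative.

ALLOWED_ACTIONS = {"read_ticket", "search_kb", "draft_reply"}  # no send, no refund

class PermissionDenied(Exception):
    pass

audit_log: list[tuple[str, str]] = []  # append-only technical log

def guarded_call(action: str, handler, *args):
    """Run a tool only if the action is allowlisted; log either way."""
    if action not in ALLOWED_ACTIONS:
        audit_log.append(("DENIED", action))
        raise PermissionDenied(action)
    audit_log.append(("ALLOWED", action))
    return handler(*args)

guarded_call("draft_reply", lambda text: text, "Thanks for reaching out.")
```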

In plain language

Think of a normal chatbot as a cookbook: you read a suggestion and decide what to cook. An agent is more like giving someone your apartment keys and saying: "Make dinner." They shop, open cupboards and use appliances. At that point, checking the recipe is not enough. You must also manage keys, stove, budget and emergency stop.

A practical example

A mid-sized company lets an agent triage 2,000 support tickets per week. The agent may read customer data, retrieve internal knowledge-base articles and send standard replies directly. If only 0.5 percent of cases are misclassified, that affects ten customers per week. If the agent can also trigger refunds up to 100 euros, a text error becomes a financial and legal action.

A safer design would use separated roles, a maximum of 100 tickets per test run, human approval above 50 euros, a full action log and an automatic lock if refunds spike. Those operational controls are central for agents.
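As a sketch of what those controls could look like in code: the thresholds below (100 tickets per test run, human approval above 50 euros) come from the example; the spike limit and all class and method names are illustrative assumptions.

```python
# Sketch of the operational controls from the example above: a per-run
# ticket cap, human approval above a refund threshold, and an automatic
# lock when refunds spike. The cap and approval threshold mirror the
# example in the text; the spike limit and names are assumptions.

MAX_TICKETS_PER_RUN = 100      # cap per test run
APPROVAL_THRESHOLD_EUR = 50.0  # human sign-off above this amount
REFUND_SPIKE_LIMIT = 5         # lock after this many refunds per run (assumed)

class AgentLocked(Exception):
    pass

class RefundGuard:
    def __init__(self):
        self.tickets_seen = 0
        self.refunds_issued = 0
        self.locked = False
        self.action_log = []  # full action log, append-only

    def handle_ticket(self, ticket_id: str, refund_eur: float = 0.0) -> str:
        if self.locked or self.tickets_seen >= MAX_TICKETS_PER_RUN:
            raise AgentLocked("run limit reached or agent locked")
        self.tickets_seen += 1

        if refund_eur > APPROVAL_THRESHOLD_EUR:
            decision = "escalate_to_human"  # irreversible step needs approval
        elif refund_eur > 0:
            self.refunds_issued += 1
            decision = "auto_refund"
            if self.refunds_issued >= REFUND_SPIKE_LIMIT:
                self.locked = True          # automatic lock on refund spike
        else:
            decision = "auto_reply"

        self.action_log.append((ticket_id, refund_eur, decision))
        return decision
```

The point of a design like this is that the limits live outside the model: even if the agent misreads a ticket, the runtime refuses the action.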

Scope and limits

  • The Tech Policy Press piece is a legal and technical analysis, not a new law and not an official European Commission position.
  • Many agents remain limited today. The risk grows mainly when they receive real permissions in production systems.
  • The EU AI Act is not useless. It provides a framework, but standards and guidance need to address agents more explicitly.

SEO & GEO keywords

EU AI Act, AI agents, AI governance, high-risk AI, AI Office, GPAI, human oversight, prompt injection, AI regulation, Europe, Tech Policy Press

💡 In plain English

AI agents are systems that do not just answer, but execute tasks. That is where the EU AI Act is still too coarse: it regulates AI in general, but many details around permissions, control and undoing agent actions remain open.

Key Takeaways

  • Tech Policy Press published the analysis on May 5, 2026.
  • The EU AI Act applies to agents in principle, but only partly addresses their open-ended action logic.
  • Misuse, privacy, oversight and hard-to-measure performance are the main pressure points.
  • Companies should secure agents not only legally, but technically with permissions, logs and stop rules.

FAQ

Are AI agents not covered by the EU AI Act at all?

They are covered in principle. The criticism is that many detailed duties do not fit autonomous, tool-using systems well.

What is the main difference from chatbots?

A chatbot answers. An agent can interact with systems and trigger actions, such as changing code or preparing an order.

What should companies do now?

Give agents only the minimum required permissions, log actions, limit test runs and require approval for irreversible steps.

Sources & Context