cyberivy
AI Security, Europe, ISACA, AI Governance, Cybersecurity, EU AI Act, Phishing

35 percent of European firms cannot confirm AI cyberattacks

May 6, 2026

A server room with several black server racks, cables and glowing status indicators.

ISACA figures show a European security gap: AI is widely used at work, yet many organizations do not know whether attackers have already hit them with AI.

What this is about

ISACA published new European figures from an AI Pulse Poll on May 6, 2026. Thirty-five percent of surveyed European organizations cannot say whether they have already been hit by an AI-powered cyberattack.

This is not an abstract governance story. It is about visibility: if a company cannot tell whether phishing, social engineering or data manipulation has already been amplified by AI, it cannot prioritize risk properly.

What the study actually does

According to ISACA, the figures are based on a survey of 681 digital trust professionals in Europe, conducted from February 6 to 22, 2026. Seventy-one percent say AI-powered phishing and social engineering are harder to spot. Fifty-eight percent say AI makes it significantly harder to authenticate digital information. Thirty-eight percent report declining trust in traditional detection methods.

At the same time, companies are using AI widely at work. Eighty-two percent permit AI use, and 74 percent explicitly permit generative AI. But only 42 percent have a formal, comprehensive AI policy. One third do not even require disclosure when AI has contributed to work products.

Why it matters

The dangerous part is the gap between use and control. Employees use AI to write text, analyze data and automate tasks. Attackers use the same progress for more convincing emails, deepfake content, faster reconnaissance and better deception. If governance and training lag behind, a blind spot appears in the security model.

SecurityBrief UK corroborates the central numbers from ISACA’s release and frames them as a European visibility problem. According to ISACA, 45 percent of respondents cite the EU AI Act as a governance framework, while 26 percent follow no framework at all.

In plain language

Imagine a smoke detector that reliably goes off for ordinary smoke but cannot tell whether a new, odorless kind of smoke is present. Many companies are now running new machines in the kitchen, but they have not upgraded the detector. They do not know whether there has already been a fire, or whether an alarm went off that nobody heard.

A practical example

A mid-sized supplier with 800 employees allows generative AI for text and spreadsheets. It saves time: 60 teams use tools for proposals, support and analysis. At the same time, there is no disclosure rule, no central allowlist and no training against AI phishing. An attacker imitates a supplier with perfectly written emails and manipulated PDF attachments. Without new controls, it remains unclear whether the incident was ordinary spam or part of a targeted AI-assisted campaign.
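
What a “new control” against supplier impersonation could look like: the sketch below flags sender domains that closely resemble, but do not exactly match, known supplier domains, a pattern typical of convincing lookalike phishing. This is a minimal illustration under assumptions; the domain list and similarity threshold are hypothetical and not part of ISACA’s findings.

    # Minimal sketch: flag sender domains that imitate known supplier domains.
    # The supplier list and similarity threshold are illustrative assumptions.
    from difflib import SequenceMatcher

    KNOWN_SUPPLIER_DOMAINS = ["acme-metals.example", "nordpart.example"]  # hypothetical

    def similarity(a: str, b: str) -> float:
        """Similarity ratio between 0.0 (unrelated) and 1.0 (identical)."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def lookalikes(sender_domain: str, threshold: float = 0.85) -> list[str]:
        """Known domains the sender imitates without matching exactly."""
        return [
            d for d in KNOWN_SUPPLIER_DOMAINS
            if d != sender_domain.lower() and similarity(sender_domain, d) >= threshold
        ]

    print(lookalikes("acme-rnetals.example"))  # -> ['acme-metals.example']

A check like this catches only one narrow attack pattern. It is meant to illustrate that visibility requires concrete, testable controls rather than policy text alone.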

Scope and limits

  • The survey measures self-reports from professionals, not forensically confirmed attacks.
  • “AI-powered attack” is hard to delimit in practice because attackers rarely disclose their tooling.
  • An AI policy alone protects no one. It needs an asset inventory, logging, training, technical controls and clear owners; a minimal sketch follows this list.
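
How “policy plus technical controls” can interlock in practice is sketched below. This is a minimal illustration under assumptions, not ISACA guidance: the tool names, owners and approval flags are hypothetical placeholders. The point is that every tool has a named owner and that unregistered or unapproved use leaves a log trail.

    # Minimal sketch: an AI tool inventory with named owners and usage logging.
    # Tool names, owners and approval flags are hypothetical placeholders.
    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ai-inventory")

    @dataclass(frozen=True)
    class AITool:
        name: str
        owner: str      # accountable person or team
        approved: bool  # outcome of the internal approval process

    INVENTORY = {
        "draft-assistant": AITool("draft-assistant", "comms-team", approved=True),
        "sheet-analyzer": AITool("sheet-analyzer", "finance-ops", approved=False),
    }

    def record_use(tool_name: str, user: str) -> bool:
        """Log every use; flag tools that are unknown or not yet approved."""
        tool = INVENTORY.get(tool_name)
        if tool is None:
            log.warning("unregistered AI tool %r used by %s", tool_name, user)
            return False
        if not tool.approved:
            log.warning("unapproved AI tool %r used by %s (owner: %s)",
                        tool_name, user, tool.owner)
            return False
        log.info("approved AI tool %r used by %s", tool_name, user)
        return True

    record_use("sheet-analyzer", "j.doe")  # logs a warning and returns False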

💡 In plain English

Many companies already use AI productively, but their security controls have not adapted at the same speed. That makes it unclear whether new attacks are even being detected.

Key Takeaways

  • According to ISACA, 35 percent of European organizations cannot say whether they have experienced AI-powered cyberattacks.
  • Seventy-one percent say AI phishing and social engineering are harder to detect.
  • Only 42 percent have a formal, comprehensive AI policy.
  • Training, disclosure rules and technical visibility are now operational security issues.

FAQ

Are the 35 percent confirmed attacks?

No. The figure describes organizations that cannot confidently say whether they were affected.

Why is that still dangerous?

If attacks are not detected, controls, training and incident response cannot be prioritized well.

What is the first countermeasure?

A combination of AI inventory, policy, employee training, logging and clear approval processes.

Sources & Context