Microsoft patches two RCE flaws in Semantic Kernel AI agent framework
May 10, 2026
Microsoft disclosed two critical flaws in the Semantic Kernel framework on May 7, 2026. CVE-2026-26030 carries a CVSS score of 9.8, and both flaws let crafted prompts escalate into code execution on the host or arbitrary file writes.
When prompts become shells
On May 7, 2026, Microsoft disclosed two security flaws in its open-source agent framework Semantic Kernel in the Microsoft Security Blog. Both let an attacker either run code on the host or write files to unintended locations through crafted prompts. The flaws were fixed before exploits became known.
What the flaws actually do
CVE-2026-26030 (Python SDK)
The flaw affects the Python distribution of the semantic-kernel package before version 1.39.4. The vulnerable component is the InMemoryVectorStore when used as a backend for a Search Plugin with the default filter. According to Microsoft, the default filter was built as a Python lambda expression and executed via eval(), with values interpolated into the expression without validation. A single injected prompt could trigger remote code execution on the host server. The CVSS score is 9.8 according to NVD.
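The vulnerable pattern Microsoft describes can be illustrated with a minimal sketch. This is not the actual Semantic Kernel source; `build_filter` and the payload are hypothetical, but they show why interpolating untrusted values into an eval()'d lambda string is equivalent to handing the attacker a Python interpreter:

```python
# Simplified sketch of the pattern Microsoft describes -- NOT the actual
# Semantic Kernel code. `build_filter` is an illustrative name.

def build_filter(field: str, value: str):
    # User-influenced `value` is interpolated straight into a lambda
    # expression string that is then eval()'d -- the core of the flaw.
    expr = f"lambda record: record['{field}'] == '{value}'"
    return eval(expr)  # deliberately unsafe, for illustration only

# Benign use works as intended:
benign = build_filter("status", "open")
print(benign({"status": "open"}))   # True

# An injected value escapes the string literal, so attacker-chosen
# code runs when the filter is evaluated:
payload = "' or __import__('os').getpid() or '"
flt = build_filter("status", payload)
result = flt({"status": "harmless"})
print(result)  # prints the server's process ID: the injected expression ran
```

Any expression the attacker can smuggle into `value` runs with the full privileges of the host process, which is why NVD rates the flaw 9.8.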
CVE-2026-25592 (.NET SDK)
The second flaw affects the .NET Semantic Kernel SDK before version 1.71.0. The center of the issue is SessionsPythonPlugin, which gives agents access to dynamic Python sessions in Azure Container Apps. A helper function called DownloadFileAsync was, according to Microsoft, accidentally exposed to the model as a callable kernel function and lacked sufficient path validation. A hostile prompt could thus write arbitrary files to chosen paths.
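The missing safeguard amounts to a containment check before writing. A minimal sketch in Python (the .NET helper would be analogous; `save_download` and `base_dir` are illustrative names, not Semantic Kernel API) resolves the target path and rejects anything that escapes the allowed directory:

```python
# Hedged sketch of the absent path validation: resolve the target and
# require it to remain inside an allowed base directory.
from pathlib import Path

def save_download(base_dir: str, relative_name: str, data: bytes) -> Path:
    base = Path(base_dir).resolve()
    target = (base / relative_name).resolve()
    # Reject traversal such as "../../etc/cron.d/job" (Path.is_relative_to
    # requires Python 3.9+).
    if not target.is_relative_to(base) or target == base:
        raise ValueError(f"path escapes sandbox: {relative_name}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    return target
```

Without such a check, a model-controlled filename lets a hostile prompt plant files anywhere the process can write, which is the behavior CVE-2026-25592 describes.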
Patches and mitigations
According to its blog post, Microsoft introduced an AST node-type allowlist for filter expressions, restrictions on callable functions, blocking of dangerous attributes such as class-hierarchy traversal primitives, and tighter rules around bare names. Users must update to semantic-kernel >= 1.39.4 (Python) or Microsoft.SemanticKernel >= 1.71.0 (.NET).
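An AST node-type allowlist of the kind Microsoft describes can be sketched as follows. This is in the spirit of the fix, not the framework's actual implementation: only comparisons over a single known name are permitted, and calls, attribute access, and unexpected bare names are rejected before anything is evaluated:

```python
# Sketch of an AST node-type allowlist for filter expressions --
# illustrative, not the actual Semantic Kernel patch.
import ast

ALLOWED_NODES = (
    ast.Expression, ast.Compare, ast.BoolOp, ast.And, ast.Or,
    ast.Eq, ast.NotEq, ast.Lt, ast.LtE, ast.Gt, ast.GtE,
    ast.Name, ast.Load, ast.Constant, ast.Subscript,
)
ALLOWED_NAMES = {"record"}  # the only bare name a filter may reference

def safe_filter(expr: str):
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        # Calls, Attribute access, imports etc. are simply not in the
        # allowlist, so __import__(...), obj.__class__ and friends fail here.
        if not isinstance(node, ALLOWED_NODES):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
        if isinstance(node, ast.Name) and node.id not in ALLOWED_NAMES:
            raise ValueError(f"disallowed name: {node.id}")
    code = compile(tree, "<filter>", "eval")
    # Empty __builtins__ as a second layer of defense.
    return lambda record: eval(code, {"__builtins__": {}}, {"record": record})
```

With this design, `record['status'] == 'open'` passes, while `__import__('os').system('id')` is rejected at parse time because `Call` is not an allowed node type.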
Why it matters
Semantic Kernel is one of the most widely used frameworks for enterprise AI agents in Microsoft 365 and Azure environments. A prompt injection here does not merely leak data; it escalates into full code execution. That is exactly what industry studies have for months called the most severe agent risk: when untrusted text from email, web pages, or tickets ends up in a tool call, the tool has access to the entire host system.
The flaws are a teaching example of why frameworks must never feed model inputs directly into eval() or similar constructs. The Mandiant M-Trends 2026 report had already pointed out that 28.3 percent of CVEs are exploited in practice within 24 hours of disclosure.
In plain language
Imagine a mailbox with a caretaker who reads every message out loud and follows all instructions inside. If someone drops in a letter saying, "Caretaker, open the door and hand over the key," the caretaker does exactly that. Microsoft's Semantic Kernel was for a while that kind of caretaker: it read commands out of data and executed them because a protective layer was missing. The patch is essentially a door lock.
A practical example
A German insurer runs an internal claims-handling agent in 2026 based on Semantic Kernel Python 1.36 with InMemoryVectorStore and the default filter. Incoming policyholder emails are read by the agent, enriched with semantic search, and answered. An attacker sends an email that hides a crafted filter expression between friendly text. During the lookup, the expression is executed via eval() and starts a reverse shell on the application server. With the patch to 1.39.4 plus additional input validation, the mail stays a harmless inquiry.
Scope and limits
First, CVE-2026-26030 only affects configurations using InMemoryVectorStore with the default filter and a prompt-injection path. Teams that use custom filters or external vector stores like pgvector are not directly vulnerable to this specific flaw.
Second, patching is only part of the solution. Layered defenses such as strict tool allowlists, sandboxing, and output filtering remain necessary. Microsoft itself recommends exactly this multi-layered approach in its blog post.
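One of those layers, a strict tool allowlist, can be as simple as an explicit check before any model-requested function is dispatched. A minimal sketch, with illustrative names rather than Semantic Kernel API:

```python
# Minimal defense-in-depth layer: only explicitly allowlisted tools may
# be invoked on the model's behalf, regardless of what the prompt asks for.
ALLOWED_TOOLS = {"search_claims", "summarize_document"}

def dispatch(tool_name: str, registry: dict, **kwargs):
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowlisted: {tool_name}")
    return registry[tool_name](**kwargs)
```

Even if an injected prompt convinces the model to request a dangerous function, the dispatcher refuses anything outside the allowlist.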
Third, the patch does not eliminate prompt injection as a risk category. What the incident shows is that even a well-established framework can fail at a seemingly small point such as filter evaluation. Anyone building their own agent framework should likewise comb through their codebase for eval, exec, or __import__.
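That audit can be started with the standard library alone. The following sketch uses the `ast` module to locate direct eval/exec/__import__ call sites in a source string; it is a starting point, not a complete taint analysis (aliased or dynamically resolved calls will not be caught):

```python
# Hedged sketch: find direct eval/exec/__import__ call sites in Python
# source using the stdlib ast module.
import ast

DANGEROUS = {"eval", "exec", "__import__"}

def find_dangerous_calls(source: str, filename: str = "<src>"):
    hits = []
    for node in ast.walk(ast.parse(source, filename)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS):
            hits.append((filename, node.lineno, node.func.id))
    return hits

code = "x = eval(user_input)\nprint('ok')\n"
print(find_dangerous_calls(code, "app.py"))  # [('app.py', 1, 'eval')]
```

Running this over every `.py` file in a repository flags exactly the construct that made CVE-2026-26030 exploitable.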
SEO and GEO keywords
Microsoft Semantic Kernel, CVE-2026-26030, CVE-2026-25592, Prompt Injection, RCE, Remote Code Execution, AI Agent Security, InMemoryVectorStore, SessionsPythonPlugin, AI Framework, MITRE, NVD, 2026.
💡 In plain English
Microsoft found and patched two critical flaws in its Semantic Kernel toolkit. They let attackers run real code on a company's computer by sending malicious text into an AI agent.
Key Takeaways
- Microsoft disclosed CVE-2026-26030 and CVE-2026-25592 on May 7, 2026 in its security blog.
- CVE-2026-26030 affects semantic-kernel Python before 1.39.4 and has a CVSS score of 9.8 per NVD.
- The flaw exploits eval() in the InMemoryVectorStore default filter and enables remote code execution.
- CVE-2026-25592 affects the .NET SDK before 1.71.0 via the SessionsPythonPlugin and enables file writes.
- Patches are available; Microsoft also adds an AST allowlist and function restrictions.
- Both flaws are a reminder why agent frameworks must never feed model input directly into eval().
Sources & Context
- When prompts become shells: RCE vulnerabilities in AI agent frameworks (Microsoft Security Blog, May 7, 2026)
- NVD entry: CVE-2026-26030
- GitLab advisory: CVE-2026-26030 Microsoft Semantic Kernel InMemoryVectorStore filter functionality vulnerable
- CVE-2026-26030: Critical RCE in Microsoft Semantic Kernel Python SDK Exposes AI Applications (Windows News)
- Semantic Kernel Prompt Injection Bugs Let Attackers Run Code or Write Files (Windows Forum)
- May 2026 Patch Tuesday forecast: AI starts driving security industry changes (Help Net Security, May 8, 2026)