AIMap shows how exposed many AI endpoints are online
May 6, 2026

Bishop Fox released AIMap, an open-source tool that finds MCP, Ollama and other AI endpoints and scores risky misconfigurations.
What this is about
On May 6, 2026, Help Net Security reported on AIMap, a new open-source tool from Bishop Fox. It looks for publicly visible AI infrastructure, fingerprints frameworks such as MCP, Ollama, vLLM, LiteLLM, LangServe and Gradio, and scores how risky a discovered service appears from the outside.
That matters because many teams are deploying AI agents and local model servers faster than they are hardening authentication, CORS, TLS and tool permissions. A forgotten test server can become a real attack path.
What AIMap actually does
AIMap combines Shodan queries, live HTTP checks and Nuclei templates. The project documentation says it includes more than 32 preset queries for common AI signatures. It then checks protocol, framework, authentication state, exposed tools, models and possible system-prompt leaks.
Each endpoint receives a risk score from 0 to 10. Missing authentication, tool execution, open CORS rules, missing TLS, visible models and combinations such as “no login plus code execution” weigh heavily. Active tests, including prompt injection and tool-abuse checks, are documented for authorized testing and must be enabled deliberately.
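A scoring function along these lines could look as follows. The weights are invented for illustration; AIMap's real scoring model will differ, but the shape is the same: individual findings add points, and dangerous combinations add extra.

```python
# Sketch of a 0-10 risk score over the factors the article lists.
# Weights are hypothetical, chosen only to show the combination effect.
def risk_score(no_auth: bool, tool_exec: bool, open_cors: bool,
               no_tls: bool, models_visible: bool) -> float:
    score = 0.0
    score += 4.0 if no_auth else 0.0        # missing authentication
    score += 3.0 if tool_exec else 0.0      # tool/code execution exposed
    score += 1.0 if open_cors else 0.0      # wildcard CORS
    score += 1.0 if no_tls else 0.0         # plaintext HTTP
    score += 1.0 if models_visible else 0.0 # model list readable
    # Combinations such as "no login plus code execution" weigh extra.
    if no_auth and tool_exec:
        score += 1.0
    return min(score, 10.0)
```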
Why it matters
Bishop Fox’s demo cites more than 175,000 exposed Ollama instances and more than 8,000 open MCP servers. One demonstration scan found nearly 2,000 live endpoints across 50 countries; 91 percent of them lacked authentication. Those figures should be treated as vendor demo data, not an official census. But the pattern is clear: AI infrastructure often reaches the internet before it is secured.
For developers, the point is practical. MCP servers can call tools, read files, query databases or execute commands. If such a system is publicly reachable, “it is only an experiment” is no longer a security model.
In plain language
Imagine a workshop with a robot arm, a laptop and a cabinet full of tools. AIMap does not simply walk in and start using the tools. It looks from the street to see whether the door is open, whether dangerous tools are visible and whether someone mistook a sign saying “do not touch” for a lock.
A practical example
A startup runs an internal MCP prototype for support automation. The server has a query_db tool, a file_read tool and a test endpoint without login. On the local network nobody notices. After a cloud deployment, port 8000 becomes publicly reachable. AIMap would find the endpoint, fingerprint it as MCP, check authentication and assign a high score because no authentication is combined with a database tool. The team would get a concrete task list: restrict access, scope tools, enforce TLS and turn on logging.
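The “restrict access, scope tools” part of that task list can be sketched in a few lines. The names here (ALLOWED_TOOLS, handle_call) are hypothetical, not from AIMap or any MCP SDK; the point is only that every tool call should pass an authentication check and an allowlist before anything is dispatched.

```python
import hmac

# Hypothetical hardening sketch for the example server: the risky
# file_read tool and the unauthenticated test endpoint are gone.
ALLOWED_TOOLS = {"query_db"}
EXPECTED_TOKEN = "replace-with-secret"  # load from a secret store in practice

def handle_call(token: str, tool: str) -> str:
    # Constant-time comparison avoids leaking token length via timing.
    if not hmac.compare_digest(token, EXPECTED_TOKEN):
        return "error: unauthorized"
    if tool not in ALLOWED_TOOLS:
        return "error: tool not allowed"
    return f"ok: dispatching {tool}"
```

With checks like these in front of every tool, a scanner that finds the port open still sees an endpoint that refuses to act without credentials.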
Scope and limits
- AIMap is not permission to scan other people’s systems. Active tests belong only on owned or explicitly authorized targets.
- The demo numbers are a warning signal, not a complete global census of AI endpoints.
- The tool detects known patterns. New frameworks, unusual ports or partially broken authentication may still slip through.
💡 In plain English
AIMap is like an outside security walk-through: it shows which AI servers are visible, whether they require login and which dangerous tools they expose.
Key Takeaways
- AIMap is a Bishop Fox open-source tool for publicly visible AI infrastructure.
- It fingerprints MCP, Ollama, vLLM, LiteLLM, LangServe, Gradio and OpenAI-compatible endpoints.
- Risk scores rise especially when authentication is missing and tools are exposed.
- Active tests are intended only for owned or explicitly authorized targets.
FAQ
Is AIMap an attack tool?
It is built for defensive research and authorized testing. Its active test modules can be misused and should not be run against third-party systems.
Why are MCP servers especially sensitive?
MCP connects models to tools. If tool permissions are too broad, an attacker may trigger actions rather than merely chat.
What should teams check first?
Public exposure, authentication, tool permissions, TLS, CORS and logs.
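That checklist can be turned into a quick self-assessment. The function below is a simplified sketch, not AIMap: the header name is standard HTTP, but the pass/fail rules are reduced to the bare minimum for illustration, and log review still has to happen by hand.

```python
# Hypothetical first-pass self-check over an endpoint's URL, response
# headers and known auth requirement; returns a list of findings.
def first_checks(url: str, headers: dict, requires_auth: bool) -> list[str]:
    findings = []
    if not url.startswith("https://"):
        findings.append("no TLS")
    if not requires_auth:
        findings.append("no authentication")
    if headers.get("Access-Control-Allow-Origin") == "*":
        findings.append("open CORS")
    return findings
```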