This week, Microsoft disclosed a critical server-side request forgery (SSRF) vulnerability in their own Azure MCP server. Let that sink in. The company that runs one of the world's three largest cloud platforms shipped an MCP implementation with a flaw that could let attackers pivot into internal Azure services.
If Microsoft can't secure their MCP servers, what makes you think yours are safe?
The Model Context Protocol has gone from an interesting experiment to critical infrastructure in barely a year. Thousands of organisations are deploying MCP servers to give AI agents access to their tools, databases, and APIs. And almost nobody is scanning them for vulnerabilities.
What Is MCP, and Why Should You Care?
The Model Context Protocol — originally developed by Anthropic and now adopted across the industry — is the standard way AI agents interact with external tools. Think of it as a universal adapter: instead of building custom integrations for every AI model, you build one MCP server and any compatible agent can use your tools.
An MCP server might expose tools that:

- query your production database
- read and write files on your servers
- send emails on behalf of users
- access your cloud infrastructure APIs
- execute code
- interact with third-party SaaS platforms
This is incredibly powerful. It's also incredibly dangerous.
The Attack Surface Nobody's Watching
Traditional security tooling doesn't understand MCP. Your vulnerability scanners, your SIEM, your WAF — none of them are designed to detect MCP-specific attacks.
SSRF: The Azure Wake-Up Call
Server-side request forgery through MCP servers is the most immediately dangerous attack vector. An AI agent calls an MCP tool, the attacker crafts input that manipulates the URL or request target, and the MCP server — running inside your network — makes a request to an internal resource. Suddenly, the attacker has access to your cloud metadata endpoint, internal APIs, or admin panels.
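One common mitigation is to validate any attacker-influenced URL before the server fetches it. The sketch below is a minimal, illustrative guard (the function name and policy are assumptions, not part of any MCP SDK): it resolves the hostname and rejects anything that maps to a private, loopback, or link-local address, which covers the cloud metadata endpoint at 169.254.169.254.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to internal addresses (illustrative policy)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve every address the hostname maps to; an attacker can
        # register a public DNS name that points at an internal IP.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Blocks RFC 1918 ranges, 127.0.0.0/8, and 169.254.0.0/16 --
        # the last is where cloud metadata endpoints live.
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True
```

Note that a resolve-then-fetch check like this is still exposed to DNS rebinding (the name can point somewhere else by the time the request fires); production code should pin the resolved address when making the actual request.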
Prompt Injection via Tool Results
An MCP tool fetches data from an external source. That data contains hidden instructions — invisible to humans but parsed by the AI agent. The agent follows the injected instructions, potentially exfiltrating data or executing unintended actions. All of this happens within the agent's normal operation — no alarms, no logs, no alerts.
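There is no complete defence against this today, but tool results can at least be screened before they reach the agent. The heuristic below is a sketch under stated assumptions: the phrase list and function name are invented for illustration, and a real deployment would need a far richer ruleset. It flags invisible Unicode format characters (a common carrier for instructions humans can't see) and instruction-like phrases.

```python
import re
import unicodedata

# Illustrative pattern list -- a real deployment needs far broader coverage.
SUSPICIOUS_PHRASES = re.compile(
    r"(ignore (all )?previous instructions|you are now|system prompt|"
    r"do not tell the user|exfiltrate)",
    re.IGNORECASE,
)

def flag_tool_result(text: str) -> list[str]:
    """Return a list of findings for a tool result; empty means nothing flagged."""
    findings = []
    # Unicode category "Cf" (format) covers zero-width characters often
    # used to hide instructions from human reviewers.
    if any(unicodedata.category(ch) == "Cf" for ch in text):
        findings.append("invisible-characters")
    if SUSPICIOUS_PHRASES.search(text):
        findings.append("instruction-like-phrase")
    return findings
```

Flagged results can be quarantined, logged, or shown to a human rather than silently passed to the agent.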
Excessive Permissions and Missing Authentication
Survey the MCP servers in your organisation right now. How many run with admin-level permissions? How many lack any form of authentication? How many expose tools that aren't needed by the agents using them? If your experience is anything like what we see in security assessments, the answer is "most of them."
What You Should Do Today
- Audit your MCP inventory. You can't secure what you don't know exists.
- Scan your configurations. Tools like MCPScan can identify misconfigurations and vulnerabilities automatically.
- Apply least privilege. Every MCP tool should have the minimum permissions required.
- Monitor tool invocations. Log every MCP tool call, who invoked it, and what it accessed.
- Test for injection. Include MCP prompt injection testing in your security assessment programme.
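The monitoring step above can be sketched as a thin audit wrapper around tool execution. The field names and wrapper shape are assumptions to adapt to your own logging pipeline, not a standard MCP mechanism:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("mcp.audit")

def audited_call(agent_id: str, tool_name: str, args: dict, tool_fn):
    """Log who invoked which tool with what arguments, then execute it."""
    # Structured JSON lines are easy to ship to a SIEM; field names
    # here are illustrative.
    audit_log.info(json.dumps({
        "ts": time.time(),
        "agent": agent_id,
        "tool": tool_name,
        "args": args,
    }))
    return tool_fn(**args)
```

Logging before execution (not after) means the invocation is recorded even if the tool call crashes or hangs.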
The MCP ecosystem is moving fast. Security tooling needs to move faster. The window between "this is theoretical" and "this is being actively exploited" is closing rapidly. Don't wait for your own Azure moment.