BLOG

"P" in MCP stands for privacy

No it doesn’t. MCP has security vulnerabilities: it’s also a privacy nightmare.

Scroll to read
4/10/2025

Interest in Anthropic’s Model Context Protocol has blown up.

People are using MCP to enable MCP Clients like Claude Desktop to

all with natural language.  

But MCP has security challenges.

Over the weekend, one of the founders of Invariant Labs wrote The S in MCP stands for Security highlighting the

  • Command injection vulnerabilities: using unsafe shell calls in the server
  • Tool poisoning: adding malicious instructions inside the tool description
  • The Rug Pull: silently changing tool definition after installation
  • Cross-server tool shadowing: A malicious server makes the agent client override or intercept calls to a different trusted server

These security vulnerabilities appear to focus on local instances of MCP.

But as MCP and related infrastructure patterns scale to the consumer web, MCP servers will be deployed remotely, and a new host of privacy problems will appear.

Remote MCP are Data Controllers

Under CPRA, a Data Controller determines the purposes for which data can be processed while a Data Processor “Service Provider” acts under the instructions of the Controller.

In consumer applications, the web application is almost always a Data Controller, one that gets to choose how data is used, typically under terms that supply the web application rights to use data for the purposes of “improving the service” – a vague coverall that enables exercise of data for flexible purposes.

For instance, OpenAI ChatGPT terms of service say they

may use Content to provide, maintain, develop, and improve our Services,

where Content is anything you provide ChatGPT and “improve our Services” is never left defined.

OpenAI is clearly a Data Controller for ChatGPT.

Remote MCP will be the same, but arguably worse, in the sense that you’re giving remote MCP your authentication credentials that will come with terms no one will read that likely include usage for purposes including, “improving their service” which could mean anything.

To be clear, Crosshatch also have this language in our Consumer Privacy Policy – that said, our usage is bound by our obligations to our user-controlled data connections pattern and a clear specification of what Our Services exactly are.

Remote MCP are like aggregators. Last gen aggregators sold customer data to adtech and hedge funds

When iPhone and Android launched without flashlights, apps rushed in to provide services that turned camera flash features into a flashlight.

But these apps also asked for location and other permissions on install.

Users usually installed these apps in a rush, not noticing the strange permissions requested on install.

WHEN I DOWNLOADED the Flashlight app to my iPhone, I was in a jam. I was camping, I think. Or maybe a pen had rolled under my couch. I remember that smug sense of self-congratulation after I downloaded the software, which converted the iPhone's LED flash into a steady and bright beam of light.

But I shouldn't have been so pleased with myself. Though I didn't realize it at the time, I was potentially handing over a boatload of data to advertisers as well.

Wired magazine reported in 2014.

The late Mint.com was a Data Controller that provided a unified view of your finances in an easy to use app by aggregating your logins. Its parent company Yodlee also sold customer data to hedge funds.

These patterns reflect a larger challenge in MCP’s chosen “privacy as consent” pattern: overwhelming the user with consent messages that they’re ill-equipped to meaningfully understand.

MCP’s consent nightmare

MCP authors have eschewed a central trusted authority for the protocol, preferring to leave the user responsible for allowing access.

We would prefer to avoid a central trust authority. We want clients and servers to be able to determine whether they trust each other based on user consent, rather than delegating to any kind of registry of trusted implementations.

MCP’s Justin Spahr-Summers wrote in November.

This has two problems:

  • Unbounded consent space
  • Cognitive overhead of consent

Since MCP allows arbitrary function interfaces to integrations, the user is left to parse what these allowed functions even do. Studying one MCP server interface pays no dividends to the user for understanding another because each one could be different. A function get_personalized_recommendations from one (remote) MCP server could mean something entirely different on another. [As the Invariant Labs piece highlighted, this function could also change post install!]

Even if there were only one function to consent to, this still may be too complicated for most users.

We still collectively don’t know what all these cookie consent messages say. Why would MCP consent be any different? MCP consent messages are a cookie fatigue redux.

These banners, designed to give users control over their data, have ironically achieved the opposite. Most users, overwhelmed by the sheer volume and complexity of the choices, simply click “Accept All” to get to the content (if it ever loads). This defeats the purpose of consent and creates a pervasive sense of fatigue.

Privacy scholar Daniel Solove calls it a Kafkaesque Consent.

[Kafka's] work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence. In Kafka’s world, characters readily submit to authority, even when they aren’t forced and even when doing so leads to injury or death. The victims are blamed, and they even blame themselves.

User consent is convenient from an open protocol design perspective, but it’s poor from a user experience or privacy perspective. Getting consent that a user blindly accepts is not consent.

A simplified consent layer

Consumer AI agents are supposed to save us time.

Part of the UX game for agents is to create the minimal interface for unleashing agent action such that they don’t (too often!) do something we don’t want. If agents ask us for permission for every little choice, they’ll collapse into the “world’s most insecure intern”.

Third party applications will have fine grained access to our context and use it to anticipate our needs and adapt to our state.

But this statefulness, particularly as part of an open internet, should heed lessons from the past decade on privacy UX and incentives.

It is not reasonable to expect end users will be able to make informed consent decisions for every candidate MCP server. This is why e.g., we’re seeing folks like Auth0 enable enterprise administrators to set consent permissions at a company level.

To the extent patterns like MCP scale to consumer, consumers will need a similar trusted and simplified consent interface to easily and effectively manage cross-application connections, where it’s clear, intuitive, and well-motivated how and why applications are accessing otherwise private information.

We’ve long felt like applications should “call your agent” but where “your agent” is any private AI resource with permissioned access to your context.

In 2025 privacy feels less pressing. But principles of privacy like data minimization – e.g., apps should only access the data that’s necessary and proportionate to the service they’re providing you – still feel fundamentally reasonable.

We shouldn't have to auth private resources into multiple remote MCP servers. We should be able to do it once and link those resources out to new apps.

P in MCP clearly doesn’t stand for privacy.

But it should.

See what Crosshatch can do for your business.

Crosshatch for businesses

Collecting our thoughts

CREATE UNIQUE
EXPERIENCES
FOR EVERY USER
start building