
Personal MCP server

Model Context Protocol (MCP) allows users to bring tools and context to desktop AI clients. What about web clients?

12/15/2024

AI capability and utility are downstream of context.

This simple truth has led every application to build its own fragmented infrastructure of data integrations, CDPs, and onboarding forms – all to collect and integrate context into their application.

Last month, Anthropic released the Model Context Protocol (MCP) to change this pattern.

What is MCP?

MCP is an open protocol that standardizes how applications provide context to LLMs.

Using supported local client applications like

  • Anthropic's Claude Desktop
  • The code editor Zed
  • The code assistant Continue

developers can contribute MCP servers that you can host locally to interact with context sources and tools like Slack, Google Maps, or your local filesystem.

MCP servers give MCP clients a well-defined and controlled interface to the underlying integrations. For instance, in Slack you can give Claude Desktop the ability to

  • List channels slack_list_channels
  • Post messages slack_post_message

or in Google Maps

  • Search for a place maps_search_places
  • Get driving directions maps_directions

or in your local filesystem

  • Get files in a directory list_directory
  • Search for files returning all matches search_files

MCP server configurations give MCP clients each tool's

  • Description: so the client knows what the tool is and when to use it
  • Interface: so the client knows how to call it

This lets clients interact with underlying tools and context sources without directly managing auth or relying on the coarsely scoped interfaces those sources expose natively.
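Concretely, per the MCP specification, each tool is advertised with a name, a description, and a JSON Schema interface. Here's a sketch of how the reference Slack server describes slack_post_message in its tools/list response (abridged):

```json
{
  "tools": [
    {
      "name": "slack_post_message",
      "description": "Post a new message to a Slack channel",
      "inputSchema": {
        "type": "object",
        "properties": {
          "channel_id": {
            "type": "string",
            "description": "The ID of the channel to post to"
          },
          "text": {
            "type": "string",
            "description": "The message text to post"
          }
        },
        "required": ["channel_id", "text"]
      }
    }
  ]
}
```

The description tells the model when to reach for the tool; the schema tells it exactly what arguments to supply.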

MCP architecture.

The best AI applications are equipped with relevant tools and context. So far, developers have had to roll their own integrations; this open protocol makes equipping supported MCP clients with these tools easy.

Indeed, pairing LLMs with enterprise data is a core theme for AI: an open protocol is poised to accelerate it.

Developers are already finding the protocol powerful.

Instead of equipping Claude Projects with context manually, you can let Claude search your filesystem and generate completions over what it finds.


So far, these examples have centered on interactions where the client and the server are self-hosted or trusted. For instance, your client sending a search request to Google Maps is fine because it’s both

  • Consented: the client checks with the user before invoking the tool
  • Trusted: your use of, e.g., Google Maps operates under a license agreement you’ve already accepted with Google

But what about cases where both the client and server are owned by someone else?

MCP for Consumer Web Applications

Consumer web or mobile applications may also want to equip themselves with tools and context – particularly your context.

This case is different from those we’ve considered above: here you’re engaging with

  • applications hosted by third parties
  • that may wish to access your context (not just context they own)

e.g., to provide a more personalized experience. Examples include

  • A Walmart or Target that’d like to save you time and money by keeping you stocked up
  • A DoorDash or Uber that’d like to schedule food and rides around your schedule and taste
  • A dating app that’d like to match you with users who share your interests
  • A TripAdvisor or Booking that’d like to surface travel that fits your tastes

In these cases, client applications are hosted by third-party businesses and may wish to provide AI-powered services that reference your data – your past purchases, reservations, favorite songs, and so on.

Today’s MCP doesn’t handle this case: the client is local and trusted. You configure MCP servers with authentication keys that, as a developer, you manage yourself.

But what about this case where the client is owned by someone else? And when you don’t really want to bother managing your own authentication keys?

Enter LLM Sampling

Anthropic’s MCP introduces the concept of Sampling:

Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated agentic behaviors while maintaining security and privacy.

Anthropic shares no real-world examples of sampling yet, but it appears to set the stage for servers to request completions from whatever model the client can reach. The structure of a sampling call is informative.

Standardized message request for sampling calls.
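Per the MCP specification, a sampling/createMessage request looks roughly like the following (an abridged sketch; the prompt text and priority values are illustrative):

```json
{
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "What are the latest files in my project?"
        }
      }
    ],
    "modelPreferences": {
      "hints": [{ "name": "claude-3-haiku" }],
      "costPriority": 0.9,
      "speedPriority": 0.8,
      "intelligencePriority": 0.4
    },
    "systemPrompt": "You are a helpful assistant.",
    "includeContext": "thisServer",
    "maxTokens": 250
  }
}
```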

Notice how the request abstracts away the

  • Model: via modelPreferences hints and priorities, rather than a hard-coded model ID
  • Context: includeContext of thisServer vs. allServers (though it’s not yet clear exactly what context each setting pulls in)

allowing the server to just say

i need a completion from a cheap, fast and somewhat smart model with the following context

This appears to have direct application for the third-party client, which could call out to a secured AI client for LLM completions of permissioned context.

The sampling capability in MCP points toward a potential solution for this third-party context challenge, though it requires careful extension to work in consumer applications.

MCP for personalized applications

Suppose you’re using a travel site – a remote third party client application.

It endeavors to personalize your experience, possibly with AI.

But it doesn’t have the data needed to adapt its experience to your context. It only knows what you’ve said or done together – which isn’t the full story.

This is the setup for MCP: you equip clients with context and tools to give the application more functionality and adaptability.

For instance, Claude Desktop may not be up to date on current events. We can solve that by equipping it with the Exa AI MCP server, giving it web search.
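In practice this is a small addition to claude_desktop_config.json (a sketch; exa-mcp-server and the EXA_API_KEY variable are Exa's published package and setting, but check their docs for the current names):

```json
{
  "mcpServers": {
    "exa": {
      "command": "npx",
      "args": ["-y", "exa-mcp-server"],
      "env": {
        "EXA_API_KEY": "your-api-key-here"
      }
    }
  }
}
```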

Similarly, we could equip the travel application with an MCP server that has sampling enabled, so it can request LLM completions over private user context and adapt itself to you.
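In sampling terms, the travel site's server-side request might look like this hypothetical call (the prompt and values are illustrative): the server receives a completion informed by the user's context rather than the raw context itself.

```json
{
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Given this user's recent hotel and restaurant confirmations, suggest three destinations they'd likely enjoy."
        }
      }
    ],
    "modelPreferences": {
      "costPriority": 0.7,
      "intelligencePriority": 0.8
    },
    "includeContext": "allServers",
    "maxTokens": 400
  }
}
```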

That said, this case has a few differences.

First, candidate user context for personalized applications lives across many remote application servers, each with its own data schema and latency. Ideally, retrieval would operate over a unified data model with managed latency, and keys would be managed on behalf of users, who may not have the time, interest, or ability to manage keys themselves.

Second, we’d want a security model with fine-grained access control over the context passed to the LLM client. We don’t want to hand over all context, or be at the whim of the remote service’s native API scopes like https://www.googleapis.com/auth/gmail.readonly. We’d also like to be able to show how LLM completions of permissioned context passed to third parties satisfy privacy law.

Finally, we’d want a consent model that doesn’t require consent at each sampling invocation. Consent must be meaningful and intuitive, not the product of click-through fatigue.

So while MCP lays the groundwork, Crosshatch builds upon it with a user-centric approach.

LLM Sampling for remote data and apps

Crosshatch solves these issues by enabling secure LLM sampling of private data by third party applications.

Crosshatch's version of MCP Sampling combines LLMs, a unified data representation, and fine-grained access controls in user-governed cloud compute managed by Crosshatch.

Developers can add Crosshatch Link to their site or apps to invite users to link context to their application for a specified purpose, e.g., “personalized travel”. This initiates a sync to the user’s trusted cloud and transforms the data into a unified events representation or index: everything the user has

  • Liked
  • Purchased
  • Confirmed
  • Exercised
  • Posted

across applications. The user lets their trusted cloud manage auth keys on their behalf.
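For illustration only (this sketch is hypothetical, not Crosshatch's actual schema), a single record in such a unified events index might look like:

```json
{
  "event": "purchased",
  "timestamp": "2024-11-30T18:24:00Z",
  "source": "amazon",
  "item": {
    "name": "Trail running shoes",
    "price": 89.99,
    "currency": "USD"
  }
}
```

Because every source is normalized to the same event shape, permissions and retrieval can be expressed once, over events, rather than per integration.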

In the Link, the user chooses what data to share with this application, agreeing to make that context available for secured LLM completions.

This unified events representation enables fine-grained access controls that can share, e.g.,

“only recent purchases”
“hotel and restaurant confirmations”

irrespective of the data model or permission scopes native to the remote service that provides the data. [This is a reason why AI needs AI-native Auth.] It also enables natural retrieval for LLM completions.
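For example, a grant over that representation might scope access like the following hypothetical policy (field names are illustrative, not Crosshatch's actual format; "P90D" is an ISO 8601 duration meaning the last 90 days):

```json
{
  "purpose": "personalized travel",
  "allow": [
    { "event": "purchased", "within": "P90D" },
    { "event": "confirmed", "category": ["hotel", "restaurant"] }
  ]
}
```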

Notice the similarity between the sketched MCP Sampling call and the Crosshatch API call: both enable the server (in Crosshatch’s case, the third-party application) to request LLM completions of private context from any chosen AI model.
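To make the parallel concrete, here is what such a request could look like in a hypothetical Crosshatch-style call (field names illustrative, not the actual API):

```json
{
  "model": {
    "costPriority": 0.8,
    "intelligencePriority": 0.6
  },
  "messages": [
    {
      "role": "user",
      "content": "Suggest hotels this user would enjoy."
    }
  ],
  "context": {
    "purpose": "personalized travel"
  }
}
```

As with MCP sampling, the caller states model preferences and a purpose-scoped context rather than receiving raw data or managing keys.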

Finally, the Link enables a consent model that’s both consistent with privacy law and doesn’t require repeated consent on each sampling call.

Privacy law requires that data access should be “reasonably necessary and proportionate” to the use case, where “reasonableness” is determined by the user. That is, so long as linked applications make LLM calls in keeping with the Link-specified purpose, LLM sampling should satisfy privacy requirements.

We started Crosshatch well before MCP dropped and we do not currently use MCP. However, both efforts point toward the same future: one where applications can access rich user context through well-governed protocols rather than building isolated context silos. By allowing consumers to attach purpose-bound LLM completions of their context to the apps they already use, we can make every application more intelligent while preserving user control.

See what Crosshatch can do for your business.
