
AI cookie

Enterprises have AI applications that run on all their data. What about consumers?

12/7/2024

Last week Wired magazine interviewed Brian Chesky on AI. Chesky explained:

Up until 2 years ago very few people were talking about AI and then of course ChatGPT launched … and while it’s been so exciting, I bet you … your daily life has[n’t] been any different because of AI.

Here’s a test: take your phone out; look at all the apps on your home screen and ask what of them have changed because of generative AI? … I bet you none of the apps you use  – including Airbnb – are fundamentally different in a world of AI. What did you do that was changed fundamentally because of AI?

We agree. Despite the incredible progress in AI – not much has changed for consumers.

We're disappointed but not surprised.

We started Crosshatch as a bet that – aside from AI native apps like chat – consumer AI wouldn’t be that interesting unless there was a new data layer for personalized applications.

The logic is simple: Personalization is downstream of context.

Enterprises get it. They have AI apps that run on all their data. What about consumers?

Toward apps with complete context

Personalization is downstream of context.

Without complete context, you don’t have meaningfully adaptive applications. It’s true we have great new tooling like Vercel’s AI SDK – but without dynamic context there’s little motivation for generative interfaces.  

The history here is straightforward: before language models, personalization capability was determined by data infrastructure and engineering sophistication.

- Are you tracking all data?
- Is it well organized and accessible?
- Have you hired talented machine learning engineers to transform it into models?

But now, personalization quality is determined entirely by access to relevant context.

People are adding blood test results to Claude Projects to get personalized health advice. You wouldn’t ask Claude those same questions if it had only partial access!

But that’s what we’re effectively doing with internet applications under a first-party data regime.

Today, businesses ship personalization based on the context they’ve happened to collect about us. This follows 20 years of business canon: if businesses can build bigger Context Moats than their competitors, they can make it more expensive for consumers to switch to a competitor.

This is a good strategy for corporate boards but limits user experience.

But you wouldn’t use an AI without all the context. Why would you use an application with only a slice? [Businesses surely do not.]

But that’s where we are today. Every app is riding the old strategy:

- Collect more data
- Organize and activate it better than your competitors
- Induce switching costs

The trouble is, this data just isn’t enough for AI. The first-party data strategy of the mobile and cloud era breaks in the age of AI.

Personalization just isn’t a prize that can be won on first-party data alone. Shipping natural language search doesn’t make your search personalized.

The end-state is applications and interfaces that adapt dynamically to context that they might not have directly observed.

Enterprises know they need applications that run on all the context. Why don’t we demand it for consumers?

Enterprise leading consumer

In the mobile era, consumer led enterprise. In the intelligence era, it seems enterprise could lead consumer. 2024 was the year of enterprise AI. Will 2025 be the year of consumer AI?

Today’s enterprises expect to be able to run AI on all their data. It’s a clear, enduring trend. While it’s swept the enterprise, we believe it’s only a matter of time before it arrives for consumers. You can’t escape data gravity.

The enterprise case is simpler, however, because enterprises own all their data. The enterprise is the Data Controller. There’s no third party (or at least enterprises can demand Bring Your Own Cloud deployments).

Consumer internet applications make this complicated, as applications are hosted by a third-party Data Controller, who typically has ‘worldwide,’ ‘irrevocable’ rights to do whatever it wants with data it accesses.

Apps haven’t changed in the way Chesky describes because no app can have all our data: that would be a privacy nightmare for everybody.

As we’ve written at length, AI can make this simpler. Previously, everyone built their own machine learning models on their own hard-won first-party context. But now “in-context learning”

just adding user context and a few examples to a prompt

is poised to work just as well, for cheaper both absolutely and on the margin. Our previous modeling of

people like this tend to do that
people with these features tend to act in a certain way

via feature engineering and machine learning models looks incredibly old-fashioned and naive in the age of intelligence. That modeling slices the world into legible, consistent, and dense features while the world around it is unstructured, ambiguous, and often sparse.
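To make the contrast concrete, here’s a minimal sketch of in-context personalization. Every name here is illustrative (there is no real `call_llm` or fixed context schema); the point is that the “model” of the user is just assembled text handed to an LLM, not a trained artifact:

```python
# Illustrative sketch: personalization via prompt assembly, not model training.
# All field names and helpers are hypothetical.

def build_personalization_prompt(user_context: dict, examples: list, task: str) -> str:
    """Assemble permissioned user context and a few examples into one prompt."""
    context_lines = "\n".join(f"- {k}: {v}" for k, v in user_context.items())
    example_lines = "\n".join(f"- {e}" for e in examples)
    return (
        f"User context (shared with permission):\n{context_lines}\n\n"
        f"Recent examples of the user's behavior:\n{example_lines}\n\n"
        f"Task: {task}"
    )

prompt = build_personalization_prompt(
    user_context={"diet": "vegetarian", "city": "Austin"},
    examples=["ordered Thai last Friday", "saved three ramen recipes"],
    task="Rank tonight's restaurant suggestions for this user.",
)
# The old approach trained a model on engineered features; here the prompt
# itself carries the context, and any capable LLM does the rest.
```

No feature store, no training pipeline: swap the context and the same prompt template personalizes for a different user.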

As an aside, all this is happening while the traditional data-driven “Your Data. Your AI” narratives that powered the rise of cloud are collapsing. The first-party data that would power the promised adaptive commerce experiences won’t live up to expectations (cf. Mode’s Benn Stancil). Nike made itself “data driven” above all else and its stock dove into the ground. [Taste rules the world!]

And this reckoning has been coming for a while:

No matter how good the tools are that clean and analyze it, how skilled the engineers are who are working on it, or how mature the culture is around it, [first party data] never burn as hot or as bright [as the data of companies like Google, Amazon, or Netflix, whose datasets are inherently rich and high-leverage].

wrote Mode’s Benn Stancil now two years ago!

“In-context learning” means the personalization task can instead be outsourced to a capable LLM, the right context, and a well-chosen prompt.

This means that consumers have a path to “AI with all your data” on interfaces that they don’t own.

The consumer case is more challenging than the enterprise one: the enterprise owns all its data and can demand that any AI application that unites and activates it runs on its own servers, e.g., via Bring Your Own Cloud deployments.

The consumer – who may not wish to bother with locally hosting such services – will likely rely on a third party to manage auth, data syncing, and AI on her behalf.

This architectural pattern – where users bring their own context and AI to web applications – demands new metaphors. It’s like having a personal agent (Call my agent!) that mediates your digital interactions.

Or, more precisely, it's an inversion of the traditional cookie model: instead of identifying you to servers, it authenticates servers to your AI agent.

AI cookie

Let’s say an AI cookie is an identifier, dropped in the browser or in the app, that enables permissioned access to a user’s AI agent and all the context the user has permissioned.

Cookies were originally used to identify clients to servers, enabling user data tracking in a first-party CDP or a third-party DMP or DCR (all of which hold incomplete or incorrect data on users). They’re better used to identify servers to user-owned personal AI agents that have all the context.
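A rough sketch of that inversion, with entirely hypothetical class and method names: a traditional cookie tells the server who the client is, while an AI cookie tells the user’s agent which server is asking, so the agent can check its grant before answering.

```python
# Hypothetical sketch of the AI-cookie inversion.
# A traditional cookie identifies the client to the server; here the server
# identifies itself to the user's agent, which enforces purpose-limited grants.

class PersonalAgent:
    def __init__(self, context: dict, grants: dict):
        self.context = context   # the user's full, user-owned context
        self.grants = grants     # server_id -> set of permitted purposes

    def answer(self, server_id: str, purpose: str, question: str) -> str:
        """Respond only if this server holds a grant for this purpose."""
        if purpose not in self.grants.get(server_id, set()):
            return "denied"
        # In practice an LLM would synthesize an answer from the context;
        # here we just surface the relevant field.
        return self.context.get(question, "unknown")

agent = PersonalAgent(
    context={"cuisine_preference": "vegetarian"},
    grants={"restaurant-app.example": {"recommendations"}},
)
print(agent.answer("restaurant-app.example", "recommendations", "cuisine_preference"))
# prints "vegetarian"
print(agent.answer("ad-network.example", "profiling", "cuisine_preference"))
# prints "denied" — no grant, no answer
```

The app with a valid grant gets an answer; the tracker without one gets nothing, which is the incentive shift described below.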

While personal AI agents introduce candidate points of failure in a network, they’re poised to introduce new privacy dynamics that could resolve challenges of the early internet.

For one, the golden rule of cookies was never honored. It seems entirely plausible that the same-origin policy incentivized more unconsented tracking than it stopped, because it never addressed the underlying incentives, which strictly reward relevance over privacy.

We’re confident AI cookies will increase privacy by directly addressing the incentives problem: when users can bring their own data and AI to applications, other avenues of data collection pale in comparison to the user’s own personal AI. Data gravity is real.

Why use an expensive DMP or DCR when you can just ask the user’s AI agent?

It’s true this new AI cookie “YMMVC” pattern prompts new questions about the nature of applications

what is an app beyond a view and controller of AI-mediated context?

though we remain confident that moving to this new paradigm will only increase consumer and business surplus.

Consumer apps with all the context

We’re excited to see the tide begin to move in this direction. Two weeks ago, Anthropic shipped its Model Context Protocol to enable, among other things, applications to sample LLM completions over private context.

Anthropic’s Model Context Protocol shows how this might work for developers, starting with developer-owned resources. But consumer applications present unique challenges. While enterprise MCP servers operate within a single organization, consumer context requires more sophisticated governance: purpose-limited, permissioned access to LLM sampling (as, e.g., privacy law requires) – all with streamlined UX. Crosshatch provides this complete infrastructure, from data integration to privacy controls, making AI cookies practically possible.
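“Sampling” here means the application asks the user’s side for a model completion rather than receiving the raw context itself. A rough, hypothetical sketch of purpose-limited sampling (this is not the actual MCP API; the function and parameter names are invented to show the shape of the guarantee):

```python
# Hypothetical sketch of purpose-limited LLM sampling over private context.
# Not the real MCP API: the point is that the application receives only a
# completion, never the underlying private context.

def sample_completion(prompt: str, private_context: str, purpose: str,
                      allowed_purposes: set) -> str:
    """Return a model completion grounded in private context, or refuse."""
    if purpose not in allowed_purposes:
        raise PermissionError(f"purpose {purpose!r} not permitted")
    # Stand-in for a real model call; the calling app never sees
    # private_context, only the completion derived from it.
    return f"completion informed by {len(private_context)} chars of context"

result = sample_completion(
    prompt="Suggest a dinner spot.",
    private_context="vegetarian; lives in Austin; loves Thai food",
    purpose="recommendations",
    allowed_purposes={"recommendations"},
)
```

A request outside the granted purposes fails outright, which is the “governance” half; the streamlined-UX half is everything wrapped around obtaining those grants.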

Developers have been thrilled with MCP and how it’s enabled easy access to outside context.

Today, when you look at your phone, you see apps that know only fragments of who you are. Tomorrow, with user-controlled AI and complete context, every interface becomes uniquely yours.

The enterprise already has this future. It's time consumers did too.

If you’d like to see how you can use Crosshatch’s “AI cookie” for adaptive interfaces or proactive AI – reach out!
