
Bill Gurley and the Sirens of Nirvana

20 years ago Bill Gurley found the keys to capitalism: the Context Moat. For the age of AI, it's a local maximum.

8/31/2024

On June 10, 2003, with the economy “anxious and waiting” and looking for sustainable competitive advantages, Bill Gurley found the Perfect Business Model.

What if a company could generate a higher level of satisfaction with each incremental usage? What if a customer were more endeared to a vendor with each and every engagement? What if a company were always more likely to grab a customer’s marginal consumption as the value continued to increase with each incremental purchase?

This may be the nirvana of capitalism – increased marginal customer utility. Imagine the customer finding more value with each incremental use.

To keep customers engaged, there’s no need to directly impose switching costs on consumers, “which can be seen as ‘trapping’ or ‘tricking’ the customer.” Increased marginal utility preempts the need!

Instead, the customer who abandons increasing marginal customer utility would experience "switching loss."

It was at this moment that Bill Gurley invented the context moat.

This became the perfect business model used by today’s biggest consumer businesses:

  1. Collect usage data
  2. Transform into better experiences
  3. Increase switching costs

The trouble is, as AI is making increasingly clear, the context moat is a local maximum.

While AI capabilities are expected to keep increasing, even today’s distilled models are incredibly capable. In the context of personalization, the power of personal AI is bounded from above by available context. And first-party context, it turns out, just isn’t that much.

To be fair, personalization has been a fragmented exercise, requiring fragmented:

  • Personnel: PMs, data scientists, engineers, designers
  • Systems: warehouses, pipelines, model frameworks, activation mechanisms
  • Data: first party or third party from DMPs

But AI can vertically integrate all of this: AI unlocks simple personalization programs composed of just context and a prompt, executed at marginal costs headed toward zero.
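A “context plus a prompt” program really can be this small. The sketch below is purely illustrative — the function name, the context fields, and the task string are all assumptions, not any real product’s API — but the resulting prompt could be handed to any general-purpose chat-completion endpoint with no feature pipeline or trained model in between.

```python
# A minimal sketch of personalization as "context + prompt".
# All names and data shapes here are illustrative assumptions.

def build_personalization_prompt(context: dict, task: str) -> str:
    """Serialize user context into a single prompt for a general-purpose LLM."""
    context_lines = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "You are a personalization assistant.\n"
        f"Known user context:\n{context_lines}\n\n"
        f"Task: {task}"
    )

user_context = {
    "recent_purchases": ["trail running shoes", "electrolyte mix"],
    "location": "Boulder, CO",
}
prompt = build_personalization_prompt(
    user_context, "Recommend one product this user is likely to want next."
)
# `prompt` is now ready to send to any chat-completion endpoint;
# the "model" is just a generalist LLM reading plain-text context.
print(prompt)
```

The point of the sketch is the shape, not the specifics: the whole personalization stack collapses into serializing context and asking.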

So while accumulating incremental usage data to grow an app’s marginal utility may have worked in the age of fragmented machine learning, it breaks in the age of AI.

It’s simply not ambitious enough.

Language models are serving “machine god from sand” but machine learning with first party data is giving “this one time at band camp.”

This blog explains how we got here, and what experimentation outside of this local maximum could unveil.

Context moat, not data moat

Traditionally we think of data moats.

The story of defensibility from data started in the 2010s as commercial machine learning systems began to rise. To build these systems, you’d need lots of data. Consumer startups found defensibility in Gurley-styled consumer data assets used to create personalization models that made switching expensive for users.

But, paradoxically, the rise of machine learning systems atop these data moats is leading to their own demise. a16z’s Martin Casado explained in 2019:

Data has long been lauded as a competitive moat for companies, and that narrative’s been further hyped with the recent wave of AI startups.

But for enterprise startups — which is where we focus — we now wonder if there’s practical evidence of data network effects at all. Moreover, we suspect that even the more straightforward data scale effect has limited value as a defensive strategy for many companies.

Martin’s observation for enterprise appears prescient for the age of large language models. LLM makers have assembled all the internet’s data and trained generalized models on top.

UPenn’s Ethan Mollick observed a few months ago that specialist models – trained atop first party private data – quickly lose out to rising generalists.

Remember BloombergGPT, which was a specially trained finance LLM, drawing on all of Bloomberg's data? … GPT-4 beat it on almost all finance tasks.

Your special proprietary data may be less useful than you think in the world of LLMs.

Consumer companies, of course, trained their own models to predict favorites, what items customers might like to try next, or what sort of coupon will keep them coming back.

But with generalist language models able to activate any context, the case for training proprietary models on first-party data as a differentiated enterprise asset sours. Open-source “GPT-4 level models” like Llama 3.1 distribute under a permissive

non-exclusive, worldwide, non-transferable and royalty-free limited license

raising the bar for justifying internally trained models.

The data moat is dead.

It’s the context moat that holds on and keeps switching costs high.

It’s pretty weird to think about companies competing to own a slice of our context. But these context moats have real stakes. To understand why, we need to look at the economics of context moats, particularly in the age of AI.

Economics of context

The "everything is an ad-network" world runs on two things: attention and context.

And the business of personalization is a simpler form of the business of advertising (see Google economist Hal Varian’s intro). With rising customer acquisition costs (CAC) and privacy technologies, it has increasingly attractive properties.

Personalization is about matching people who want to buy things to things you sell.

The business value of personalization grows with customer data and attention. This is distinct from advertising, whose context is fragmented across corporate lines.

For instance, a Shopify store advertising on Meta doesn’t know the users who see Meta ads, and Meta doesn’t readily observe the conversions that happen on Shopify as a result of its ads. In the personalization case, there’s no fragmentation as all of this context is first party.

This is particularly important because different types of context have different value. For businesses that sell things, context relevant to purchasing decisions reigns. A static snapshot of a user doesn’t cut it – you need a constant stream of data relevant to the personalization exercise at hand.

For instance, while Facebook has a lot of user data, for advertising it’s rather dependent on the conversions it (used to) observe on advertiser sites. Eric Seufert explains:

Facebook’s first-party data — its knowledge of the ‘cat videos’ and ‘vacation pictures’ with which users engage — is actually not very helpful for use in targeting ads. Rather, what Facebook uses to great effect is its knowledge of which products users buy on advertisers’ properties: armed with that information, Facebook can create behavioral product purchase and engagement histories of its users that very effectively direct its ads targeting. What’s missing in the above assessment is that advertisers willingly and enthusiastically give that data to Facebook.

It’s important this data is specific and recent:

Recency plays an incredibly important role in the signal parsed from the data identified above. Not only must an advertiser know that a user has historically engaged and monetized with products following an ad click, but they must know that a user has done so recently: the older the data is, the less helpful it is in targeting ads to that user. Facebook’s ad targeting models require a constant supply of data — the off-property events stream — in order to retain efficacy.
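Seufert’s point about recency can be made concrete with a toy model. This is an illustrative sketch, not Facebook’s actual targeting math: assume each conversion event’s signal decays exponentially with age, with an assumed seven-day half-life.

```python
# Toy illustration of recency weighting (assumed, not any platform's real model):
# a conversion event's signal halves every HALF_LIFE_DAYS, so stale data fades.
HALF_LIFE_DAYS = 7.0

def recency_weight(age_days: float) -> float:
    """Exponential decay: an event loses half its weight every HALF_LIFE_DAYS."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

# A fresh purchase dominates; a month-old one barely registers.
for name, age in [("bought shoes", 0.0), ("bought shoes", 30.0)]:
    print(f"{name} ({age:>4.0f} days old): weight {recency_weight(age):.3f}")
```

Under these assumed parameters a day-zero event carries full weight while a 30-day-old one carries about 5% — which is why targeting models need a constant stream of fresh off-property events to retain efficacy.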

This is why we’re seeing a great rise in retail media networks: everyone who has conversion data

  • Lowes
  • Chase
  • Walmart
  • Uber
  • Instacart
  • Doordash
  • CVS

has launched one, and investors are loving it. Selling attention by activating context is a great business, it turns out. [It could also be enshittifying everything.]

A fundamental problem is that context is fragmented and unevenly distributed. In some ways, this is good – we don’t want everyone to have all our context. In real life, we share it selectively.

On the other hand, this fragmentation leads to suboptimal experiences and services for businesses and consumers alike. Meta needs conversion data but doesn’t have it (and instead cuts deals to get it). Walmart, which wishes to anticipate grocery needs, could use cross-app itemized receipt data but only has its own. Spotify, which wishes to provide more relevant user experiences, could use context from Apple Music, location, or browser history, but only has listening history. In response, companies stand up clean rooms to share consumer data, but these are available only to the largest players and trade noisy “privacy preserving” data for political and compliance cover.

This cross-app interoperability could deliver surplus for consumers and businesses, but today we’re all restricted to the context any app has. This is the local maximum of Gurley’s capitalist nirvana. Surely we can do better.

Sirens of Nirvana

20 years ago Bill Gurley summoned the Sirens of Capitalist Nirvana: the ultimate business model, where marginal switching loss increases with marginal usage.

Each incremental usage created a new data point that cumulatively made consumer switching wholly inconvenient. Even if a competitor had something better, the prospective loss of convenience appeared too great for users to contemplate alternative action.

And now, with language models able to operate on practically any length of context for marginal costs approaching zero, the game of accumulating consumer context to induce switching loss is laid totally bare.

Before language models, this conceit was hidden beneath layers of infrastructure and headcount – transforming Big Data into Machine Learning Models was only available to the technical, creative and bold. The complexity of activating context created another moat – only the largest and most sophisticated could do it.

But now, language models are available everywhere, for everyone – serverlessly, in your own GPUs, open source, probably soon in the browser and OS – and for any skill level: all you need is a well-formed prompt.

The technical veil that kept this capitalist nirvana from looking like what it is – an exercise in competitive data tracking – is gone. Language models democratize the activation of consumer context: it’s just a prompt away, at less cost than purpose-built systems.

The personalization we saw in the movies anticipated our needs. It could use context about us to surprise, delight and save us time and money.

Consumer enterprises prefer the context moat. It’s been the basis of their defensibility for the past two decades.

But clinging to it now will fail relative to the growing gravity of the data consumers can bring with them. Consumers have more data about themselves than any business does. Consumers are the sun.

What users gain – internet applications that specialize themselves for each of us – is normatively, and likely economically, superior to what any business loses by no longer getting to claim it has more context than a competitor.

At the end of the day, the user (and likely the business) pays the cost of clinging to Gurley’s capitalist nirvana. And language models lay this truth bare. There’s no reason for it any longer.

We’re entering a future where consumers can bring their context with them. With Crosshatch, consumers can allow apps to interpret their context on any surface, in ways that benefit both business and consumer. Competition proceeds on the provision of genuine value, not on who knows a slice of you best.

This mutual gain is critical, as it enables personalization that’s free for users, a founding design principle of the internet.

One failure mode for Crosshatch is that businesses simply reject this: why mess with a good thing? Well, the current paradigm isn’t great for people. And normatively, governments around the world are requiring that businesses allow consumers to access their data.

We believe context interoperability is actually good for everyone. Our intuition follows Adam Smith:

Key to promoting efficiency in markets is transparency and access. Adam Smith in the Wealth of Nations wrote about this over 200 years ago. He contended that if the price of information is lowered or you make information free, in promoting such transparency, the economy benefits. Similarly, if access to the market is free, everybody gets to compete.

So remarked Gary Gensler in 2013. People and the economy benefit when information flows freely, up to the frontier of user choice.

At Crosshatch, we’re building the infrastructure to make this future possible. Businesses can invite users to link their context in a tap, making their context available for activation for mutual benefit and in a way users control.

The context moat is coming to an end.

To begin building in this new future of personalization – turning on hyper-personalized experiences in a tap, with no learning curve or cold start – create a developer account.

See what Crosshatch can do for your business.
