
Personal AI is an identity

Browsers and operating systems use identity to make data available across devices. The case for identity to make personal AI available across applications.

6/8/2024

In March we asked, “What is Your AI?”

It’s our belief that Personal AI is anything and everything. Personal AI is anywhere we are. It's a composition of attention, tools, and context.

Most candidate Personal AIs do not have the context. As Notion’s Ivan Zhao put it last week:

The storage/data/context layer remains fragmented. Compute without context can’t do much.

Suppose, to the contrary, that the context layer were not fragmented. But Personal AI is everything and everywhere.

So is our (private?) context now everywhere?

Reductio ad absurdum! Surely not!

The context over which personal AI might operate is often private. Equipping every would-be Personal AI with all our context would fly in the face of the data minimization principles adopted by global privacy law. It’d also be inefficient and expensive, with everyone building a personal AI duplicating pipelines to end up with the same context.

What’s increasingly clear is that legally, individuals have a right to this fragmented context. So the data to power a personal AI should at least be with us.

But how do we get it everywhere? To every Personal AI? In a way we control?

In this post we study the paths by which Personal AI might rise, the tradeoffs each path makes, and why we believe Personal AI is an identity.

Compute with some context

Compute without context can’t do much, so we study only the paths that have context.

The only existing technologies with rich context are:

  • The browser
  • Desktop OS
  • Mobile OS

The browser can see a history of everything you’ve browsed.

The desktop operating system can see all compute processes and screenshot everything on the screen.

Neither can see what you do offline. They probably also can’t access data from mobile apps. Nor can they see the context kept private by businesses, like inventories or profiles of other platform users. Most offline services do keep digital backends, accessible via email or data request.

Mobile operating systems like iOS and Android appear different, in part because of how they mediate installations and monetization via their App Stores. They see OS-level interactions but do not record interactions that take place within an app. For instance, the iPhone does not collect:

  • Ubers we’ve booked
  • DoorDash orders
  • Amazon purchases

In Apple’s and Google’s terms of service we found no language explicitly blocking this activity. That said, given the consent orientation of corporate policy and global privacy law, it seems Apple and Google would need consent from both users and developers to do this. On last week’s All-In Podcast, Chamath agreed Apple would need to change its SDK terms of service to enable this sort of data usage. While many are excited about mobile OSes enabling a personal AI, it may turn out they have access to less context than most imagine.

If you look closely at all three of these technologies, you’ll notice that all of them encourage users to log in:

  • MacOS and iOS with iCloud
  • Windows with Microsoft
  • Chrome with Google
  • Safari (indirectly with iCloud)
  • Arc Browser

These login services enable a more seamless browsing experience by making context from one device available on another, such as:

  • Your iMac’s context available on your iPad
  • Your Surface’s context available on your PC
  • Your mobile Arc context available on Arc for Windows

So while it’s true there are only three technologies today that observe massive user context flows, all three are positioned downstream from identity.

And increasingly, so is the web.

Data is wherever I am

A lot of people seem to believe that data on iPhone doesn’t leave your phone.

This is not true. It’s also not convenient.

Most iPhone users keep iCloud accounts. iCloud syncs data from native apps like Photos, Contacts, and Maps so it’s available across devices.

Here’s Apple’s Nick Gillett explaining at WWDC19:

A belief that I have is that all of my data should be available to me on whatever device I have wherever I am in the world. …

And the data that we create on all of these devices is naturally trapped.

There’s no easy way to move it from one device to another without some kind of user interaction.

Now to solve this, we typically want to turn to cloud storage because it offers us the promise of moving data that’s on one device seamlessly and transparently to all the other devices we own.

If Apple is storing our encrypted data on its cloud services, you might start to wonder who holds the encryption keys. It turns out it depends. For most users, on Standard Data Protection, the iPhone holds the encryption keys for iCloud data from apps like Journal, Maps, and Safari, while Apple holds the keys for data like Photos, Notes, and Contacts.

Be careful with those selfies!!

AI even where I’m not

We use iCloud so our data is wherever we are.

But what about AI?

It appears that AI model size is bifurcating. Small, performant models are beginning to appear natively in the browser, like Gemini Nano in Chrome. On the other hand, large models like Llama 3 70B can run locally on Apple Silicon Macs but need more RAM than the iPhone 15 currently has. Even if the phone could run them, doing so would take a significant toll on battery life, as Dylan Patel of SemiAnalysis has reported.

And if you want to use proprietary models like Claude Opus or GPT-4, demanding local-only (or onchain!) inference could leave you out of luck.

Apple is expected to announce OpenAI integration for the iPhone this week, with inference available via Secure Enclaves.

While data is canonically encrypted at rest and in transit, it is typically decrypted while in use. That would challenge Apple’s privacy posture, which keeps data encrypted in all but limited circumstances.

Secure Enclaves solve this by keeping data encrypted even while it’s in use. This could let Apple extend iPhone services to cloud inference that isn’t available locally, running on data that even Apple doesn’t have the keys to decrypt.

With Secure Enclaves, iPhone users get inference over data they control even where they’re not.
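To make the pattern concrete, here’s a minimal sketch (in TypeScript) of confidential inference as described above: the device first checks an attestation that the enclave is running the code it claims to run, then encrypts its data to a key only that enclave holds, so the cloud operator only ever handles ciphertext. The function names, attestation shape, and flow are illustrative assumptions, not Apple’s implementation.

```typescript
// Hypothetical sketch of a confidential-inference handshake. Names are illustrative.
import { createPublicKey, publicEncrypt, randomBytes, createCipheriv, constants } from "node:crypto";

interface Attestation {
  enclavePublicKeyPem: string; // public key bound to the enclave's measured code
  measurement: string;         // hash of the code running inside the enclave
  signature: string;           // vendor signature over the measurement and key
}

// 1. The device checks that the enclave runs audited code before trusting it.
//    Real systems verify `signature` against the hardware vendor's root of trust.
function verifyAttestation(att: Attestation, expectedMeasurement: string): boolean {
  return att.measurement === expectedMeasurement;
}

// 2. Data is encrypted to the enclave's key, so only code inside the enclave
//    can ever see the plaintext, not the cloud operator.
function encryptForEnclave(att: Attestation, plaintext: Buffer) {
  const contentKey = randomBytes(32);
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", contentKey, iv);
  const body = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const wrappedKey = publicEncrypt(
    {
      key: createPublicKey(att.enclavePublicKeyPem),
      oaepHash: "sha256",
      padding: constants.RSA_PKCS1_OAEP_PADDING,
    },
    contentKey
  );
  return { body, iv, tag: cipher.getAuthTag(), wrappedKey };
}
```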

Personal AI is an identity

All the candidate services that might wield context for a personal AI encourage users to log in with a user-governed identity.

iCloud. Google. Microsoft. Arc.

These identities, and the permissions associated with them, are backed by encryption keys: some user-governed, some managed, the choice depending on ergonomics and user security preferences. These identities accumulate context like browsing history, photos, and health information.

But all of these existing services have constrained interfaces. They don’t interoperate. Apple is explicitly against cross-context interoperation. Google is constraining it. Folks are unsure of Microsoft.  

None have the full context.

Apple lets users share suggested memories, including photos and places, with journaling apps. Apps can’t see all connected data, only the suggestions users agree to share.

On the other hand, per Notion’s Ivan Zhao, applications from travel to dining to grocery to social are shipping with fragmented context. AI is capable of transforming context into great new applications, but applications can only apply context they have.

This is why headless personalization, an identity layer with data, AI, and user governance built in, one that could dip into our data wells, reuse them, and create magical personalized experiences in different contexts, is so interesting to us.

If there were an effortless way to assign our fragmented data to an identity with AI resources we govern, and to avail those resources to third parties to interpret in new applications, it could unlock experiences that today are constrained by operating systems and browsers.
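To sketch what headless personalization could look like in practice, here is one hypothetical shape of the interface: an application presents a user-granted, scoped, time-boxed grant plus a task, and receives an answer rather than raw context. The endpoint, scopes, and types below are illustrative assumptions, not an existing Crosshatch or platform API.

```typescript
// Hypothetical interface for a user-governed identity layer. All names are illustrative.
interface PersonalizationGrant {
  identity: string;   // the user-governed identity the context belongs to
  scopes: string[];   // e.g. ["dining.history", "travel.upcoming"]
  expiresAt: string;  // grants are time-boxed and revocable by the user
}

interface PersonalizationRequest {
  grant: PersonalizationGrant;
  task: string;       // what the application wants, in plain text
  model: "gpt-4" | "claude-sonnet" | "gemini-nano";
}

// The application never receives raw context. It sends a task; inference runs
// next to the user's data, and only the task's answer comes back.
async function personalize(req: PersonalizationRequest): Promise<string> {
  const res = await fetch("https://identity.example/v1/personalize", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`personalization failed: ${res.status}`);
  const { completion } = await res.json();
  return completion;
}

// Example: a dining app asks for a recommendation without ever seeing order history.
// personalize({
//   grant: { identity: "user-123", scopes: ["dining.history"], expiresAt: "2024-07-01T00:00:00Z" },
//   task: "Suggest a restaurant this user would love for Friday dinner.",
//   model: "claude-sonnet",
// });
```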

This architecture could begin to address the concerns Bill Gates raised last November:

You’ll want to be able to decide what information the agent has access to, so you’re confident that your data is shared with only people and companies you choose.

Putting AI resources next to where user data live, within a security boundary users govern, gives users a way to endow those AI resources with access to accumulated, permissioned data in a way they effortlessly control. This pattern also respects the minimization principle: the only data shared by the AI resource, whether it’s GPT-4 or Claude Sonnet or Gemini Nano, is what the application needs to operate.
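A minimal sketch of that minimization step, with made-up field names: of everything an identity has accumulated, only the slice matching the granted scopes ever reaches the model, whichever model that is.

```typescript
// Illustrative only: project an identity's accumulated context down to the granted scopes.
type ContextStore = Record<string, unknown>; // e.g. { "dining.history": [...], "photos.count": 18234 }

function minimize(store: ContextStore, grantedScopes: string[]): ContextStore {
  return Object.fromEntries(
    Object.entries(store).filter(([key]) =>
      grantedScopes.some((scope) => key === scope || key.startsWith(`${scope}.`))
    )
  );
}

// Whether the model is GPT-4, Claude Sonnet, or Gemini Nano, it only ever sees
// the minimized slice, never the full store.
const store: ContextStore = {
  "dining.history": ["Sushi Nakazawa", "Via Carota"],
  "health.steps": 9421,
  "photos.count": 18234,
};
console.log(minimize(store, ["dining"])); // -> { "dining.history": [...] }
```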

As Apple showed with iCloud and its forthcoming AI services in Secure Enclaves, it’s not about data or AI staying local, per se. It’s about encryption, user agency, and sometimes trust. Many users trust Apple and elect Standard Data Protection over its Advanced form. AI served on the iPhone is cloud inference served to my identity, with encryption keys I control.

And while previous phases of computing were fragmented across different stacks, runtimes, and frameworks, AI appears simple. Text is the universal interface. It previously made sense to grant third parties Data Controller rights so they could perform their own flavor of computing on our data; with generalized models rising in capability, speed, and cost efficiency, there’s less need to hand over those rights when users could govern the models themselves. This becomes particularly apparent as users accumulate a new Data Gravity and physics moves compute to where the most user data live.

Personal AI is not a tab in your browser. Or just your browser. Or an app on your phone.

It’s everything.

To be everything everywhere all at once, Personal AI with all our context needs to be something you govern.

Personal AI is an identity.

See what Crosshatch can do for your business.
