Just over two weeks ago, OpenAI launched Operator, an agent that can use its own browser to perform tasks for you.

You can ask it to:
- Find you cheap eggs
- Buy you an ice cream scoop
- Or book you a trip
OpenAI’s agent will attempt to execute the task in the browser for you end-to-end, searching and clicking just like you would.
Operator and browser agents like it are poised to allow us to stop using internet applications altogether, and just let AI do things on our behalf.
We’re incredibly excited about this future, but there seem to be (at least!) two open questions on the path there:
- Will sites allow agents open access?
- Who is responsible for agent actions?
The answers to these questions have implications for how agents enable personalized services that save consumers time and money. Do agents orchestrate applications, or do users bring their agent to apps?
Can agents access app inventories?
Agent capability is downstream of access to context and tools.
Operator enables access to tools by making the browser its tool – it can execute anything in a browser that you could.
Apps and sites like Uber, Zillow, Airbnb, and Walmart work hard to assemble a trusted supply of cars, homes, and staples that they make available to customers. They’re opinionated about customer experience and want to own it end to end.
For this reason, it appears, we’re already seeing sites like Reddit and YouTube block Operator. As we wrote last week, referencing Benedict Evans:
“No one in tech wants to be someone else’s dumb API call.”
Last year, Airbnb was hiring for anti-bot engineering.
These market dynamics are still forming, but early responses appear to be aligned with Evans: apps that own valuable inventory want to be the principal toll road to it.
Who is the DRI for AI agents?
When AI agents take action on our behalf, who is the directly responsible individual (DRI) when they mess up? When there is money involved, will the agent cover the cost of a mistake?
For instance, when you order from DoorDash, DoorDash is acting as your agent to orchestrate the sourcing and delivery of your food. If your order isn’t right, DoorDash covers it, often offering a re-delivery or a refund.
When the Washington Post tried Operator, it bought eggs on behalf of the columnist, even though the columnist hadn’t specifically asked it to.
Sam Lessin, co-founder of Fin (a human/AI assistant company founded in 2017), shared three lessons from building it:
“When you get between a customer and an open-ended set of services with real world consequences that you don’t control, you are assuming a massive amount of liability for outcomes you can’t guarantee.”
- “The 1% Blame Game – The Assistant Front End Will Get Blamed for Any Downstream Service Mistake.”
- “Depth of Error Impact in the Real World – The surface area of cost and consequence is dramatically bigger than it is when your DoorDash food is cold. The problem is a financial one. Every time you take on a high-ticket item you are basically taking on the liability of a big refund or huge fight if you don’t deliver … and if you are just getting paid as an ‘assistant’ vs. something like a 10% or 20% commission on everything, you just don’t have the cash to give reasonable compensation to your customers and keep your model working.”
- “‘I could have done it better / faster myself’ – Anytime a service isn’t perfect, the customer / client thinks ‘I could have done it better or faster myself’ … even when that is 100% untrue. But again, when someone gets in the middle of that transaction, in most cases the liability assumption, even when things don’t go terribly wrong, creates constant stress.”
The DRI for an AI agent is the agent itself. The agent is more valuable the more it does for you, but that adds to the surface area of what it can mess up. And when it messes up, people will expect the agent to make it right. But as Sam notes, the business model to make democratized assistants work is difficult to find.
Buying services from multiple vendors with unclear responsibility boundaries often leads to poor outcomes for customers: things get dropped and it’s not immediately clear who is ultimately responsible for any service level. [You can see this in challenges presented by BYOC deployments, for instance.]
Make agents responsible again
Today’s independent AI agents own no inventory themselves – the content, services, and goods they aggregate are not theirs. The services they source from could readily block or hinder their activity.
And they’re not DRIs for outcomes. If they make a mistake, will they compensate you for your lost time or money?
In the end, an app is a wrapper for:
- Its own supply of content, goods, or services
- Direct responsibility for the provision of those goods
AI agents have neither of these traits.
This is why, if you want AI to help you save time or money, you need to go to services that both:
- Have the goods you want to consume
- Are responsible for the effective delivery of those goods
Apps do this, but today’s independent AI agents do not.
Motivation for personal AI as an identity
Many have imagined a personal AI as an agent that goes out and does things on your behalf. This agent knows you well and uses that knowledge to help you save time and money.
But both of these dynamics challenge the path of the personal agent:
- Sites already blocking independent AI agents from taking automated action
- The lack of a directly responsible party for delegated action
Instead, if you want personalized services that save you time and money, you need to bring your AI agent to these services. Doing so lets you access their private supply of goods and services under a service agreement where someone is actually responsible for delivery. This enables the personalized experience an independent AI agent could otherwise provide, but within the boundaries and guarantees of an application.
Without access to goods or capacity to actually guarantee their delivery, what really is an AI agent?
[An AI agent is actually a user agent: one that always acts in the user’s interest, sharing with applications only the information reasonably necessary to fulfill user requests.]