BLOG

YMMVC

Extending the Model-View-Controller architectural pattern with AI-Bindings to make generative UI react to You.

9/10/2024

The web started out still, but then it started to react.

This post traces the history of reactivity on the web and shows how a reframing of the classical model-view-controller (MVC) user-interface architecture pattern could unlock ultimate reactivity.

Still life

This is the world’s first website. It was published by Tim Berners-Lee on August 6, 1991.

It’s a text document written in Hypertext Markup Language: a page in a file system that browsers can reach at the host server’s IP address.

The page is static and read-only: a reflection of how the authors saw the world.

Hackernoon: What is HTTP?

It’s accessible via Hypertext Transfer Protocol: you request the page and you get it back. It doesn’t change based on your inputs, here or anywhere.

Change with the times

In 1995, Fred Dufresne of Boston TV station WCVB created and patented server-side scripting so that web pages could be templated and dynamically constructed.

Figure 14 from US Patent US5835712A filed in 1996.

This was important for news sites like WCVB whose news articles shared structure but had dynamic content.

WCVB from 1998 via Wayback Machine.

As the world changed, WCVB wrote new stories.

Scripting allowed it to update its content independently from its web page structure. Whenever a user visited the site, HTML was rendered server-side and sent to the client.

At this point in time, the web’s view of the world came from outside the web.

“Sox CEO Pitches Ideas for New Ballpark Funding”

WCVB wrote about events happening in the real world and represented the news on the web.

This pattern enabled early reactivity, but it had two problems. First, whenever the world changed, the page already loaded in the browser wouldn’t update. If you wanted fresh news, you’d have to refresh. The browser would then

  1. go to the server
  2. fetch data from the database
  3. turn it into HTML
  4. send it back to the browser
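
A minimal sketch of that loop in modern TypeScript, assuming Node’s built-in http module; getLatestStories is a hypothetical stand-in for the database fetch:

```ts
// Sketch of '90s-style server-side rendering. `getLatestStories` is a
// hypothetical stand-in for a database query.
import { createServer } from "node:http";

interface Story { headline: string; body: string; }

async function getLatestStories(): Promise<Story[]> {
  return [{ headline: "Sox CEO Pitches Ideas for New Ballpark Funding", body: "…" }];
}

createServer(async (_req, res) => {            // 1. the browser goes to the server
  const stories = await getLatestStories();    // 2. fetch data from the database
  const html = `<html><body>${stories          // 3. turn it into HTML
    .map((s) => `<article><h1>${s.headline}</h1><p>${s.body}</p></article>`)
    .join("")}</body></html>`;
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(html);                               // 4. send it back to the browser
}).listen(8080);
```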

Ideally, web applications would update automatically client-side – in your browser – whenever the data state changed.

Second, while news applications reacted to events happening outside the web, soon we’d want applications that could react to state changes from inside the web.

Change inside out

In 1995, eBay launched, allowing users to buy and sell items in a marketplace. This introduced new applications of templating – for accounts, carts, and product pages.

At this moment the web began to create its own objects to represent: commerce. It went from observing general events sourced offline to cataloging objects born digitally.

It birthed new semantics: items for sale, pending, shipped, delivered.

This kicked off a flurry of new platforms enabling representation and mutation of the world and its digital representations – travel, social, real estate and e-commerce.

But all these platforms treated user state as known only to the application.

Change outside in

What if user state could be managed outside the application?

This started with persisting user state beyond the application session. In 1994, websites couldn’t remember users across sessions: each time a user returned, the application started from scratch, and state developed within a session was lost.

Cookies gave HTTP state across sessions: they let a site implicitly identify a returning user via a cookie identifier stored in the browser, so that application state could be maintained on the server. That is, the web server associated user state with a cookie identifier.

This helped web pages remember users across sessions without logging in.
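
A minimal sketch of the mechanism, assuming Node’s built-in http module; the in-memory Map stands in for real server-side session storage:

```ts
// Minimal sketch of cookie-based sessions. A real server would persist
// sessions; this in-memory Map is illustrative only.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

const sessions = new Map<string, { visits: number }>();

createServer((req, res) => {
  // Read the cookie identifier the browser sends back on each request.
  const match = /sid=([\w-]+)/.exec(req.headers.cookie ?? "");
  let sid = match?.[1];

  if (!sid || !sessions.has(sid)) {
    // First visit: mint an identifier and ask the browser to remember it.
    sid = randomUUID();
    sessions.set(sid, { visits: 0 });
    res.setHeader("Set-Cookie", `sid=${sid}; HttpOnly`);
  }

  const state = sessions.get(sid)!;
  state.visits += 1; // server-side state keyed to the cookie identifier
  res.end(`Welcome back: visit #${state.visits}`);
}).listen(8080);
```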

Third-party cookies enabled tracking user state across web applications, giving a more unified view of a user’s activities across many sites. This let sites represent changes from outside their walls (e.g., activity on other sites) in services like personalization and ads on their own.

This made the web convenient but also a little creepy. When talking to one person we don’t usually imagine that conversation is being shared with everyone else.

But the systems to map outside activities to services within apps turned out to be fragmented and complex. These systems have two parts: sourcing and activating data – both challenging.

Pathways like cookies for sourcing data outside in – now increasingly blocked – were often unreliable. Cookies expire and get cleared. IP addresses are shared by multiple users. Some applications used DMP data networks whose wonky backroom deals made reconciling identifiers across data partners equally error-prone.

Even the largest social networks rely on cross-app tracking: Meta supplemented its pixel with the Conversions API to track conversions on its advertisers’ sites. Many imagine Meta has incredible data – it does – but the data most valuable to its ads business comes from cross-application contexts. Meta doesn’t know if its ads convert unless advertisers’ applications tell it.

Activating outside data inside is also a challenge. Engineers combine these data flows in proprietary machine learning algorithms and systems that are expensive to build. Even if you have access to data representing change outside in, activating that data is a whole other question.

Time to change

To get to the final stages of reactivity, we return to the original user-interface design pattern: Model-View-Controller.

Wikipedia simplified diagram of MVC.

We follow Trygve Reenskaug’s original 1979 definition:

Models. Models represent knowledge. There should be a one-to-one correspondence between the model and its parts on the one hand, and the represented world as perceived by the owner of the model on the other hand.

Views. A view is a (visual) representation of its model. It would ordinarily highlight certain attributes of the model and suppress others. It is thus acting as a presentation filter.

Controller. A controller is the link between the user and the system. It provides means for user output by presenting the user with menus or other means of giving commands and data.
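
To fix notation for what follows, here is a minimal sketch of the classical pattern in TypeScript; the names and shapes are ours, not Reenskaug’s:

```ts
// Minimal classical MVC sketch; the names here are illustrative.

// Model: represents knowledge, and notifies observers when it changes.
class CounterModel {
  private listeners: Array<() => void> = [];
  private _count = 0;
  get count() { return this._count; }
  increment() { this._count += 1; this.listeners.forEach((l) => l()); }
  subscribe(listener: () => void) { this.listeners.push(listener); }
}

// View: a presentation filter over the model.
class CounterView {
  constructor(private model: CounterModel) {
    model.subscribe(() => this.render());
  }
  render() { console.log(`Count: ${this.model.count}`); }
}

// Controller: the link between the user and the system.
class CounterController {
  constructor(private model: CounterModel) {}
  onClick() { this.model.increment(); } // user command mutates the model
}

const model = new CounterModel();
new CounterView(model);
new CounterController(model).onClick(); // logs "Count: 1"
```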

Perhaps the only question, then, is: who is the owner of the Model?

Or rather, what is the model of? The application’s view of the user? Or the user’s view of the world? Is it the user or the app that provides the interface?

For the web to become fully reactive – to flux – we believe it’s both user and app.

Crosshatch-App Partner architecture pattern.

Here we introduce the YMMVC pattern, an extension of classical MVC that adds Your Model. In it, the app controls its own local model, but users also have a global model and private AI resources that they can permission to any app they use.

Today, as with WCVB’s original server-side rendering, if you want a view to update, you, the user, have to engage with the app to provide updates to its model. This costs time and labor, and in many cases you’ve already provided those updates to other apps’ models. Apps should react to You: this pattern fixes this by allowing Views to update based on all of Your Model’s context, not just your engagement with the application at hand.

Apps may expose their model for users to access and sync. Apps like Strava, Fitbit, Whoop, TikTok, and Google already do this. This means that Your Model is a meta-model over cooperative application Models.

Apps update their Views based on both local and global (Your) models, depending on whose data model is most relevant and accessible to the view at hand.

Model-view binding – making the data representation in the model legible to the View – is typically challenging, but AI can make the integration natural. AI can translate combined app and Your Model context into new views with only a prompt mapping context from other application models onto its own. AI is a powerful and inexpensive binding agent.
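
A minimal sketch of such a binding, using the Vercel AI SDK’s generateObject; the yourModelContext payload and the view schema are hypothetical stand-ins for whatever a permissioned Your Model would supply:

```ts
// Sketch of AI as a model-view binding agent, using the Vercel AI SDK.
// `yourModelContext` is a hypothetical payload from a user-permissioned Your Model.
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// The View's local schema: what this app knows how to render.
const HotelCardSchema = z.object({
  headline: z.string(),
  suggestedNightlyBudget: z.number(),
  amenitiesToHighlight: z.array(z.string()),
});

// Hypothetical context permissioned from other apps' models (Your Model).
const yourModelContext = {
  recentTrips: ["Lisbon (hostel)", "Kyoto (ryokan)"],
  fitness: { weeklyRunsKm: 30 },
};

// The binding: a prompt maps foreign model context onto this app's View schema.
const { object: card } = await generateObject({
  model: openai("gpt-4o"),
  schema: HotelCardSchema,
  prompt: `Given this user context: ${JSON.stringify(yourModelContext)},
fill in a hotel recommendation card for a weekend in Boston.`,
});

console.log(card); // a real app would render this into its hotel-card View
```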

Claude suggested we talk about privacy. We’ve discussed privacy at length elsewhere; we believe this pattern is both privacy-safe (it only uses context a user chooses to share) and consistent with how we already use user interfaces (I already tell TripAdvisor my travel dates and the price per night I’m looking for).

Do it like the 90s

[We borrow this header title and the following quote from this Facebook presentation in 2014.]

“We should do our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program and the process as trivial as possible” (Dijkstra, 1968)

Unlike traditional UI frameworks, we take a Model less as a candidate specification of a View and more as context for the construction of any View, as prescribed by an AI context-view binding.

AI as a binding agent is perhaps a maximalist response to Dijkstra’s wish.

In a traditional application, if you see a snapshot of state, it should be straightforward to understand how that state binds to the application’s view. But if you were to see a collection of context from state outside your application, you might not know how it binds to a View. AI – equipped with relevant context (and a general understanding of the world) – does.

In the ’90s we refreshed our browsers to get updated content. In 2024, applications infer from Your Model context to get updated Views.

Taking AI to be a binding agent, you can recast your UI as a referentially transparent function over arbitrary context, not just the context defined in your application. Instead of templating UI with variables defined by your application, AI as a binding agent expands the scope of your application to operate over any context a user will permission to it.
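
As a sketch of the shift in types (the names are ours, and the AI call reuses the SDK assumed above): a classical template is a function of variables the app defines, while an AI-bound view is a function of any permissioned context.

```ts
// Types sketching the shift; names are ours, for illustration only.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Classical templating: the View is a function of variables the app defines.
type AppModel = { username: string; cartSize: number };
const classicalHeader = (m: AppModel) => `Hi ${m.username}, cart: ${m.cartSize}`;
console.log(classicalHeader({ username: "Ada", cartSize: 2 }));

// AI binding: the View is a function of arbitrary permissioned context, with
// the model-view mapping supplied by a prompt rather than enumerated variables.
type YourModelContext = Record<string, unknown>;

async function aiBoundHeader(ctx: YourModelContext): Promise<string> {
  const { text } = await generateText({
    model: openai("gpt-4o"),
    prompt: `Greet the user in one short line that fits their recent activity.
Context: ${JSON.stringify(ctx)}`,
  });
  return text;
}

// The app no longer enumerates the variables; any permissioned context binds.
console.log(await aiBoundHeader({ recentRuns: 3, lastTrip: "Kyoto" }));
```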

Using Vercel AI, this is Crosshatch YMMVC in action.

We’re getting ready to publish a Crosshatch SDK for Vercel AI library to enable this pattern in a few lines of code. Sign up for access and we’ll keep you posted.
