The costs of the default
We're losing our shared experiences. It's now inefficient not to personalize.
A couple of weeks ago I wrote about the death of the default. The baseline experiences we share - in the digital sphere, but more broadly as a community and as a society - are eroding as systems become more adaptive. It’s cultural. Defaults aren’t just convenient design choices; they’re part of how we make all kinds of systems legible, contestable, and shared.
It’s happening because personalization isn’t expensive anymore.
It used to be. A static interface everyone had to use wasn’t just a choice, it was a necessary budget decision. Customization means more design work, more engineering, more testing. More cost. Our “default” was an economic constraint. But we dressed it up as a deliberate choice. One solution, simple enough for everyone.
Now our cost constraint is disappearing.
All the major cloud providers are investing heavily in hardware built specifically to make inference cheaper. In late January, Microsoft unveiled Maia 200 - a custom-built chip for AI inference that delivers 30% better performance per dollar than previous hardware. It’s getting cheaper to generate a personalized response, and that isn’t going to stop.
Once inference is cheap enough, serving everyone the same thing isn’t a neutral choice. It starts to become actively wasteful. Why ignore contextual intelligence you already have? It’s inefficient not to personalize.
That changes some basic assumptions about how we create experiences.
Settings pages - where you configure preferences and forget about them - stop making sense. The system can just infer what you need. The experience gets generated. And if every user is getting a different experience, how do we do quality assurance? It’s not a check of a single interface. It becomes something statistical. It isn’t “does this screen work?” but “do all these different outputs stay within acceptable bounds?”
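That statistical check can be sketched in code. This is a minimal, hypothetical illustration, not any real QA pipeline: the invariants (font size, contrast ratio, a required element) and the `generate_variant` stand-in for a personalization model are all assumptions for the sake of the example.

```python
import random

def within_bounds(ui):
    # Hypothetical invariants every generated variant must satisfy,
    # however it was personalized.
    return (
        8 <= ui["font_size"] <= 24               # text stays readable
        and ui["contrast_ratio"] >= 4.5          # WCAG AA-style contrast floor
        and "checkout_button" in ui["elements"]  # core action is present
    )

def generate_variant(rng):
    # Stand-in for a model generating one personalized interface.
    # A real system would produce these; here we just simulate variation.
    return {
        "font_size": rng.randint(6, 26),
        "contrast_ratio": rng.uniform(2.0, 10.0),
        "elements": ["nav", "checkout_button"] if rng.random() > 0.05 else ["nav"],
    }

def statistical_qa(n_samples=1000, min_pass_rate=0.95, seed=0):
    # Instead of checking one screen, sample many generated outputs
    # and ask whether enough of them stay within acceptable bounds.
    rng = random.Random(seed)
    passes = sum(within_bounds(generate_variant(rng)) for _ in range(n_samples))
    rate = passes / n_samples
    return rate, rate >= min_pass_rate

rate, ok = statistical_qa()
print(f"pass rate: {rate:.1%}, acceptable: {ok}")
```

The shift is in the return value: not a boolean for one interface, but a pass rate over a population of them, judged against a threshold.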
It’s still political. Defaults are political by nature - consciously and unconsciously, they encode assumptions about who users are and what they need. If inference replaces design defaults, the political decisions move somewhere harder to see. They’ll bury themselves in the training data, in system prompts. Much less visible. Harder to challenge.
Defaults aren’t just about simplification. Even with their unavoidable flaws, they give people a shared experience to point at, argue about, and hold accountable. If a default discriminates, people can do something about it. When they dissolve, that shared reference dissolves too.
Now defaults are being dissolved by economics. It’s not necessarily a conscious decision on anyone’s part. That makes our usual responses - a new design pattern, a framework, a set of guidelines - less effective at mitigating the issue.
We can’t bolt a solution on later. Whatever replaces the social function of defaults has to be built into the infrastructure itself.
Further reading:
Death of the default - on shared reality in a world of adaptive systems.
Guthrie, S. Maia 200: The AI accelerator built for inference. January 2026.
Article photo by Alexandre Debiève on Unsplash.
