We’ve been getting used to algorithmic content for at least a decade.
It’s your social feed when it changed from chronological to “most relevant”. It’s the stories that rise to the top. And which stories are disappeared for you. Our platforms have taught us that different people live in very different versions of the same system. Even if they’re standing next to each other.
That was only the beginning.
What’s next isn’t serving algorithmic content. It’s serving algorithmic interfaces.
That’s another hugely consequential shift.
From content to experience
Algorithmic content changes what you can see.
Algorithmic interfaces change what you can do.
I’m not talking primarily about look and feel. Not color themes, branding, or configurable dashboards. It’s more than just a flexible version of a UI that lets us set our own preferences.
These are systems that infer, in real time, which actions, paths, and affordances you should even have available to you.
The interface becomes less a surface that you navigate, and more a negotiation - between your intent, the inferred context, behavioral probability, and institutional constraints.
It’s not just two users seeing different things.
It’s users who might not be offered the same possibilities.
Interfaces stop being a single “thing”
There’s a comfortable familiarity around traditional interfaces. They’re describable.
I can point at a screen and say this is how this works. I can draw it and document it. So can you - and we’ll recognize we’re talking about the same thing. We can complain about it together. Even when they’re complex, there’s a baseline stability that makes the interface legible.
Algorithmic interfaces erode that stability.
If we’re letting systems assemble our experiences dynamically - based on how the system infers our goals, prior behavior, and our predicted competence - there’s no single “what it looks like”. We don’t have a canonical version.
This isn’t an interface that’s designed once (though it will likely have design philosophy behind it). It’s continuously decided.
This can make it very effective. It can also make it harder for us to see.
Not configurable
Let’s be clear and precise about what this is not.
A configurable interface is something stable that the user has the capability to adjust. They make explicit choices about their preferences. Options are visible even if they’re unused. There is a shared set of underlying capabilities.
An algorithmic interface can invert that.
Now the system is adjusting itself before the user even arrives. So the choices are inferred and not chosen.
The system is mediating how to personalize on your behalf.
The interface is steering - and constraining - your behavior, whether you asked it to or not.
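To make the inversion concrete, here is a minimal sketch. All of the names here (`UserSignals`, `inferAffordances`, the specific thresholds) are hypothetical illustrations, not any real framework’s API. A configurable interface filters a shared, known set of options by explicit preference; an algorithmic one decides which options exist at all, from inferred signals, before the user has chosen anything.

```typescript
type Affordance = "export" | "bulkEdit" | "advancedSearch" | "shareLink";

interface UserSignals {
  sessionsCompleted: number;
  errorRate: number; // 0..1, fraction of recent actions that failed
  usedAdvancedBefore: boolean;
}

// Configurable model: every option exists; the user toggles visibility.
// Hidden options remain part of a shared, inspectable set.
function configurableMenu(prefs: Record<Affordance, boolean>): Affordance[] {
  const all: Affordance[] = ["export", "bulkEdit", "advancedSearch", "shareLink"];
  return all.filter((a) => prefs[a]);
}

// Algorithmic model: the system decides what is offered before the user arrives.
// The choices are inferred, not chosen - and some options may never appear.
function inferAffordances(signals: UserSignals): Affordance[] {
  const offered: Affordance[] = ["export", "shareLink"]; // baseline for everyone
  if (signals.usedAdvancedBefore || signals.sessionsCompleted > 20) {
    offered.push("advancedSearch");
  }
  if (signals.errorRate < 0.1 && signals.sessionsCompleted > 5) {
    offered.push("bulkEdit"); // withheld from users the model deems "risky"
  }
  return offered;
}
```

Note the asymmetry: in the first function the full option set is visible in the signature; in the second, a user who never crosses the inferred thresholds has no way to discover that `bulkEdit` exists.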
Hidden options, unequal paths
I think one of the most uncomfortable implications of algorithmic UI is this:
Some options may never appear for some people.
They’re not disabled in a settings panel. The user hasn’t actively opted out. But the system has inferred - correctly or not - that they’re unnecessary for that user. Unsafe, too complex, irrelevant, or unlikely to succeed.
That’s a subtle shift in power. The system isn’t neutral.
If the option is invisible, how does anyone know how to ask for it? When those paths differ in quiet ways, how can you compare? Two people could use the same product, have the same aims, arrive at different outcomes - and not be able to explain why.
Did it choose the correct context? Was it a structural bias? Was it a protective action? The interface doesn’t necessarily offer us the answers.
It may be an inevitable shift
This isn’t a drive for aesthetic ambition. It’s a drive for efficiency.
AI systems are better at inferring than instructing. They predict what’s next, identify patterns, and minimize friction. Static interfaces can create costly friction when context is volatile.
There are real advantages to algorithmic UI:
Reduced cognitive load
Faster task completion
Fewer dead ends
Adaptive accessibility
Interfaces meet users where they are
This is rational from a system perspective. If people only need a small part of the space, why expose all the possibilities and risk paralysis of choice? Optimize locally rather than force everyone through the same pathways.
Users benefit from this:
Onboarding that doesn’t overwhelm new users by showing them advanced options.
“Safe mode” interfaces that can be triggered for certain behavioral profiles.
Dense control surfaces for expert users, while others are more guided.
Interfaces that change between sessions based on learned and inferred confidence.
It’s not a theoretical appeal - it can be a genuinely improved experience.
Encoding preferences within the structure
Algorithmic interfaces don’t eliminate bias. They can operationalize and entrench it.
Every inference made is based on some model of what success, competence, risk, and clarity look like. And those models are built from training data, cultural assumptions, and historical precedent. Just as today’s language models are trained disproportionately on English text, and so become more fluent in English contexts.
So preferences become structural.
Certain communication styles are seen as “confident”.
Some interaction patterns are read as “efficient”.
Certain approaches to problem-solving are treated as “normal”.
And others are quietly discouraged. Not explicitly, but because the model tends to omit them.
Algorithmic UI shapes what gets shown; it doesn’t announce what’s forbidden. That makes these preferences and biases hard to detect. The system can feel helpful and fair, even as it nudges users towards the behaviors and outcomes it thinks are best.
Over time, this reinforces existing advantages. The system learns who it works well for.
So it doesn’t just adapt to users, it starts to adapt users to itself.
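The reinforcement loop in the last two paragraphs can be sketched in a few lines. This is a toy model with made-up names and numbers (`AdaptiveOption`, the 0.5 threshold, the 0.2 learning rate), not a description of any real product. Once an option’s inferred success score drops below the display threshold, it stops being shown, stops generating new evidence, and can never recover:

```typescript
class AdaptiveOption {
  score = 0.5; // inferred probability that showing the option "works"

  shouldShow(): boolean {
    return this.score >= 0.5;
  }

  // Outcomes are only recorded when the option was shown - which is
  // exactly what lets early failures lock a user out permanently.
  recordOutcome(succeeded: boolean): void {
    const target = succeeded ? 1 : 0;
    this.score += 0.2 * (target - this.score); // exponential update
  }
}

const option = new AdaptiveOption();
// A couple of early failures push the score below the display threshold...
option.recordOutcome(false);
option.recordOutcome(false);
// ...after which the option is hidden, no new outcomes arrive, and the
// score is frozen: the system has "learned" who it works well for.
```

The failure mode isn’t the update rule itself - it’s that visibility gates the evidence. Users the model already serves well keep generating positive signals; everyone else silently stops counting.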
What we need to see
As interfaces become more inferred, our challenges are going to shift.
We can’t simply ask whether outcomes are fair, or whether the system performs well on average. We need to understand and reason about the underlying possibilities - which paths were offered, which were hidden or never considered, and why.
It’s harder to define who or what is accountable, because we’re no longer pointing to a single interface that “doesn’t work” for some people. Critique is complex because the experience is more provisional. How do we apply governance when the system isn’t presenting consistent surfaces to view?
Algorithmic UI is a new way of building and presenting products. It’s also a new way of mediating the end user’s agency. Deciding, moment by moment, what the system will allow you to do.
And our capacity to assess needs to expand from understanding what just happened, to what else might have been possible.
Further reading:
Death of the default - on shared reality in a world of adaptive systems.
Walsh, D. Generative AI isn’t culturally neutral, research finds. MIT Sloan School of Management, Sep 2025.
Nielsen, J. AI: First New UI Paradigm in 60 Years. NN/G, Jun 2023.
Article photo by Michael Dziedzic on Unsplash
