How to build the thing [Part 5: AI as translator, not controller]
AI should bridge the gap between intent and execution; humans need to shape the result
AI is everywhere now. We’re plugging it into workflows, grafting it onto existing tools, layering it across processes that weren’t designed for it. “Do AI!”
And it’s new, powerful, and it presents as a little bit magical. So it gets treated as the centerpiece, the point of things. AI is the headline.
But (and you may have heard this already in this series!), AI is just another thing that gets us to the thing.
In this fifth part of the series, I’m focusing on the role AI should play in systems: not as a controller, but as a translator - something that’s getting better and better at turning human intention into motion.
When AI is a controller, it constrains us.
When AI is a translator, it amplifies our intent.
From “what I mean” to “what happens”
Most work involves translation. And that usually includes multiple iterations to ensure those translations are accurate.
intent → articulation → structure → execution
AI’s immense value lies in its capacity to compress those stages (sometimes collapsing or eliminating them entirely as distinct steps in design and delivery).
Describe a flow, and see it appear.
Sketch an idea - in images or in prose - and see the scaffolding form.
Outline a page, see a worthwhile draft arrive.
The human step isn’t eliminated here. In fact it remains absolutely critical. But AI shortens the ramp between thought and action. As the slogan at Knapsack says:
Think it. Ship it.
Think about when AI actually feels magic. It’s when you’re communicating directly with the system in your own language - no special syntax, no ceremony, no intermediate tooling.
Let it learn your dialect, and make suggestions
A translator proposes.
Lean too hard on AI and it automates the decision-making process in a way that’s brittle. And if AI isn’t grounded in how your teams think, build, and speak, it’ll become generic - detached from your lived reality.
A translator knows the dialect. And the goal isn’t to hand the work to the AI, it’s to let AI support the work that’s already moving forward.
Good AI-supported systems:
Offer a starting point rather than jumping straight to a conclusion
Ask clarifying questions to call out ambiguous intent
Provide options, not a prescriptive solution
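As a sketch of what those three behaviors could look like in practice, here’s a minimal, hypothetical "translator-style" response shape. Every name here is illustrative - this isn’t any real API, just the pattern: a starting point, questions back to the human, and options instead of a verdict.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a translator-style response: it proposes
# rather than decides. All names here are illustrative.
@dataclass
class TranslatorResponse:
    starting_point: str  # a draft to react to, not a conclusion
    clarifying_questions: list = field(default_factory=list)
    options: list = field(default_factory=list)

def translate_intent(request: str) -> TranslatorResponse:
    """Turn a loosely stated intent into a proposal the human can shape."""
    response = TranslatorResponse(
        starting_point=f"Draft based on: {request!r}"
    )
    # Ambiguity becomes a question back to the human, not a silent guess.
    if "page" in request and "audience" not in request:
        response.clarifying_questions.append(
            "Who is the audience for this page?"
        )
    # Offer paths, not a single prescriptive answer.
    response.options = [
        "Minimal version: just the core flow",
        "Expanded version: includes edge cases",
    ]
    return response

result = translate_intent("outline a landing page")
print(result.clarifying_questions)
```

The point of the shape is that every field hands control back: the draft is reactable, the questions invite correction, and the options keep more than one path open.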
AI speed combined with human excellence means widening the available paths, not narrowing them.
I’m sure we’ll see AI flatten many organizations into something average. AI is becoming quite capable of getting things to “OK-ish”.
But used well, it should raise the floor and the ceiling using intuition that you’ve already codified - your dialect.
(This is why Part 4 - codifying intuition - comes before this one.)
Conversation, not command
I think this is one of the biggest misconceptions around AI today - and where much of the fear comes from.
The value of AI comes from interaction, not one-off prompts. Single instructions can’t capture complex intentions - that’s where we get “AI slop”. But a conversation can refine those intentions.
Translators let people correct, redirect, reinterpret, push back, encourage critique. And, perhaps more importantly, they let people take over.
AI gives the path structure, but the person stays in the loop as the owner, shaping and validating.
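That loop - propose, correct, redirect, take over - can be sketched in a few lines. The feedback below is scripted so the example runs standalone; in a real system the `propose` step would call a model, and the feedback would come from a person. All names are hypothetical.

```python
from typing import Optional

def propose(intent: str, feedback: Optional[str]) -> str:
    """Stand-in for an AI proposal step; a real system would call a model."""
    if feedback is None:
        return f"First draft for: {intent}"
    return f"Revised draft for: {intent} (incorporating: {feedback})"

def converse(intent: str, human_feedback: list) -> str:
    """The human corrects, redirects, or takes over; the AI never decides alone."""
    draft = propose(intent, None)
    for feedback in human_feedback:
        if feedback == "I'll take it from here":
            # The person can always take over; the AI steps back.
            return draft
        draft = propose(intent, feedback)
    return draft

final = converse(
    "a signup flow",
    ["fewer steps", "use our brand voice", "I'll take it from here"],
)
print(final)
```

The structural detail that matters is the early return: taking over is a first-class move in the loop, not an afterthought.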
Here’s a simple test for whether an AI integration is “good”:
Does it remove friction while preserving agency?
If your answer is yes, then it probably belongs there. If it’s quietly centralizing decision-making in a model nobody understands, you’ve got a problem.
AI should exist to empower humans
A healthy AI-supported system increases the capability of the people who interact with it.
People get more space to think. And they can use that space, with AI’s support, to:
Focus on the most meaningful parts of the work
Clear away repetition
Reduce the cognitive challenge of starting from a blank page
Accelerate - and thus encourage - greater exploration before committing
And it should be additive. If the system can’t function without the AI, then you’ve created a dependency, not a capability.
AI is a multiplier of human judgment, not a replacement for it. And the more “the thing” comes into view, the more the AI should fade into the background.
Further reading
Halleck, Quinn. “Does Film Survive AI?” TEDx Talks, YouTube, 2024.
Wang, Ge. Humans in the Loop: The Design of Interactive AI Systems. Stanford University, October 2019.
Part 5 in a six-part series on building in a way that serves real human outcomes.
Part 6 will be published in two weeks.
