We’ve tried chatting with our computers. There was Clippy. My Amazon Alexa became the most over-engineered (and yet incredibly useful) voice-activated cooking timer possible. But the dream of natural language as a primary interface kept arriving...and not quite working. Even the impressive demos were underwhelming in daily reality.
That’s shifted.
I interact with my computer almost entirely through conversation now. Not novelty. Not occasionally. It’s the primary mode.
I describe what I want in plain language. Despite the discomfort I mentioned in an essay a few weeks ago, I increasingly do so through voice as well as keyboard. Things get built. Files get created. Plans are written. Code gets written. Systems get modified.
And then I look at what was made and decide whether it was right, or how to modify it. And we chat some more.
I don’t think I’m alone. In my circles, at least, it feels like more people are operating this way. Directing tools through text and conversation, not clicking through interfaces that hold your hand.
GUIs didn’t disappear. But for a growing number of people, they’re something you produce for others. They’re not where I operate from.
One of these things is not like the other
Conversational interfaces are the shared mechanic.
What’s underneath that interface is not all the same.
I use Claude in a chat window. I use Claude Code in the terminal. In both cases I type, something responds, I evaluate the output. We have a conversation. The interaction pattern is recognizable across both of them.
Claude Code has system access. It writes files. It runs commands. It asks my permission to modify things. And if I say “yes, do that”, I’m not just approving the generation of a document. I’m approving an operation on my own system.
The “yes” can feel the same in Claude.ai as it does in Claude Code. The blast radius is not.
Approving an operation in a chat window isn’t the same as approving a system permission in a dialog box. And on one level we know that.
But there’s an older, visceral aspect to this. Conversation triggers social trust.
We trust things that communicate like people.
When Claude asks me ever-so-nicely whether it can have bash access, a part of my brain processes that like a colleague asking me for a favor. Not like a root-access prompt.
When it’s just a dialog, I can dismiss it. When it’s a colleague, I can’t dismiss them.
Claude Code is genuinely the most capable thing I’ve used for building. That capability is the point. It’s exciting. But the same quality that makes it feel trustworthy - it’s fluent, reasonable, amenable - is exactly what makes the trust worth examining.
What’s going on?
OK, let’s think about the first part first. Before we get cautious.
It’s really fucking cool that you can chat with your computer. Like actually chat to it, in ordinary language, and have it do useful stuff.
That’s new. That’s sci-fi made real for a lot of us. A genuine shift in how humans relate to machines. Maybe the biggest shift in interaction since the invention of the mouse and the window.
For all my life, computers needed me to learn their language to communicate. Commands, syntax, interfaces. All shaped so that the machine could parse them. We adapted to the tool.
Now...the tool adapts to me. It’s not perfect. It’s inconsistent. But so are people. At a very fundamental level, I think the direction of translation has reversed.
That’s not trivial. It’s not Clippy. It’s not a better search box.
It’s different.
But there’s a question worth asking. I don’t have a clean answer to it yet.
What does it mean to approve operations that you don’t fully see? What does it mean to let something that feels like a real conversation make changes to systems that actually matter? How much do we care how the thing was built, if we ask for something and our agent makes it work?
We haven’t built the intuition to deal with that yet.
I think the interface arrived well before our instincts had time to adjust.
Further reading:
Gibbons, S., et al. The 4 Degrees of Anthropomorphism of Generative AI. Nielsen Norman Group, Oct 2023.
Milano, B. Your chatbot may be the friend that isn’t. Harvard Law Today, Oct 2025.
Numan, G. Are ‘Friends’ Electric? YouTube (official), Jan 2020.
