Arthur C. Clarke wrote:
"Any sufficiently advanced technology is indistinguishable from magic."
That’s as much a warning as a wonder.
When I have a health issue, I don’t ask a magician.
I said something along those lines at a Knapsack Patterns event in Houston. It seemed to land. Not because it's a surprising claim, but because it has implications for what we've accepted, or what we're being encouraged to accept.
AI is, to use a term I’ve recently fallen in love with, bonkers. It’s exciting. I can do stuff that makes me feel like…no other term for it…a wizard.
And in most of the ways that matter for really consequential decisions, it's a black box. Its outputs are fluent and confident, and they come with no receipts. Which, for low-stakes work, is fine. A draft, a summary, a first pass on something you intend to rewrite. A wrong answer is low cost. And it's a tight feedback loop: look at what you asked, look at the outcome, and course correct.
But some domains aren’t like that.
Medicine. Law. Education decisions for my children. These are compounding decisions. An error now doesn’t surface until three steps later. At which point the trail is as important as the outcome. “It worked” or “it didn’t work” isn’t sufficient information.
These are domains where we need to know why. We demand provenance. Where did the answer come from? We want to know what it was weighed against, and who’s accountable when it’s wrong.
AI slop is a real thing. Confident hallucination. Plausible fabrication. An OK-looking interface that falls apart when you press it. Answers that are fluent and wrong.
And, again, in low-stakes work that's just annoying. In high-stakes work it's consequential, even dangerous. And the model doesn't know the difference. Its confidence is the same regardless of consequence. It'll produce its effect either way.
That’s what you get with magic.
An output. No mechanism.
Magicians aren’t there to solve problems. They’re there to produce wonder. An effect. Their performance is the whole point - and they make it work with confidence, fluency, apparent certainty…and a good bit of misdirection. But carrying that into different domains doesn’t work.
At the Houston event we talked about AI-assisted patient journeys. Clinical staff asking questions about their internal AI agent: Where does this information come from? How certain are you? What’s the difference between this “specialist” agent and ChatGPT?
These are provenance questions, not technology questions.
These users aren't afraid of AI in the abstract. But they are afraid of being accountable for outcomes they can't trace back to a defensible source.
That’s a perfectly rational fear.
Think about journalism: we build sourcing requirements for a reason. Forensic evidence has chain-of-custody requirements. Finance needs audit trails. This isn't paranoia.
Accountability requires reconstruction. The ability to go back and show your work. And the confidence that if you started again with the same data, you’d reach the same conclusion. That’s not bureaucracy, it’s the whole point.
AI, as most people encounter it today, doesn't show its work. In many cases it cannot. Recently, my CEO asked Claude to justify a financial metric it had given him. Its answer:
I have to be honest, I made it up.
We place tests and measures on humans. We need to apply them to AI outputs in the same way. Scientific method. Peer review.
Smart people can be wrong. Confident can be wrong. Well-intentioned can be wrong.
The defense we have, the defense we rely on, is structured scrutiny. Documented reasoning and the ability to reproduce results.
AI is all of those people. At scale. Simultaneously.
And it has no ego. Which should actually make rigorous review easier.
There's also no reason why demanding this would measurably slow AI down. And it's what makes AI trustworthy when it matters.
AI technology is not slowing down. It’s not going to. It doesn’t need to.
But magic and medicine are not the same discipline. The distinction isn’t optional.
The more important and consequential a domain is, the worse any kind of deviation from reality becomes. AI hallucinations aren’t a quirk to manage around. They’re a fundamental failure. A failure that demands the same infrastructure we’ve built to cope with the fact that humans are fallible.
Provenance isn’t a constraint on AI. It’s an empowerment of it. It’s what moves AI from magic to reality.
Magic is wondrous. I like watching it.
I live in reality.
Further reading:
Data Provenance for AI. MIT Media Lab.
Miller, K. Houston, We Have a Problem: Building Trust in the Age of AI. Knapsack Blog, Mar 2026.
Delistraty, C. A.I. Isn’t Magic. Lots of People Are Acting Like It Is. New York Times, Sept 2025.
Article photo by Joseph Two on Unsplash.
