<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Robin Cannon: Field Notes]]></title><description><![CDATA[Professional writing from the edges of product, design, and digital systems. Drawing on my role as VP of Product at Knapsack, and years leading design systems and product strategy at IBM and J.P. Morgan — the patterns, decisions, and dynamics that don't fit neatly into case studies. Systems thinking, strategy, and leadership from inside the work.]]></description><link>https://www.robin-cannon.com/s/field-notes</link><image><url>https://substackcdn.com/image/fetch/$s_!maYW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c62c87-7ba3-444c-ad20-4a4cf617a8f7_1024x1024.png</url><title>Robin Cannon: Field Notes</title><link>https://www.robin-cannon.com/s/field-notes</link></image><generator>Substack</generator><lastBuildDate>Sat, 11 Apr 2026 05:38:58 GMT</lastBuildDate><atom:link href="https://www.robin-cannon.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Robin Cannon]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[shinytoyrobots@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[shinytoyrobots@substack.com]]></itunes:email><itunes:name><![CDATA[Robin Cannon]]></itunes:name></itunes:owner><itunes:author><![CDATA[Robin Cannon]]></itunes:author><googleplay:owner><![CDATA[shinytoyrobots@substack.com]]></googleplay:owner><googleplay:email><![CDATA[shinytoyrobots@substack.com]]></googleplay:email><googleplay:author><![CDATA[Robin Cannon]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Claude to Claude 
Code bridge]]></title><description><![CDATA[The key that gets you in the room]]></description><link>https://www.robin-cannon.com/p/the-claude-to-claude-code-bridge</link><guid isPermaLink="false">https://www.robin-cannon.com/p/the-claude-to-claude-code-bridge</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 31 Mar 2026 15:01:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ea420ca3-f1a9-4e85-b536-82cb81436702_5593x3621.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It took me a while to get my head round the idea that Claude and Claude Code live in separate environments. They&#8217;re both called &#8220;Claude&#8221;, right? But I didn&#8217;t really understand the practical implications until it started getting in my way.</p><p>I was out. Had my phone in my hand. And wanted to run a competitive analysis to see how Knapsack was addressing the market issue raised in an article I&#8217;d just read. Everything I needed was in my <code>cpo-skills</code> suite. Context, background data sources, routing tables to know which command to pull for which task.</p><p>All sitting in my vault. Perfectly organized. Inaccessible to me.</p><p>Because Claude Code lives in my CLI. My CLI lives on my laptop. My laptop was at home.</p><p>Claude.ai was right there. And it had far less that could help me.</p><p>It made me take a closer look at what claude.ai skills could do. My assumption - skills are prompts. A few hundred words providing context, available to trigger. Useful for tasks. But a long way from the depth I&#8217;d built in Claude Code.</p><p>I&#8217;ve built for depth. My <code>cpo-skills</code> suite has ten discrete command files, a delivery pipeline context document, and utilizes my specialized data-gathering agents wired to Linear, Slack, GitHub, etc. 
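</p><p>To make the routing table idea concrete, here&#8217;s a minimal sketch of the shape such a table might take. The task names and file paths are invented for illustration - they aren&#8217;t my actual vault layout:</p>

```python
# Hypothetical sketch of a skill's routing table: a small mapping from task
# type to the command file a gateway skill should fetch from the vault.
# Task names and paths are invented for illustration.
ROUTING_TABLE = {
    "competitive-analysis": "commands/competitive-analysis.md",
    "roadmap-review": "commands/roadmap-review.md",
    "metrics-digest": "commands/metrics-digest.md",
}

def route(task: str) -> str:
    """Return the command file to fetch for a given task."""
    if task not in ROUTING_TABLE:
        raise ValueError(f"No command registered for task: {task}")
    return ROUTING_TABLE[task]

print(route("competitive-analysis"))  # commands/competitive-analysis.md
```

<p>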
My <code>thought-leadership</code> suite carries its own context file - posting cadences, metrics, content pillars, conferences I&#8217;m tracking.</p><p>When Claude Code triggers these, that&#8217;s more than a prompt. It&#8217;s loading a working system.</p><p>But I realized that a claude.ai skill can be a gateway to that same system. On trigger, instead of containing all the intelligence itself, it instructs Claude to fetch it. It reads the README from my vault, loads the context file, consults the routing table, and pulls the specific command for the task at hand.</p><p>The files live in GitHub. Claude fetches them directly during the session. The skill in claude.ai is the key. The vault on GitHub is the room it opens.</p><p>My <code>cpo-skills</code> and <code>thought-leadership</code> skills in my Claude.ai project work in exactly this way. Each one is a few dozen lines. When I trigger the skill, it expands into the full suite I built in Claude Code. That&#8217;s context, routing logic, specialized commands, and in some cases an agent. All also available in that GitHub vault.</p><p>If you&#8217;re working in claude.ai you see a simple skill with a clear description. What runs is everything in the vault, accessed remotely.</p><p>This matters beyond just my own workflow. Claude Code solves for depth. That&#8217;s complex, stateful, multi-step work with persistent context. Claude.ai solves for accessibility. No terminal, no config files, far lower technical barrier.</p><p>The bridging pattern doesn&#8217;t collapse the distinction. The complexity stays in the vault, versioned and maintainable. The interface stays simple.</p><p>The person who builds the vault and the person who opens the door don&#8217;t have to be the same.</p><div><hr></div><h4>Further reading:</h4><ul><li><p>Salcan, Y. E. 
<em><a href="https://medium.com/@yunusemresalcan/claude-vs-claude-code-vs-cowork-which-one-do-you-actually-need-66d3952a2eb4">Claude vs Claude Code vs Cowork &#8212; Which One Do You Actually Need?</a> </em>Medium article, Feb 2026</p></li></ul><p><em>Article photo by <a href="https://unsplash.com/@alluntsyatko?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Alla Bila</a> on <a href="https://unsplash.com/photos/a-weathered-red-double-door-under-a-stone-archway-qys_X17KRRE?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for articles on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[We've always known the destination]]></title><description><![CDATA[A thirty-year detour to somewhere we knew we were meant to go]]></description><link>https://www.robin-cannon.com/p/weve-always-known-the-destination</link><guid isPermaLink="false">https://www.robin-cannon.com/p/weve-always-known-the-destination</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 24 Mar 2026 15:01:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b6658b15-a105-42ae-84ad-cdc930cb56d9_6000x3376.jpeg" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>There has always been one obvious destination for digital product delivery. A single interface where design intent and working reality are the same thing.</p><p>Not translated.</p><p>Not approximated.</p><p>Not handed off from one side and reconstructed on the other.</p><p>The same thing.</p><p>Our industry has been trying to build that for thirty years. We&#8217;ve gone from building bad versions of the right thing to building good versions of the wrong thing.</p><p>The jokes wrote themselves. Dreamweaver sites. FrontPage sites. If you were of a certain rebellious bent, HotDog sites. </p><p>These used to be mocked because they were the telltale sign of someone who didn&#8217;t really know what they were doing. Table-based layouts, inline styles, spaghetti markup so bloated that any developer would want to quietly rebuild the whole thing from scratch instead of fixing it.</p><p>They were also empowering tools for a lot of people. They let you make stuff that was real. They weren&#8217;t being mocked because they were trying to unify the visual and the functional in a single interface. That instinct was right. They were mocked because of how much they corrupted the code side of the equation.</p><p>The canvas was easy to navigate.</p><p>The code output was garbage.</p><p>So the industry corrected. Realistically, given the technical limitations. Built an organizational culture around the separation of disciplines.</p><p>Serious designers used serious design tools.</p><p>Serious developers wrote serious code.</p><p>And between them a handoff ritual grew - redlines, specs, prototypes, tickets.</p><p>We created a workaround dressed up as a workflow.</p><p>The separation is artificial. We&#8217;ve always known this to some extent. Design systems were an obvious admission. Design intent encoded as structured, reusable truth rather than redrawn from scratch on every new screen. 
Tokens, components, semantic definitions; shared language that both sides could read. I&#8217;ve often joked that the irony of the name &#8220;design system&#8221; is that its primary consumers are usually developers.</p><p>AI closes the remaining distance. When structured design context can be interpreted directly into working interfaces, the translation layer becomes unnecessary. The middle dissolves, and the destination comes into view.</p><p>It&#8217;s why I find these code-to-canvas offerings so strange. Code-to-canvas takes a working interface - real interactions, data, behavior - and converts it back into static frames. </p><p>It argues that collaboration is only possible on drawings of the real thing, not the real thing itself.</p><p>Dreamweaver and FrontPage, for all their failures, at least understood where they needed to go. The visual and the functional needed to live together. They just didn&#8217;t have the technology to make their ambition real. The code they generated was the limitation, not the vision. </p><p>You can forgive a tool for being ahead of its time. But the technology exists now to make the canvas genuinely real - connected, live, executable. And it&#8217;s harder to forgive a deliberate turn away than a premature attempt at the right destination.</p><p>Our destination hasn&#8217;t changed. A canvas as a live interface into the system. The real thing made navigable, editable, collaborative. 
We&#8217;ve known that&#8217;s where we were going for a long time.</p><p>Surely this time.</p><div><hr></div><h4>Further reading</h4><ul><li><p><em><a href="https://www.robin-cannon.com/p/the-digital-workflow-is-obsolete">The digital workflow is obsolete</a></em>, on the collapse of the handoff model.</p></li><li><p><em><a href="https://www.robin-cannon.com/p/code-to-canvas-is-bonkers">Code to canvas is bonkers</a></em>, on Figma&#8217;s specific wrong turn.</p></li><li><p><em><a href="https://www.webmasterworld.com/html_editors/347.htm">FrontPage vs DreamWeaver</a></em>. Webmaster World.com discussion thread, July 2003.</p></li><li><p>Smith, E. <em><a href="https://tedium.co/2017/03/02/microsoft-frontpage-history-web-design-wysiwyg/">Your Code is Junky</a>.</em> Tedium, March 2017.</p></li></ul><p><em>Article photo by <a href="https://unsplash.com/@d_mccullough?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Daniel McCullough</a> on <a href="https://unsplash.com/photos/an-architect-working-on-a-draft-with-a-pencil-and-ruler-HtBlQdxfG9k?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div 
class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The design token cargo cult]]></title><description><![CDATA[How a useful tool can become dogma]]></description><link>https://www.robin-cannon.com/p/the-design-token-cargo-cult</link><guid isPermaLink="false">https://www.robin-cannon.com/p/the-design-token-cargo-cult</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 17 Mar 2026 15:01:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e64fab7a-7d14-466f-bc1e-f8472749686f_4032x3024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I was at IBM, Anna Gonzales was the thought leader on the token architecture for the Carbon Design System. It was tight, disciplined, and opinionated. And it worked. The structure encoded decisions the team had already fought for and made: what the system constrained, what it left open, where a component&#8217;s responsibility ended and a product team&#8217;s began. </p><p>You can read the system&#8217;s philosophy in how the tokens are organized.</p><p>The tokens aren&#8217;t what made the system work. The convictions are what made it work. The design tokens were an artifact of that conviction.</p><p>That distinction is the argument.</p><div><hr></div><p>The W3C Design Tokens Community Group published its first stable spec at the end of 2025. It&#8217;s genuinely good work. Years of collaboration to solve a hard coordination problem. How to share design decisions without everything fracturing every time someone changes a color.</p><p>The spec is deliberately minimal. It defines tokens, their types, and a reference system that lets one token point to another &#8212; so <code>color.text.primary</code> can reference <code>color.palette.black</code>, and changing the palette propagates everywhere. 
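</p><p>The reference behavior is worth seeing concretely. A minimal sketch, assuming dotted token names and the spec&#8217;s curly-brace alias syntax - the token names and values here are illustrative, not from any real system:</p>

```python
# Minimal alias resolution, loosely modeled on the W3C Design Tokens draft's
# "{group.token}" reference syntax. Token names and values are illustrative.
tokens = {
    "color.palette.black": "#161616",
    "color.text.primary": "{color.palette.black}",
    "color.icon.primary": "{color.text.primary}",
}

def resolve(name: str, table: dict) -> str:
    """Follow {...} references until a literal value is reached."""
    value = table[name]
    while value.startswith("{") and value.endswith("}"):
        value = table[value[1:-1]]
    return value

# Change the palette once; every alias that points at it follows.
tokens["color.palette.black"] = "#000000"
print(resolve("color.text.primary", tokens))  # #000000
```

<p>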
It adds <code>$extends</code> for group inheritance, a Color module with modern color space support, and a Resolver module for theming and context. </p><p>It&#8217;s a sensible, focused infrastructure that solves real problems. It&#8217;s the kind of foundation that would underpin exactly the discipline Anna built at IBM.</p><p>What practitioners have built <em>around</em> that foundation is something else. </p><p>The spec is agnostic about tiers. It defines how to express token relationships, not how many layers you should have. </p><p>The community is far less agnostic. Three-tier has become doctrine: primitive tokens at the base, semantic tokens that make purposeful claims about use, component tokens scoped to specific components. This is &#8220;mature token architecture&#8221; in most design systems discourse.</p><p>The gap between doctrine and practice is instructive. </p><p>Three-tier is what gets taught and recommended. Two tiers is what the major systems actually implement. </p><p>Carbon&#8217;s architecture doesn&#8217;t map neatly onto the primitive/semantic/component model - it has its own layering logic built around UI depth. Polaris has moved away from its component token layer. Material Design 3 publishes reference tokens and system tokens, and stops there.</p><p>Three-tier is the aspiration. Two-tier is what survives contact with a real system.</p><p>That gap should be a signal. The canonical systems couldn&#8217;t fully sustain the doctrine. And yet the doctrine keeps getting taught as the definition of maturity.</p><p>The problem isn&#8217;t the spec.</p><div><hr></div><p>The spec - unavoidably - creates <em>a thing to make</em>. &#8220;We&#8217;re implementing the W3C spec&#8221; can start to feel like a north star, when a real north star is missing.</p><p>At J.P. 
Morgan, there was a recurring tension: debates on token naming strategy and architecture kept coming before anyone had answered the simpler question - <em>what is this design system for?</em></p><p>Naming debates aren&#8217;t a path to that clarity. They can be a replacement for it.</p><p>For Anna at IBM, the tokens were downstream. Philosophy came first. Tokens encoded the philosophy.</p><p>Where I&#8217;ve seen more struggle - at JPM, at some of the design systems I&#8217;ve worked with at Knapsack - is when that order is inverted. Taxonomy comes first, and the thinking is supposed to emerge from it. Sometimes it does. Often the taxonomy becomes the only explicit structure of the system, and so it becomes load-bearing.</p><p>Which leads to a design system whose deepest-held opinion is how to name its hover state.</p><p>Teams run naming conventions workshops because mature systems have naming conventions. They produce token JSON because good systems produce token JSON. That&#8217;s a cargo cult pattern. The mechanism becomes the mission.</p><div><hr></div><p>There is a failure mode you can identify: token counts scaling combinatorially with component complexity.</p><p>The Tetrisly design system acknowledged this problem. Their button component reached over 500 tokens. It enumerated every property of every state of every variant: background, border, text, icon, default, hover, focus, active, disabled, primary, secondary, danger, ghost, large, medium, small, dark mode, high contrast.</p><p>Before long you end up with <code>button-background-color-primary-large-hover-dark</code>, and hundreds of siblings.</p><p>The spec supports this. But at this point the abstraction provides no value over well-organized CSS. The overhead is real: tooling dependency, Figma sync, governance process. 
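</p><p>The arithmetic behind that explosion is easy to sketch. The dimensions below are drawn from the categories just listed, not Tetrisly&#8217;s actual schema:</p>

```python
# Back-of-envelope: enumerating every property x variant x size x state for a
# button component. Dimensions are illustrative, taken from the essay's list.
from itertools import product

properties = ["background", "border", "text", "icon"]
variants = ["primary", "secondary", "danger", "ghost"]
sizes = ["large", "medium", "small"]
states = ["default", "hover", "focus", "active", "disabled"]

names = [
    f"button-{p}-color-{v}-{sz}-{st}"
    for p, v, sz, st in product(properties, variants, sizes, states)
]
print(len(names))  # 240 - before dark mode and high contrast multiply it again
```

<p>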
But there&#8217;s no additional leverage when your variable names map one-to-one to CSS properties you have to write anyway.</p><p>The promise of tokens is leverage - fewer, more powerful constructs that express more than just flat specifics. 500 tokens is a clear failure of that promise - you may as well be writing CSS. Tetrisly acknowledged this, and they&#8217;ve very deliberately thinned their &#8220;component tier&#8221; so their model is much closer to two-tier than three-tier.</p><div><hr></div><p>Phillip Lovelace recently argued that tokens are even more important in an AI-driven workflow - token taxonomy can be an API the AI agent consumes. Semantic naming lets AI stop guessing your brand.</p><p>This is a worthwhile floor argument. AI generating UI from a token file produces more consistent output than generating from nothing. The W3C spec makes that even more reliable.</p><p>A floor isn&#8217;t a ceiling.</p><p>AI can traverse a token graph and resolve a name to a hex value. It can&#8217;t tell you why that color is right for a primary hover state, or whether a destructive action should use the same token. Should a payment confirmation defer to stricter contrast constraints?</p><p>Those aren&#8217;t AI limitations. </p><p>Those are limitations of the information that tokens are supposed to carry.</p><p>Tokens encode what things look like. Not why. Not when. Not the conditions that change the answer.</p><p>The convictions in the best systems come from the decisions that precede them. An AI with access to those decisions - rules, intent, context - can do more interesting things than resolve color aliases.</p><div><hr></div><p>Tight token architecture delivers real value. Tokens are a powerful artifact of thinking, but not a substitute for it. The W3C spec describes something of genuine worth, when it&#8217;s built in the right order.</p><p>The design systems that work treat tokens as output. Philosophy first, constraints second, governance third. 
Tokens encode the decisions that have been made. But only <em>if </em>those decisions have been made. </p><p>Systems that struggle have the sequence backwards. And the quality of the spec actually makes the inversion easier. It&#8217;s a rigorous blueprint for the mechanism, and the foundation is left implicit.</p><p>Tokens with system conviction are infrastructure. Tokens that substitute for it are dogma. The difference is everything.</p><div><hr></div><h4>Further reading:</h4><ul><li><p>Frost, B. <em><a href="https://bradfrost.com/blog/post/the-many-faces-of-themeable-design-systems/">The Many Faces of Themeable Design Systems</a></em>. bradfrost.com</p></li><li><p>Gonzales, A. <em><a href="https://medium.com/carbondesign/introducing-figma-variables-and-a-consolidated-all-themes-library-d4893d1b8920">Introducing Figma variables and a consolidated &#8220;All themes&#8221; library!</a> </em>Carbon Design Blog, Aug 2023.</p></li><li><p><em><a href="https://www.designtokens.org/tr/2025.10/">Design Tokens Technical Reports</a></em>. W3C Community Group, Oct 2025.</p></li><li><p>Lovelace, P. <em><a href="https://www.designsystemscollective.com/design-systems-are-having-their-moment-70674f8ab197">Design Systems Are Having Their Moment</a></em>. 
Design Systems Collective, Feb 2026.</p></li></ul><p><em>Article photo by <a href="https://unsplash.com/@jcanty123?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Jan Canty</a> on <a href="https://unsplash.com/photos/a-wooden-structure-sitting-on-top-of-a-rocky-beach-bz-FrwVCLDc?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[A design system isn't an aggregator. It's a contract.]]></title><description><![CDATA[Tools can aggregate assets. 
They can't make a system real.]]></description><link>https://www.robin-cannon.com/p/a-design-system-isnt-an-aggregator</link><guid isPermaLink="false">https://www.robin-cannon.com/p/a-design-system-isnt-an-aggregator</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 24 Feb 2026 16:03:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c60dcfc2-d84a-4e4b-b97d-0139f899a323_5472x3468.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This week, Mariana Rita published &#8216;<em>Stop paying for design system documentation you can build yourself&#8217;</em>.</p><p>Her argument is that paid design system documentation platforms like Zeroheight are obsolete. Figma&#8217;s API is accessible. LLMs can write docs. Open-source tooling is good. Build the whole thing yourself, in two weeks, for free.</p><p>It&#8217;s a practical guide. It has sound tooling advice.</p><p>The piece treats design system documentation like it&#8217;s an asset display problem. How do you surface your Figma components and code repos in one place, more cheaply than current products offer? That&#8217;s a reasonable question.</p><p>But it&#8217;s based on a definition of design systems that I think is fundamentally too shallow. That&#8217;s a starting point that leads to something that looks like a solution, but isn&#8217;t.</p><div><hr></div><p>The article describes documentation as an &#8220;aggregator - the one place that ties everything together.&#8221; </p><p>Sit with that word.</p><p>An aggregator is passive. It collects. It displays. It points at sources and renders them side by side.</p><p>A design system isn&#8217;t an aggregator.</p><p>It&#8217;s a contract.</p><p>A design system is the authoritative agreement between disciplines - design, engineering, product - about what is true, what is intentional, and why. Documentation doesn&#8217;t just display agreement, it&#8217;s where that agreement becomes canonical. 
Intent becomes instruction. &#8220;We discussed this on Slack&#8221; becomes &#8220;this is how we build things.&#8221;</p><p>If your docs platform is a viewer on top of your Figma files and your code repos, that&#8217;s not a system of record. It&#8217;s a window. It doesn&#8217;t resolve disagreements.</p><div><hr></div><p>A DIY build doesn&#8217;t solve one vital problem - and the article doesn&#8217;t name it.</p><p>Figma is not a design system.</p><p>Storybook is not a design system.</p><p>Figma is a tool for designers. Their working files, experiments, intentions, abandoned iterations. A designer&#8217;s environment - rich and full of things in-progress, deprecated, aspirational.</p><p>Storybook is a tool primarily for developers. It documents what has been implemented. It&#8217;s the engineering team&#8217;s environment - authoritative about code, somewhat indifferent to design rationale.</p><p>If you point an AI agent at your Figma library you&#8217;ll train it on your designers&#8217; hypotheses. Point it at your Storybook and you train it on engineering implementation that may lag design intent, exceed it, or quietly diverge.</p><p>Neither tool can answer the question on its own: <em>what is true?</em></p><p>When the Button component in Figma has rounded corners and the one in Storybook doesn&#8217;t, what does your documentation site say? If it just faithfully renders both sources, it&#8217;s documented your misalignment.</p><p>That&#8217;s not nothing. It&#8217;s useful to know. But it&#8217;s not a source of truth. It&#8217;s published disagreement.</p><p>The design system has to adjudicate. It has to carry a philosophical underpinning. A stance. Not just visual and technical inventory. It needs the decisions and reasoning that make the inventory coherent. Why did we make these choices? What are the governing principles? Which source wins when there&#8217;s a discrepancy? Why? 
</p><p>And how do we strive for excellence when there isn&#8217;t a source at all? A new pattern, an edge case, a platform you haven&#8217;t built for yet. The system of record doesn&#8217;t just arbitrate what exists. It also guides what should.</p><p>That&#8217;s not something that can be generated. It requires human judgment, authority, and a platform to enforce it.</p><div><hr></div><p>The article&#8217;s AI-readiness argument is sharp. And it&#8217;s where the initial error becomes most consequential.</p><p>The article is right to identify that design system documentation is increasingly an instruction layer for AI agents. &#8220;Robot food,&#8221; as my colleague Chris Bloom would describe it. The context that makes generated UI consistent and correct. And it&#8217;s also right that static platforms unable to expose data in a structured, machine-readable way fail at this job.</p><p>So it proposes replacing them with Docusaurus and MDX files maintained by a team. Which is also static. And manually curated. But it&#8217;s free and you own it.</p><p>But the answer to AI-readiness isn&#8217;t just a better documentation site. It&#8217;s a genuine system of record. Where documentation is generated from structured, authoritative, interconnected sources. Where the connection between intent and implementation is dynamic, not periodically reconciled.</p><p>Documentation that auto-updates when code changes isn&#8217;t a feature. It&#8217;s the entire point. </p><p>But it needs broader context to update in an intelligent, guided way.</p><p>Otherwise you&#8217;re just feeding AI a snapshot. A snapshot of what was true when someone last updated a file. Or a snapshot of the AI&#8217;s guess at what a conflict resolution was. 
And the gap between &#8220;what the docs say&#8221;, &#8220;what the design file says&#8221; and &#8220;what is in production&#8221; is exactly the kind of ambiguity that makes AI-generated interfaces drift.</p><div><hr></div><p>The cost argument also dissolves. </p><p>The article compares platform licensing fees to zero. </p><p>But the real cost of a design system isn&#8217;t tooling. It&#8217;s the misalignment it prevents - or fails to prevent.</p><p>That&#8217;s the denominator.</p><p>The cost of rework because design and engineering interpreted a component differently. Cost of multiple QA cycles because the implementation didn&#8217;t match the spec. Cost of onboarding time because the documentation was out of date. Cost of inconsistent experiences because there wasn&#8217;t an authoritative answer to &#8220;how does this pattern work in iOS?&#8221;</p><p>The total cost of a DIY aggregator includes engineering time to build it, maintain it, update it when APIs change, wrangle the AI writing pipeline, manually curate the output. And, if it&#8217;s being seen as an aggregator, it includes the organizational cost of having a documentation site that&#8217;s a collection of assets rather than a system of authority.</p><p>That cost might seem invisible. Then it accumulates.</p><div><hr></div><p>None of this is an argument against open-source tooling. Or AI-assisted documentation. Or against genuine improvements in what&#8217;s accessible and buildable. Those are real changes.</p><p>Infrastructure isn&#8217;t neutral.</p><p>The choice of what you build - aggregation layer or system of record - has downstream consequences for every discipline that depends on it. It shapes what designers trust. What engineers implement. What AI agents consume. What your product becomes.</p><p>If self-building (not merely aggregating) your design system platform is the right approach, and the costs and benefits are fully considered, great. 
That&#8217;s IBM Carbon, and it&#8217;s one of the best design system websites out there.</p><p>And if a documentation platform can make the building of the site easier, so you can focus on the creation of the actual design system, all the better.</p><p>Design systems are not a collection of Figma components and Storybook stories, with a documentation site sitting on top. The documentation is the surface expression of something much deeper and more considered. </p><p>Decisions made. Rationale captured. Authority established.</p><p>You can build an aggregator in two weeks. Building a system of record takes longer. Because you have to decide what&#8217;s actually true.</p><p>That&#8217;s the work.</p><div><hr></div><p><em>I&#8217;m not a neutral observer.</em></p><p><em>I&#8217;m VP of Product at <a href="https://www.knapsack.cloud/">Knapsack</a>. We build infrastructure that makes design systems a live system of record - connecting design, code, and documentation as a unified source of truth.</em></p><p><em>I have a direct interest in this question, and you should read with that in mind. But the argument stands regardless.</em></p><div><hr></div><h4>Further reading:</h4><ul><li><p>Rita, M. <em><a href="https://medium.com/all-about-design-systems/stop-paying-for-design-system-documentation-you-can-build-yourself-a10f1390987f">Stop paying for design system documentation you can build yourself.</a></em> All about design systems, Feb 2026.</p></li><li><p>Aizlewood, J. 
<em><a href="https://clearleft.com/thinking/design-">Design systems don&#8217;t start with components.</a></em> Clearleft, July 2017.</p></li></ul><p><em>Article photo by <a href="https://unsplash.com/@skillscouter?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Lewis Keegan</a> on <a href="https://unsplash.com/photos/text-XQaqV5qYcXg?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Code to canvas is bonkers]]></title><description><![CDATA[Figma's latest feature solves Figma's problem, not yours]]></description><link>https://www.robin-cannon.com/p/code-to-canvas-is-bonkers</link><guid isPermaLink="false">https://www.robin-cannon.com/p/code-to-canvas-is-bonkers</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Wed, 18 Feb 2026 16:02:27 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/01c34f06-ba20-4823-9447-8cfc15c0b64c_6240x4160.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dylan Field posted about a new Figma feature: the ability to bring work from Claude Code directly into Figma&#8217;s canvas. 
Capture a working UI - something that already works in production, staging, or localhost - and convert it into editable Figma frames.</p><p>Let&#8217;s break that down. You use an AI coding tool to generate a working interface. Real code, real interactions, and real data. Then you take the functioning reality and convert it <em>back</em> into a static abstraction, so that people can look at it together on that canvas.</p><p>You had a building. Now you have an architect&#8217;s drawing of that building.</p><p>Figma is telling you this is progress. It isn&#8217;t. It&#8217;s a solution to Figma&#8217;s business problem - the growing irrelevance of the static canvas - presented as though it&#8217;s a solution to yours.</p><div><hr></div><p>In November I wrote about the collapse of the traditional digital workflow. That comfortable fiction where design happens here, code happens over there, and a handoff ritual connects the two. </p><p>AI is dissolving that middle layer. Structured design systems, and wider product context, can be interpreted into working interfaces. At that point, the abstraction layer between design intent and delivery reality becomes unnecessary. Design isn&#8217;t a stage before delivery. Design <em>is</em> delivery.</p><p>It&#8217;s a defining shift in how products get built. </p><p>Jonny Burch made a complementary argument last week, in his piece &#8220;Life After Figma is Coming.&#8221; His framing encompasses the tooling ecosystem: as code becomes the source of truth, design tools become interfaces on code, not the other way around. Code is the only correct source of truth - open, shared, with common standards. The canvas doesn&#8217;t disappear, but it has to become a view into reality and not a substitute for it.</p><p>These arguments point in the same direction. The canvas is no longer a staging ground for ideas that exist outside the system. 
It needs to be a live interface for ideas that exist within it.</p><p>Claude Code to Figma points in the opposite direction entirely.</p><div><hr></div><p>Field&#8217;s own framing is revealing. He describes working code as &#8220;tunnel vision&#8221;, and says this feature will help you &#8220;escape&#8221; it. </p><blockquote><p>&#8230;the design canvas is better at navigating lots of possibilities than prompting in an IDE.</p></blockquote><p>The Figma blog announcing the Claude Code to Figma feature elaborates further. Solo code exploration is a &#8220;single-player environment&#8221;.</p><blockquote><p>&#8230;that speed of solo exploration can become a constraint.</p></blockquote><p>And as a contrast, they describe the canvas as a &#8220;shared space&#8221; where &#8220;the conversation changes and new possibilities open up.&#8221;</p><p>That&#8217;s a neat rhetorical move.</p><p>It takes a genuine limitation of current code-first workflows - collaboration and visual comparison are harder in a terminal - and reframes it so that the <em>working artifact</em> is the problem, and the <em>abstraction</em> is the solution. </p><p>Those are not the same thing. Needing better collaboration on working artifacts isn&#8217;t the same as needing to convert the artifacts into a different, lower-fidelity format in order to collaborate at all.</p><p>The answer to &#8220;how do we collaborate on code?&#8221; is not &#8220;convert it to not-code.&#8221;</p><p>The better solution is better collaboration tooling for code. Live preview sharing. Annotation layers on running applications. Structured feedback on deployed states. As Burch points out, a tooling ecosystem is already exploding in size - design interfaces that sit on top of production code, development environments that integrate design thinking. The pieces are coming together.</p><p>When you capture something from Claude Code and bring it into Figma, you don&#8217;t add information. You remove it. 
You remove interactions, real data, actual behavior. You replace it with a picture of what it looked like at the moment of capture. </p><p>That&#8217;s trading truth for convenience, and claiming it&#8217;s a workflow improvement.</p><p>It&#8217;s an absurd proposition.</p><p>You have a working thing. You convert it into a non-working representation. You collaborate on the representation. At some point, presumably, someone has to make it into a working thing again. That&#8217;s not a workflow, that&#8217;s a detour.</p><div><hr></div><p>This is not just a feature decision. It&#8217;s a strategic posture that serves Figma&#8217;s interests while actively working against the interests of the people using it.</p><p>Figma&#8217;s business depends on it being the place where product decisions happen. Their value proposition depends on the collaborative canvas being the hub of product development. The more decisions happen in code-first environments - engineers and designers collaborating directly on running applications - the more Figma risks becoming peripheral.</p><p>Figma is trying to maintain its gravitational pull. Every feature needs to bring work <em>into</em> Figma, not enable work to happen <em>outside</em> it. </p><p>Field says this directly:</p><blockquote><p>Whether product building begins in a terminal, a prompt box, a visual UI or a hand-drawn sketch, we want Figma to be the place where it all comes together.</p></blockquote><p>That&#8217;s not a workflow insight. That&#8217;s a business objective.</p><p>Claude Code to Figma is not about improving your workflow. It&#8217;s about preventing your workflow from leaving Figma behind.</p><p>And this is the part of the framing that I find genuinely dishonest. Figma&#8217;s blog post presents this as a way to unlock creativity and open up collaboration. Something to liberate teams from the constraints of solo code exploration. 
But the people who are using Claude Code to build working interfaces <strong>aren&#8217;t constrained</strong>. They are <strong>ahead</strong>. They have the real thing. The feature asks them to go backwards. Sacrifice fidelity, leave something functional, and return to an abstraction layer. Because Figma needs them to.</p><p>That&#8217;s a solution for Figma. It&#8217;s not a solution for the people building products.</p><p>The timing might reinforce the point. Figma went public, and the stock has dropped a lot since its IPO. It&#8217;s a legitimate question whether AI-native tools make the traditional design canvas less central to digital product development. So product announcements also need to be a message to investors: <em>the canvas is still essential</em>. </p><p>Code-to-canvas is a defensive move dressed up as innovation.</p><div><hr></div><p>The honest version of this feature announcement might say: &#8220;More product work is starting in code. We need to pull that back into our ecosystem so we can remain relevant.&#8221; That&#8217;s a legitimate business challenge. And I have sympathy for the difficulty of Figma&#8217;s position. They built something genuinely great, and the ground is shifting beneath it.</p><p>But don&#8217;t tell me it&#8217;s for my benefit. The loss of fidelity isn&#8217;t &#8220;opening up new possibilities.&#8221; A functioning prototype being flattened into a picture isn&#8217;t &#8220;changing the conversation&#8221;. The conversation was already happening - and in a richer, more honest medium. This feature interrupts the conversation to bring it back to Figma.</p><p>It&#8217;s not about fewer canvases. It&#8217;s about more honest ones. More real. Connected to live systems, reflecting real state. Enabling collaboration on the actual artifact rather than a simulation. The future of the canvas is as an interface to code, not a destination to convert code back to.</p><p>Code to canvas is the wrong direction. The future is canvas as code. 
Features like this are going to look increasingly strange as the rest of the industry figures that out.</p><div><hr></div><p><em>I&#8217;m not a neutral observer.</em></p><p><em>I&#8217;m VP of Product at Knapsack. We&#8217;re building in the place where structured design systems and product context meet AI-driven delivery. </em></p><p><em>But that proximity is also why this code-to-canvas direction feels so absurd to me. When you&#8217;re working on systems that make design directly executable, watching someone propose converting execution back into abstraction feels like someone printing out a Google Doc so that they can fax it.</em></p><p><em>I'm presenting an expanded version of these ideas at the <a href="https://developersummit.com/session/the-digital-workflow-is-obsolete-how-to-survive-the-end-of-the-canvas">Great International Developer Summit</a> in April 2026.</em></p><div><hr></div><h4>Further reading:</h4><ul><li><p><em><a href="https://www.robin-cannon.com/p/the-digital-workflow-is-obsolete">The digital workflow is obsolete</a></em> - the end of abstraction, and the start of design as delivery.</p></li><li><p>Burch, J. <em><a href="https://jonnyburch.com/life-after-figma/">Life after Figma is coming (and it will be glorious)</a></em>. jonnyburch.com, Feb 2026.</p></li><li><p>Seiz, G. &amp; Kern, A. <em><a href="https://www.figma.com/blog/introducing-claude-code-to-figma/">From Claude Code to Figma: Turning production code into editable Figma designs</a></em>. Shortcut - Figma&#8217;s editorial newsletter, Feb 2026.</p></li><li><p>Field, D. <em><a href="https://www.linkedin.com/pulse/claude-code-figma-design-dylan-field-e5ilc/?trackingId=aca2hHS1Q2OmpfNYVAIeWA%3D%3D">Claude Code to Figma Design</a></em>. LinkedIn, Feb 2026.</p></li><li><p>Flowers, E. <em><a href="https://eflowers.substack.com/p/if-you-ask-a-designer-what-they-want">If you ask a designer what they want, they will say faster horses</a></em>. 
Zero Vector, Feb 2026.</p></li></ul><p><em>Article photo by <a href="https://unsplash.com/@version2beta?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Rob Martin</a> on <a href="https://unsplash.com/photos/red-and-white-stop-sign-tte1gbfGEeY?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Execution is cheap. Coordination is not.]]></title><description><![CDATA[The case for the control plane.]]></description><link>https://www.robin-cannon.com/p/execution-is-cheap-coordination-is</link><guid isPermaLink="false">https://www.robin-cannon.com/p/execution-is-cheap-coordination-is</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 17 Feb 2026 16:02:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c4e6eb3e-633e-42c6-b4e6-e31845b41881_3936x2624.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Execution used to be the hardest part.</p><p>Writing code. Building components. Translating product strategy into design intent into something that ships. 
It&#8217;s where the time went, where the budget went, and where the frustration lived. We organized entire teams and workflows around the bottleneck of <em>making things</em>.</p><p>But the bottleneck is dissolving. AI tools can generate a login form in thirty seconds. They can scaffold a page layout, draft a component from a simple text description. The raw act of production is being commoditized. If we&#8217;re only measuring speed from prompt to output, we&#8217;ve never been faster!</p><p>Speed&#8230;is not the same thing as progress.</p><p>I&#8217;ve talked about this before. If you use AI to build product right now, you know the story. The tool generates something very quickly. You spend time fixing it. Wrong components, bad spacing, the wrong patterns, incorrect assumptions about how your product works. You might throw it away and write it yourself. </p><p>The generation was really fast. The <em>correction</em> then ate up all those savings.</p><p>This isn&#8217;t AI&#8217;s fault. The models are getting remarkably capable. </p><p>It&#8217;s a failure of context. The AI simply doesn&#8217;t know what &#8220;right&#8221; looks like for your team, or for your company. What&#8217;s your product context? What standards should it apply? </p><p>Nobody told the AI the answers. At least, not in a way that&#8217;s structured, coherent, and persistent enough for it to use every time it generates something.</p><p>Our enormous investments are making execution cheaper. And not enough people are investing in making sure that cheap execution is pointed in the right direction.</p><h3>There&#8217;s already a name for this</h3><p>Infrastructure engineers have already solved this problem. 
In a system like Kubernetes, there&#8217;s a clean split between:</p><ul><li><p><strong>The data plane</strong> (the thing that does the work, moves traffic, runs containers)</p></li><li><p><strong>The control plane</strong> (the configuration, policies, and health checks that govern execution)</p></li></ul><p>Hell, we can even apply this to actual airplanes. Air traffic control is an intuitive version of this. ATC isn&#8217;t flying any planes. It&#8217;s telling the pilots where to go, what to avoid, how to land safely. The planes are carrying the passengers. ATC has the rules and system-wide awareness. An airport needs both to run successfully.</p><p>AI tools are a powerful <strong>data plane</strong>. They carry out execution very effectively. But organizations are running them without a control plane. Without any kind of structured, persistent layer of context. Something that tells the AI how <em>this team</em> is supposed to build <em>this product</em>.</p><p>Our generation requests start from near-zero. Output is technically functional but organizationally wrong. The 70-90% first-generation rejection rate isn&#8217;t necessarily a model quality problem. </p><p>It&#8217;s missing infrastructure.</p><h3>What a control plane actually does</h3><p>What would a control plane for digital product delivery look like?</p><h4>1. Gather what&#8217;s true about what you build</h4><p>Design tokens, component APIs, coding standards, accessibility rules, brand guidelines. Institutional knowledge that probably lies scattered across repos, Figma files, Confluence pages, and tribal knowledge across teams. A control plane assembles those into a single layer that an AI tool can query.</p><h4>2. Know which sources to trust</h4><p>Your docs say that button should use 8px of padding. The shipped code uses 12px. Which one is right? Maybe the code, because documentation drifted and the implementation is a better reflection of reality. 
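</p><p>As a very rough sketch of that ranking idea - the source names, ordering, and values here are purely hypothetical, not any particular product&#8217;s API - the core of it is just an ordered list of sources and a rule for which one wins:</p>

```python
# Hypothetical sketch of source-of-truth ranking for one design token.
# The source names, authority order, and values are illustrative only.

# Ordered from most to least authoritative for this (imaginary) team:
# shipped code beats the design file, which beats stale documentation.
AUTHORITY = ["production_code", "design_file", "documentation"]

def resolve(token, observations):
    """Return the value reported by the most authoritative source
    that actually has an opinion about this token."""
    for source in AUTHORITY:
        if source in observations:
            return observations[source], source
    raise KeyError(f"no source defines {token}")

# The docs say 8px, but the shipped code says 12px:
value, source = resolve("button.padding", {
    "documentation": "8px",
    "production_code": "12px",
})
print(value, source)  # -> 12px production_code
```

<p>A real control plane would weight, audit, and version these judgments rather than hard-code them, but the principle is the same: which source wins becomes an explicit, queryable decision instead of tribal knowledge.</p><p>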
A control plane can apply ranking. Which sources are authoritative. Which might be more aspirational. Which are fallbacks.</p><h4>3. Measure whether the output is any good</h4><p>Not &#8220;did the AI generate something?&#8221; But &#8220;did the generation meet your standards?&#8221;</p><p>How many regeneration cycles did this take? What was the token cost? Does it pass linting? </p><p>Without measurement, AI-assisted development is governance by intuition. If you have measurement, you can actually govern and improve.</p><h4>4. Connect to the tools your team already uses</h4><p>A control plane isn&#8217;t a new tool to adopt. </p><p>It feeds context <em>into the tools you already have</em>. Claude, Cursor, Gemini, whatever comes next. Protocols like MCP make this control layer tool-agnostic. You keep your context layer - your control. Your execution layer becomes interchangeable.</p><h3>Bigger than your components</h3><p>If you&#8217;re a design systems person, maybe you hear &#8220;control plane&#8221; and think <em>my component library is AI food</em>. That&#8217;s true, but only part of it.</p><p>A real control plane touches everything that shapes how a product ships. API contracts. Content strategy. Performance budgets. Security policy. Compliance and regulatory requirements. Localization. Roadmap constraints.</p><p>It&#8217;s the institutional memory of &#8220;how we do things here&#8221; that no document captures, and no new hire absorbs until at least a few months into the job. But here, we codify it for AI.</p><p>The difference really matters. Narrow integrations answer &#8220;how many components do we have?&#8221; A control plane answers &#8220;how does the organization deliver product?&#8221; Different questions, and the second one is where there&#8217;s real leverage.</p><h3>Infrastructure solutions for infrastructure problems</h3><p>Most organizations already possess most of the raw materials they need. 
Those coded components, documented standards, guidance, and repos full of already-shipped product. </p><p>The gap isn&#8217;t source material. It&#8217;s a gap in aggregation, ranking, and feedback loops that can turn all of this scattered knowledge into usable intelligence.</p><p>Execution is going to keep getting cheaper. That&#8217;s a trajectory I don&#8217;t see reversing. </p><p>The organizations that invest in coordination infrastructure - including the kinds of systems we build at <a href="https://www.knapsack.cloud/">Knapsack</a> - are the ones who&#8217;ll actually ship. </p><p>The concept isn&#8217;t unique to us. But every team using AI to build product is facing the same challenge - whether they&#8217;ve named it yet or not.</p><p>The machines work. They need to know what to build.</p><div><hr></div><h4>Further reading</h4><ul><li><p>Walker, J. <em><a href="https://spacelift.io/blog/kubernetes-control-plane">Kubernetes Control Plane: What It Is &amp; How It Works</a>. </em>Spacelift.io Blog, Jan 2025.</p></li><li><p><em><a href="https://www.digitalinformationworld.com/2026/01/how-much-code-is-ai-writing.html">AI-Assisted Coding Reaches 29% of New US Software Code</a></em>. 
Digital Information World, Jan 2026.</p></li></ul><p><em>Article photo by <a href="https://unsplash.com/@chuttersnap?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">CHUTTERSNAP</a> on <a href="https://unsplash.com/photos/aerial-photo-of-pile-of-enclose-trailer-kyCNGGKCvyw?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[What are we actually buying when we buy AI?]]></title><description><![CDATA[AI reshapes labor, risk, and the cost of decision-making.]]></description><link>https://www.robin-cannon.com/p/what-are-we-actually-buying-when</link><guid isPermaLink="false">https://www.robin-cannon.com/p/what-are-we-actually-buying-when</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 27 Jan 2026 16:02:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8cfe5e13-d543-4f3d-8dbf-ddb8f1067f9c_2832x1593.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You might have been involved in a few AI purchasing conversations over the past year or two. They often follow a familiar script.</p><p>Faster delivery. 
Fewer people. Higher output. A competitive advantage.</p><p>There are impressive demos. Confident pricing decks. And perhaps one or two clear case studies.</p><p>Then the AI systems arrive.</p><p>Suddenly there&#8217;s a lot more process than expected. More data work. More governance work. There&#8217;s much more human oversight. An increasing number of meetings about &#8220;how to use it responsibly&#8221;. There are productivity gains in pockets, but they&#8217;re not consistent. Those headcount reductions don&#8217;t quite materialize. And your operational complexity starts to increase.</p><p>This doesn&#8217;t mean the technology doesn&#8217;t work. But it does suggest that we might have been unclear about what we were buying.</p><div><hr></div><h3>The story we tell ourselves</h3><p>When organizations are talking about buying AI, they&#8217;re probably thinking of it in the same way as other improved software capabilities.</p><ul><li><p>Intelligence</p></li><li><p>Automation</p></li><li><p>Creativity</p></li><li><p>Replacement for labor</p></li></ul><p>Those are all fairly abstract concepts, though. They&#8217;re as much a marketing category as they are an operational one.</p><p>The value organizations extract from AI tends to cluster around different - less glamorous - things. Things that are more structural, and far more dependent on context.</p><div><hr></div><h3>What organizations are really buying</h3><h4>1. Velocity</h4><p>The most visible benefit, and almost definitely the easiest one to sell.</p><p>AI is quick to produce outputs. It can draft, summarize, code, deliver UI variants, give you some analysis. And this is great for removing friction in the early stages of work.</p><p>But this is only going to be valuable if your surrounding system is able to absorb it.</p><p>If it can&#8217;t - and that&#8217;s the case in many organizations - that early speed is just moving a bottleneck further downstream. 
Review, integration, governance, legal, QA, and product coherence all suddenly become more complicated.</p><p>Those extra complications may well cancel out all of that early-stage acceleration. Velocity without coordination doesn&#8217;t create speed, it creates noise.</p><p>Speed is a feature. It probably isn&#8217;t the product.</p><h4>2. Consistency</h4><p>A quiet, unglamorous workhorse.</p><p>AI systems can produce uniform structure. They can maintain a tone, a formatting style. AI can follow rules. And that can be incredibly beneficial in larger organizations where inconsistency can be a drag on productivity.</p><p>Uniformity of style, patterns, or language. This doesn&#8217;t always come across in a demo. But it&#8217;s a really durable value.</p><p>It&#8217;s also a value that demonstrates the vital importance of context. Consistency can only exist relative to some set of shared decisions about what &#8220;correct&#8221; looks like.</p><h4>3. Reshaping (not eliminating) cost</h4><p>AI doesn&#8217;t necessarily remove labor. It relocates it.</p><p>That work moves into:</p><ul><li><p>Data preparation</p></li><li><p>Labeling and taxonomy design</p></li><li><p>Prompt engineering</p></li><li><p>Evaluation</p></li><li><p>Policy definitions</p></li><li><p>Monitoring and exception handling</p></li></ul><p>That&#8217;s a significant shift in headcount, but not a disappearance.</p><p>The cost changes form before it changes size. </p><p>An organization that expects labor to vanish is likely to be disappointed. An organization that understands and expects to repurpose labor will be less surprised.</p><h4>4. Optionality</h4><p>Think of this as the hedge against being left behind.</p><p>Executives buy AI for the same kinds of reasons they might invest in cloud solutions before there&#8217;s a clear sense of what they&#8217;re going to build: it avoids being trapped later.</p><p>This has value, despite the vague use cases. 
But it&#8217;s not the same as buying finished capacity.</p><div><hr></div><h3>A useful memory: IBM Watson</h3><p>We&#8217;ve been here before.</p><p>In 2011, IBM&#8217;s Watson defeated human champions on <em>Jeopardy!</em> That created a powerful narrative about the <strong>arrival</strong> of general-purpose AI (or &#8220;cognitive computing&#8221; as IBM tended to say). </p><p>Many organizations rushed to adopt it - believing that they were buying intelligence - a system they could point at a domain and have it reason productively. </p><p>What they actually bought was something else:</p><ul><li><p>Large-scale data ingestion</p></li><li><p>Domain-specific training</p></li><li><p>Ontology construction</p></li><li><p>Labeling efforts</p></li><li><p>Ongoing tuning</p></li><li><p>Critically, the need for long-term consulting engagements</p></li></ul><p>Companies might spend millions of dollars a year teaching Watson how their world worked. </p><p>That&#8217;s not necessarily a failure of the technology. But it&#8217;s a very clear mismatch in expectations. </p><p>Watson didn&#8217;t fail at being intelligent. It succeeded as a machine for encoding context - slowly, expensively, and with constant human involvement.</p><p>The consulting bill ended up not as a side effect, but as the product.</p><div><hr></div><h3>Same pattern, better demos</h3><p>OK, today&#8217;s models are more flexible. We&#8217;re more than a decade further on in creating friendlier interfaces. 
The generality of today&#8217;s AI is much more real.</p><p>But there&#8217;s an underlying dynamic that&#8217;s still the same.</p><p>AI systems still require:</p><ul><li><p>Explicit definitions of acceptable behavior</p></li><li><p>Structured representations of domain rules</p></li><li><p>Boundaries set for them</p></li><li><p>Exception handling</p></li><li><p>Ongoing evaluation</p></li><li><p>Continuous alignment with the realities of an evolving organization</p></li></ul><p>The main difference today is that the demos are better at obscuring this dependency for longer.</p><p>Language models feel general, even friendly. But there is a brittleness behind the fluency. Context gaps don&#8217;t show up until systems are embedded in production workflows, compliance environments, or customer-facing surfaces.</p><p>And then all that familiar work begins.</p><div><hr></div><h3>Vendors benefit from this ambiguity</h3><p>This is probably an uncomfortable truth.</p><p>The vaguer a buyer is about what they&#8217;re purchasing, the more the vendor benefits.</p><p>Ambiguity allows:</p><ul><li><p>Performance to be judged on impressions, not data</p></li><li><p>Diffusion of responsibility</p></li><li><p>Elastic timelines</p></li><li><p>Reframing costs as &#8220;enablement&#8221; rather than necessary maintenance</p></li><li><p>Recasting failure as an adoption challenge</p></li></ul><p>&#8220;AI&#8221; is a particularly powerful label, because it&#8217;s a bundle of different values wrapped up in a single term. One word being used to describe speed, consistency, experimentation, automation, and strategic positioning.</p><p>If the buyer&#8217;s mental model is less precise, all those outcomes remain more easily negotiable.</p><p>That&#8217;s not malice, but it is structural. 
And to some extent ambiguity might also benefit the buyer.</p><p>Organizations should be skeptical of any pitch that cannot answer clearly:</p><p><em>What are we actually paying for?</em></p><div><hr></div><h3>Context is the real budget line</h3><p>Successful AI deployments are not determined by model quality alone.</p><p>The hidden constant is context creation and maintenance.</p><ul><li><p>Formalizing decisions that had been informal</p></li><li><p>Documenting all the exceptions</p></li><li><p>Defining boundaries</p></li><li><p>Encoding organizational preferences</p></li><li><p>Stabilizing coherent vocabularies</p></li><li><p>Agreeing on what &#8220;good&#8221; means</p></li><li><p>Maintaining these definitions as the underlying realities change</p></li></ul><p>And this is slow, organizational, arguably very <strong>human</strong> work. It doesn&#8217;t scale in the same ways. It doesn&#8217;t easily show up in benchmarks.</p><p>But without it, AI systems will remain very good at producing fast output that&#8217;s locally plausible but globally incoherent.</p><div><hr></div><h3>Better buying questions</h3><p>Don&#8217;t ask&#8230;</p><blockquote><p>What can this model do?</p></blockquote><p>Ask&#8230;</p><ul><li><p>What decisions are we, as an organization, formalizing?</p></li><li><p>What labor is moving somewhere else, rather than disappearing?</p></li><li><p>Are we outsourcing our risk to our vendor? 
How?</p></li><li><p>What context do we need to maintain indefinitely?</p></li><li><p>What parts of our organization need to change to make this work?</p></li></ul><p>Without answering those questions, it&#8217;s difficult to know if an AI system is succeeding.</p><p>The technology we&#8217;re looking at is very real.</p><p>But so is the work needed to make it actually useful.</p><p>If you don&#8217;t know what you&#8217;re buying then you can&#8217;t possibly know whether it&#8217;s working!</p><div><hr></div><h4>Further reading:</h4><ul><li><p><em><a href="https://www.advisory.com/daily-briefing/2021/07/21/ibm-watson">10 years ago, IBM&#8217;s Watson threatened to disrupt healthcare.</a> What happened? </em>Advisory Board Daily Briefing, July 2021</p></li></ul><p><em>Article cover image by <a href="https://unsplash.com/@alexshuperart">Alex Shuper</a> on <a href="https://unsplash.com/photos/calm-body-of-water-under-white-sky-ivV8zNrcMgY?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyTexthttps://unsplash.com/photos/a-robot-holding-a-dollar-sign-in-its-hand-1gf8BVYmy90">Unsplash</a></em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[A brief step 
back]]></title><description><![CDATA[Taking a few moments to deal with some feelings]]></description><link>https://www.robin-cannon.com/p/a-brief-step-back</link><guid isPermaLink="false">https://www.robin-cannon.com/p/a-brief-step-back</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 20 Jan 2026 16:02:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d330f246-bafe-40bc-9c0b-1257a5200b67_6000x3368.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A shorter note than usual.</p><p>Not so much because there isn&#8217;t anything to say. But more because other things in life make saying less the best thing to do.</p><p>I&#8217;m dealing with some personal grief right now. Nothing I&#8217;m looking to unpack publicly, but deep and unexpected. It changes how my days feel at the moment - how long or short things take, whether I&#8217;m thinking clearly or distracted, my energy for creativity.</p><p>Work culture - maybe especially in the tech industry - sometimes treats emotional disruption as a bit of an inconvenience. We&#8217;re all running around doing so much that we need to &#8220;manage around it&#8221;, or suck it up to keep our output uninterrupted. </p><p>We don&#8217;t have many systems that are built around taking a pause.</p><p>Grief isn&#8217;t a bug in a system though. It&#8217;s a natural human state, and it needs its own time. Just as we need time to be productive, we also need time to be reflective.</p><p>I&#8217;m taking a little bit of that time. I&#8217;m lucky that I work for a caring company that appreciates that need. And there&#8217;s still a bit of me that feels guilty, that I might be &#8220;letting the team down&#8221;. 
</p><p>Anyway, I made this week&#8217;s post deliberately short - and in part that&#8217;s supposed to be a reminder that it&#8217;s OK to slow down and tend to all parts of life, not just the deliverables!</p><p>If you&#8217;re reading this and carrying something heavy yourself - you don&#8217;t need to justify the weight of it. It doesn&#8217;t need to be turned into anything other than what it is. I hope that you have the space to stop, briefly, and have that pause be the most functional thing you can do.</p><div><hr></div><p><em>Article cover image by <a href="https://unsplash.com/@malidesha?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">mali desha</a> on <a href="https://unsplash.com/photos/calm-body-of-water-under-white-sky-ivV8zNrcMgY?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Design systems are over. 
Product context is the work.]]></title><description><![CDATA[Design systems aren't obsolete - but their scope no longer matches the work.]]></description><link>https://www.robin-cannon.com/p/design-systems-are-over-product-context</link><guid isPermaLink="false">https://www.robin-cannon.com/p/design-systems-are-over-product-context</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 13 Jan 2026 16:02:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b8dbe5c0-95e1-44b6-84ba-fd9b4b65150a_1536x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Design systems are over - at least as far as we&#8217;ve learned to define them.</p><p>And the name has always been kinda iffy. It undersells the work involved, and muddies expectations about what the output is. The foundations are still essential, but we&#8217;re wrapping them in a scope that doesn&#8217;t match their role - especially as AI reshapes how we build products.</p><p>The containing terminology we&#8217;ve used to describe our work no longer fits what the work has become.</p><h3>It&#8217;s always been a bad name</h3><p>It&#8217;s been a fairly consistent undercurrent in conversations I&#8217;ve had for years. &#8220;Design systems&#8221; is not a great term.</p><p>We place &#8220;design&#8221; front and center, even though the primary consumers are often engineers. It implies visual polish rather than deep production infrastructure. It suggests a static system, not a set of evolving, critical decisions. And that doesn&#8217;t reflect all the complex labor involved - and the value delivered.</p><p>Teams have learned to work around this. Translate and explain in order to justify. </p><p>We lived with it when the system&#8217;s job was primarily standardizing UI. It&#8217;s increasingly becoming a liability when the outputs aren&#8217;t mediated solely by humans. 
Reviews become less exhaustive; quality assurance becomes a partnership with AI.</p><h3>Design systems solved yesterday&#8217;s scaling problem</h3><p>Design systems emerged to solve very real issues:</p><ul><li><p>Fragmented interfaces</p></li><li><p>Repeated implementation work</p></li><li><p>Fractured user experiences with inconsistent behavior</p></li><li><p>Design and engineering drifting apart</p></li></ul><p>We provided a shared foundation: components, tokens, patterns, documentation. We built upon (or created) a shared brand language that teams could rely on as their organizations scaled.</p><p>That work still matters - more than ever. But the environment those systems operate in is changing, and our definitions aren&#8217;t keeping up.</p><h3>AI doesn&#8217;t consume components - it consumes context</h3><p>We&#8217;re starting to talk about AI using design systems as inputs: feed the model components, tokens, guidelines, and generate output.</p><p>That&#8217;s incomplete framing.</p><p>AI doesn&#8217;t effectively recognize and implement components in isolation. What AI consumes - and amplifies - is context.</p><ul><li><p>Which decisions are encouraged, and which are merely permitted</p></li><li><p>Where the system is strict and where flexibility is encouraged</p></li><li><p>How to handle accessibility tradeoffs</p></li><li><p>Which interaction patterns are preferred - and why</p></li><li><p>How tone and voice change in different moments</p></li><li><p>Historical exceptions and their justifications</p></li></ul><p>This isn&#8217;t context that is cleanly surfaced in a component library. 
It lives all around it - in our documentation, in related guidelines, Slack discussions, and decisions that we forgot to codify.</p><p>People tolerate ambiguity better than machines do, so this was survivable while we were the only bottleneck.</p><p>But now, with machines producing at scale, that doesn&#8217;t work any more.</p><h3>The risk of accelerated drift</h3><p>If we don&#8217;t have strong product context, AI creates divergence rather than coherence.</p><ul><li><p>Prompts are local decisions</p></li><li><p>Outputs are reasonable in isolation</p></li><li><p>Product drift at scale quickly moves from subtle to structural</p></li></ul><p>We can often identify this by instinct. AI-generated UI feels superficially correct, but we can sense the wrongness. AI follows the easily visible rules and misses the invisible constraints.</p><p>Design systems aren&#8217;t about how things look - they&#8217;re about how we propagate our decisions.</p><h3>Product context is broader than we&#8217;ve allowed systems to be</h3><p>If a design system is the central foundation, product context is the structure that&#8217;s built upon and around it.</p><p>Product context includes:</p><ul><li><p>Visual and technical foundations (tokens, components, layouts)</p></li><li><p>Interaction models and behavioral patterns</p></li><li><p>Our content principles, tone and voice, language constraints</p></li><li><p>Accessibility decisions and requirements</p></li><li><p>Governance and review expectations</p></li><li><p>Regulatory boundaries and corporate risk tolerance</p></li><li><p>Historical precedent - especially about why exceptions exist</p></li></ul><p>This context is still usually pretty fragmented. Owned by different teams. Sometimes it&#8217;s a universal reference external to the company. 
Documented unevenly, and enforced socially.</p><p>AI reduces the margin for error that such fragmentation brings.</p><h3>A role shift, not a repudiation</h3><p>So this is where the work begins to change.</p><p>In AI-driven digital delivery pipelines, the most valuable contribution isn&#8217;t another component (I&#8217;d argue that&#8217;s been the case even before AI acceleration!). It&#8217;s making implicit context explicit and operational.</p><p>And that reframes the roles and responsibilities of design system teams.</p><ul><li><p>Maintain intent, not artifacts</p></li><li><p>Define boundaries, not enforce consistency</p></li><li><p>Usable, machine-readable context, as much as human documentation</p></li></ul><p>It&#8217;s less about expanding control than about expanding clarity.</p><p>AI needs better constraints, not more pixels.</p><h3>The scope failed, not the name</h3><p>My title is sharp, because I want to drive to a pragmatic conclusion.</p><p>Design systems aren&#8217;t obsolete. They&#8217;re foundational. But you create a foundation to support something larger.</p><p>We can&#8217;t continue to conflate design systems with component libraries. If we do, we&#8217;ll underinvest in the context that AI needs to strengthen our products. Instead, it might erode them.</p><p>Our work has grown. Our responsibility has expanded.</p><p>The opportunity is bigger than the name we&#8217;ve been using.</p><p>Product context is the work.</p><div><hr></div><h4>Further reading:</h4><ul><li><p>Opperman, L. <em><a href="https://uxdesign.cc/design-systems-vs-ai-will-the-robots-take-over-1a56be62a74e">Design Systems vs. AI: will the robots take over?</a> </em>Medium, Jan 2024.</p></li><li><p>Teich, D. 
<em><a href="https://www.forbes.com/sites/davidteich/2020/10/29/the-alignment-problem-linking-machine-learning-and-human-values/">&#8220;The Alignment Problem&#8221;, Linking Machine Learning And Human Values</a> </em>Forbes, Oct 2020.</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Why I chose to join Knapsack]]></title><description><![CDATA[On the limits of design system expertise - and what comes after]]></description><link>https://www.robin-cannon.com/p/why-i-chose-to-join-knapsack</link><guid isPermaLink="false">https://www.robin-cannon.com/p/why-i-chose-to-join-knapsack</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 06 Jan 2026 16:02:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8fb76c29-d50b-4b0a-9867-602cb6733e09_1036x440.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As the calendar moves us into 2026, I&#8217;m approaching a year in my role as Head of Product at <a href="https://www.knapsack.cloud/">Knapsack</a>. It feels like it&#8217;s a good moment to step back and reflect. 
Not on outcomes and metrics, but on my intent - why I made this move in the first place, and what I believe needed to change.</p><p>For more than a decade, I&#8217;ve worked on design systems at scale.</p><p>Large organizations, building global products. Leading systems that supported thousands of contributors and potentially millions of users. And systems that - by most external measures - would be considered very successful.</p><p>Despite this, at a certain point, I hit a ceiling.</p><p>Not because there wasn&#8217;t anything left to learn. Design systems are always evolving and always interesting. But there&#8217;s a quiet glass ceiling in this space - one that&#8217;s rarely acknowledged.</p><p>Design systems are critical infrastructure. But the people who build and run them can often get stuck in a professional limbo: deeply technical, strategic, and yet not seen as design or product leaders in the traditional sense. If you don&#8217;t come from a formal design background, and maybe even if you do, there&#8217;s a limit to how far &#8220;design system expertise&#8221; alone will carry you - no matter the scale or impact of your work.</p><p>It&#8217;s not a talent ceiling. It&#8217;s about how organizations understand design systems.</p><p>Enterprises still treat design systems as design artifacts: component libraries, documentation sites, visual standards. All important - but really those are just the supporting actors. When a system works well, it fades into the background. No drama. Fewer problems.</p><p>And that invisibility is both the point, and a trap.</p><p>In reality, design systems aren&#8217;t about components. They exist to facilitate production.</p><p>That means reducing decision fatigue. They encode our standards, and lower risk. They let organizations scale digital work without scaling chaos at the same rate! 
They sit at the intersection of design, engineering, governance, accessibility, brand, and business constraints.</p><p>Design systems are better thought of as <strong>production infrastructure</strong>. But that isn&#8217;t how we talk about them.</p><p>This perspective didn&#8217;t emerge in isolation.</p><p>While I was building a design system consultancy offering at IBM iX, we partnered with Knapsack on a client project. It was an opportunity to view - and stress-test - their thinking inside the realities of complex enterprise engagements. I spoke at <a href="https://www.knapsack.cloud/patterns">their events</a>. I appeared on the <em><a href="https://www.designsystemspodcast.com/">Design Systems Podcast</a></em>. I&#8217;ve watched how the company has shown up - not just what it was shipping, but how it spoke about the work.</p><p>There was a clarity to their point of view: that design systems weren&#8217;t just a design concern, and that tooling alone wasn&#8217;t a solution to underlying production problems. They understood that there was an organizational gravity at play, where systems broke down in practice.</p><p>And most importantly, I trusted their leadership.</p><p>Not because they were promising easy wins or tidy narratives. But because Chris Strahl, Evan Lovely - everyone in leadership at the company - were honest about the complexity of the problem, and committed to engaging with it.</p><p>My personal inflection point coincided with a broader inflection point for the industry.</p><p>AI doesn&#8217;t just make existing workflows faster. It destabilizes them. It undermines the idea of design-to-developer handoff, or that systems are consumed only by humans. If we can increasingly automate production, the question is broader: &#8220;What are we encoding into the machinery of production?&#8221;</p><p>If design systems remain static artifacts, they&#8217;re going to become irrelevant. 
If they evolve into a broader product context - structured, computable sources of truth - they can become exponentially more powerful.</p><p>Knapsack understands that. Not as a feature roadmap, but as a worldview.</p><p>They - and now it&#8217;s we - don&#8217;t see design systems as a destination. We see them as inputs. Raw material for production. Something that will directly power how products are built - whether by humans, by machines, or - most likely - by both together.</p><p>That distinction matters.</p><p>I could have stayed in senior design system leadership roles inside large organizations. Those paths were open. It would have been comfortable and understandable.</p><p>But I can see where that road ends.</p><p>I&#8217;m looking toward my second year at Knapsack, and this feels like a reaffirmation rather than a retrospective. Reminding myself - and trying to explain - why I chose leverage over comfort, and long-term change over incremental optimization.</p><p>I didn&#8217;t join Knapsack to leave the enterprise world behind. I&#8217;m taking what I learned there and applying it at a point of real leverage. This work is shaping the future of digital product delivery, not by accelerating yesterday&#8217;s workflows, but by challenging the assumptions behind them.</p><p>Joining Knapsack was the evolution of a relationship already built on trust, shared understanding, and a belief that design systems can and must evolve into something far more than they&#8217;ve been before.</p><p>I didn&#8217;t join Knapsack to make design systems better. 
I joined to redefine how digital products get made.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[How to build the thing [Part 6: Don't be too noticeable]]]></title><description><![CDATA[Success needs to show itself in outcomes, not applause.]]></description><link>https://www.robin-cannon.com/p/how-to-build-the-thing-part-6-dont</link><guid isPermaLink="false">https://www.robin-cannon.com/p/how-to-build-the-thing-part-6-dont</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 23 Dec 2025 16:01:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2af42bb1-72e5-43b9-a1eb-3a2536c19b79_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Earlier in this series, I talked about how to <em><a href="https://www.robin-cannon.com/p/how-to-build-the-thing-part-3-make">make the system vanish</a></em>. That was about how to reduce friction so that people weren&#8217;t thinking about <em>using</em> a system and instead started to just <em>build the thing</em>. It was about the experience of making - flow, intuition, how it feels to do the work.</p><p>This principle is closely related. But it&#8217;s not about usability. 
It&#8217;s about success at an organizational level. And how to avoid the temptation of deliberately raising a system&#8217;s visibility once it starts to work well.</p><div><hr></div><h3>Performative success</h3><p>If you&#8217;re part of a team that works on a mature, successful system, you won&#8217;t be surprised to hear that those systems are often undervalued. There&#8217;s no drama, there&#8217;s no need for constant attention. They&#8217;re ticking along just fine.</p><p>There&#8217;s a temptation to compensate.</p><p>We publish adoption reports. A quarterly report. Maybe a newsletter. Roadmaps and presentations that celebrate the system itself more than what it&#8217;s enabling. And that visibility risks becoming the proxy for value.</p><p>It&#8217;s completely understandable. We&#8217;re all very aware that many organizations overlook the systems that work best. &#8220;We&#8217;ve got our design system, we don&#8217;t need to keep funding the design system at the same level.&#8221; It&#8217;s a genuine risk. Visibility can be more legible to leadership than quiet reliability.</p><p>But it also risks drift, and risks the system being more about theatre than infrastructure.</p><div><hr></div><h3>Quiet systems, loud outcomes</h3><p>The strongest signal of a system working isn&#8217;t that teams talk about it - it&#8217;s that people talk about <em>what they built with it.</em></p><p>We want to hear things like:</p><ul><li><p>&#8220;That was easier than we&#8217;d expected&#8221;</p></li><li><p>&#8220;We shipped faster than we thought&#8221;</p></li><li><p>&#8220;Everything just kind of worked&#8221;</p></li></ul><p>Nobody is mentioning the system by name. 
That&#8217;s a sign of success, not a failure of visibility.</p><div><hr></div><h3>Measure outcomes, not attention</h3><p>This one is uncomfortable, because adoption is often cited as the core metric in system-building work.</p><blockquote><p>Adoption is a vanity metric.</p></blockquote><p>Adoption tells you people are touching the system - not that the system is helping them do better work. But it&#8217;s easy to count, and it can protect funding.</p><p>High adoption can coexist with frustration, workarounds, and quiet resentment. A forced mandate that pisses teams off as much as it helps them.</p><p>Low explicit adoption can coexist with deep impact. A system&#8217;s values and principles can shape work in ways that are overlooked or unmeasurable. </p><p>That&#8217;s not to say we shouldn&#8217;t track metrics, but we should be looking for the right signals.</p><ul><li><p>Reduced time to ship</p></li><li><p>Less rework</p></li><li><p>Fewer handoffs or repeat iterations</p></li><li><p>Faster onboarding</p></li><li><p>More consistent outcomes</p></li></ul><p>Not to flatter the system, but to validate it.</p><p>The best systems don&#8217;t compete for recognition. They&#8217;re there to create the conditions for everyone else to succeed.</p><div><hr></div><h3>Doing its job quietly</h3><p>There&#8217;s a particular sense of quiet satisfaction in building something and then stepping back. Letting it fade into the background from a visibility perspective, and letting the work that it facilitates speak for itself.</p><p>It&#8217;s also really hard to do. Because we&#8217;re proud of the systems we built. 
And we want to nurture them, and protect them.</p><p>But if what you&#8217;ve built is really <em>getting people to the thing</em> then the most generous and powerful move is to stop standing in front of it waving your arms for attention.</p><p>If it&#8217;s entrenched and unnoticeable, you&#8217;ve probably done it right.</p><div><hr></div><h4>Further reading</h4><ul><li><p>Bell, Sarah, et al. <em><a href="https://pursuit.unimelb.edu.au/articles/invisible-infrastructure-is-the-background-to-our-modern-lives">&#8216;Invisible&#8217; infrastructure is the background to our modern lives</a></em>. Pursuit, June 2023.</p></li></ul><div><hr></div><p><em>This is the final part of a six-part series on <a href="https://www.robin-cannon.com/t/gets-you-to-the-thing">building in a way that serves real human outcomes</a>.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The virtuous design system panopticon]]></title><description><![CDATA[AI as a design system ally. 
The watcher who says "yes, and..."]]></description><link>https://www.robin-cannon.com/p/the-virtuous-design-system-panopticon</link><guid isPermaLink="false">https://www.robin-cannon.com/p/the-virtuous-design-system-panopticon</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 16 Dec 2025 16:01:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/04738116-53ce-4e97-82d1-dc1f9e47d982_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In 2022, I wrote a post called &#8220;<a href="https://www.robin-cannon.com/p/dont-become-the-design-systems-police">Don&#8217;t become the design systems police.</a>&#8221; It cautioned against using a design system (and the team behind it) as a mechanism for enforcement. Give teams a checklist, and all they&#8217;ll do is learn how to pass that inspection.</p><p>But maybe a few years can change things. What happens if we throw AI into the mix?</p><p>We don&#8217;t need a <strong>team</strong> to review all those product surfaces for design system alignment and adherence. We have a tool that can do it in real time. At scale. Across legacy and in-flight work. UI, language, accessibility, interaction patterns - it&#8217;s all in reach. Our oversight can be embedded, ambient, and always-on.</p><p>If we use this carelessly, then it&#8217;s going to become exactly what I warned against.</p><p>A <strong>surveillance</strong> system. A source of shame. 
A way to more easily flag exceptions and punish drift.</p><p>The design systems police are there - <em>and now they have a perfect memory and they never go to sleep.</em></p><p>But there is another way to look at it.</p><p>Can we make AI a guide instead of a gatekeeper?</p><p>Something that says:</p><ul><li><p>Want a suggestion to make this feel more like <em>us</em>?</p></li><li><p>Do you want to see how other people solved this with the patterns available?</p></li><li><p>Can I help to make this better?</p></li></ul><p>We&#8217;ve still got a panopticon. But not for control - for <em>possibility</em>.</p><p>I gave a nod to this before, in &#8220;<em><a href="https://www.robin-cannon.com/p/when-ai-can-do-everything-else-your">When AI can do everything else, your job is to make it good</a></em>.&#8221;</p><p>If we can hand our auditing over to AI, then we can free people up to handle the intent. If AI supports the consistency, we have space to explore.</p><p>Our panopticon becomes one that&#8217;s about <em>clarity</em>. </p><p>And it starts to drive our momentum. We&#8217;ve got access to an always-on suggestion engine. The watcher that helps us move faster - not because it says &#8220;no&#8221;, but because it&#8217;s always whispering:</p><p><strong>&#8220;Yes, and&#8230;&#8221;</strong></p><div><hr></div><h4>Further reading</h4><ul><li><p>Steadman, P. <em><a href="https://journals.uclpress.co.uk/jbs/article/id/608/">Samuel Bentham&#8217;s Panopticon</a></em>. Journal of Bentham Studies, 2012.</p></li><li><p>Mirowski, P. et al. <em><a href="https://neurips.cc/media/neurips-2023/Slides/83921.pdf">Artificial Intelligence Improvisation</a></em>. 
Improbiotics, ~2021.</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[How to build the thing [Part 5: AI as translator, not controller]]]></title><description><![CDATA[AI should bridge the gap between intent and execution; humans need to shape the result]]></description><link>https://www.robin-cannon.com/p/how-to-build-the-thing-part-5-ai</link><guid isPermaLink="false">https://www.robin-cannon.com/p/how-to-build-the-thing-part-5-ai</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 09 Dec 2025 16:02:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5007239c-3fd9-41f9-a1e3-7965f854eb52_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AI is everywhere now. We&#8217;re plugging it into workflows, grafting it onto existing tools, layering it across processes that weren&#8217;t designed for it. &#8220;Do AI!&#8221;</p><p>And it&#8217;s new, powerful, and it presents as a little bit magical. So it gets treated as the centerpiece, the point of things. 
AI is the headline.</p><p>But (and you may have heard this already in this series!), AI is just another thing that gets us to the thing.</p><p>In this fifth part of the series, I&#8217;m focusing on the role AI <em>should</em> play in systems: not as a controller, but as a translator. Something that&#8217;s getting better and better at turning human intention into motion.</p><blockquote><p>When AI is a controller, it constrains us.</p><p>When AI is a translator, it amplifies our intent.</p></blockquote><div><hr></div><h3>From &#8220;what I mean&#8221; to &#8220;what happens&#8221;</h3><p>Most work involves translation. And that usually includes multiple iterations to ensure those translations are accurate.</p><p><code>intent &#8594; articulation &#8594; structure &#8594; execution</code></p><p>AI has immense value in its capacity to compress those stages (even collapsing or eliminating them as distinct steps in design and delivery).</p><ul><li><p>Describe a flow, and see it appear.</p></li><li><p>Sketch an idea - in images or in prose - and see the scaffolding form.</p></li><li><p>Outline a page, see a worthwhile draft arrive.</p></li></ul><p>The human step isn&#8217;t eliminated here. In fact it remains absolutely critical. But AI <strong>shortens the ramp</strong> between thought and action. As the slogan at <a href="https://www.knapsack.cloud/">Knapsack</a> says:</p><blockquote><p>Think it. Ship it.</p></blockquote><p>Think about when AI actually feels magical. It&#8217;s when you&#8217;re communicating directly with the system in your own language - no special syntax, no ceremonies, no intermediate tooling.</p><div><hr></div><h3>Let it learn your dialect, and make suggestions</h3><p>A translator proposes. </p><p>If you lean too hard on AI, it automates the decision-making process - in a way that&#8217;s brittle. 
And if AI isn&#8217;t grounded in how your teams think, build, and speak, then it&#8217;ll become generic. Detached from your lived reality.</p><p>A translator knows the dialect. And the goal isn&#8217;t to hand the work to the AI - it&#8217;s to let AI support <em>the work that&#8217;s already moving forward</em>.</p><p>Good AI-supported systems:</p><ul><li><p>Offer a starting point rather than jumping straight to a conclusion</p></li><li><p>Ask clarifying questions to call out ambiguous intent</p></li><li><p>Provide options, not a prescriptive solution</p></li></ul><p>AI speed combined with human excellence means widening the available paths, not narrowing them.</p><p>I&#8217;m sure we&#8217;ll see AI flatten many organizations into something average. AI is getting fairly capable at taking things to &#8220;OK-ish&#8221;.</p><p>But used well, it should raise the floor <em>and</em> the ceiling using intuition that you&#8217;ve already codified - your dialect. </p><p><em>(This is why <a href="https://www.robin-cannon.com/p/how-to-build-the-thing-part-4-codify">Part 4 - codifying intuition</a> - comes before this one)</em></p><div><hr></div><h3>Conversation, not command</h3><p>I think this is one of the biggest misconceptions around AI today - and where much of the fear comes from.</p><p>The value of AI comes from <em>interaction</em>, not one-off prompts. Single instructions can&#8217;t capture complex intentions - that&#8217;s where we get &#8220;AI slop&#8221;. But a conversation can refine those intentions.</p><p>Translators let <strong>people</strong> correct, redirect, reinterpret, push back, and critique. And, perhaps more importantly, they let people <strong>take over</strong>. 
</p><p>AI gives the path structure, but the person stays in the loop as the owner, shaping and validating.</p><p>Here&#8217;s a simple test for whether AI is &#8220;good&#8221;.</p><blockquote><p>Does it remove friction while preserving agency?</p></blockquote><p>If your answer is yes, then it probably belongs there. If it&#8217;s quietly centralizing decision-making in a model nobody understands, you&#8217;ve got a problem.</p><div><hr></div><h3>AI should exist to empower humans</h3><p>A healthy AI-supported system increases the capability of the people who interact with it.</p><p>People get more space to think. And they can use that space, with the AI&#8217;s support, to:</p><ul><li><p>Focus on the most meaningful parts of the work</p></li><li><p>Clear away repetition</p></li><li><p>Reduce the cognitive challenge of starting from a blank page</p></li><li><p>Accelerate - and thus encourage - greater exploration before committing</p></li></ul><p>And it should be <strong>additive</strong>. If the system can&#8217;t function without the AI then you&#8217;ve created a dependency, not a capability.</p><p>AI is a multiplier of <strong>human</strong> judgment, not a replacement for it. And the more <em>&#8220;the thing&#8221;</em> comes into view, the more the AI should fade into the background.</p><div><hr></div><h4>Further reading</h4><ul><li><p>Halleck, Quinn. <em>&#8220;<a href="https://www.youtube.com/watch?v=d8icTgtZeQg&amp;t=1s">Does Film Survive AI?</a>&#8221;</em> (YouTube). TEDx Talks, 2024.</p></li><li><p>Wang, Ge. <em><a href="https://hai.stanford.edu/news/humans-loop-design-interactive-ai-systems">Humans in the Loop: The Design of Interactive AI Systems</a></em>. 
Stanford University, October 2019.</p></li></ul><div><hr></div><p><em>Part 5 in a six-part series on <a href="https://www.robin-cannon.com/t/gets-you-to-the-thing">building in a way that serves real human outcomes</a>.</em></p><p><em>Part 6 will be published in two weeks.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[How to build the thing [Part 4: Codify your intuition]]]></title><description><![CDATA[Turning collective experience into shared, scaled momentum]]></description><link>https://www.robin-cannon.com/p/how-to-build-the-thing-part-4-codify</link><guid isPermaLink="false">https://www.robin-cannon.com/p/how-to-build-the-thing-part-4-codify</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 25 Nov 2025 16:30:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ffdfbca5-b714-4051-9238-7d69228449fc_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every team in the workplace has some layer of quiet, underlying knowledge. Something that rarely, if ever, shows up in decks or documentation. 
People tend to know some patterns that work, shortcuts that can avoid pain later, legacy structures that have proved reliable, and behaviors that lead to good outcomes. Within a team dynamic, not all of this is formally taught - it&#8217;s also absorbed through shared experience.</p><p>Good systems don&#8217;t override that intuition. </p><p>Instead, they try to capture it, scale it, and make it available to more people, without getting in the way.</p><p>That&#8217;s the fourth principle in our series. Part 3 was about making our system vanish. Part 4 is about making the shared knowledge - institutional understanding - visible. Quietly, and usefully.</p><h3>Encode what already works</h3><p>The best systems aren&#8217;t the ones that are invented from scratch. They&#8217;re the ones distilling the best of what teams are already doing successfully, and building on that. </p><p>That means noticing the patterns that show up again and again, giving them a stamp of approval, and making them more easily available.</p><ul><li><p>Layouts that reliably communicate hierarchy</p></li><li><p>Writing that demonstrably reduces back-and-forth questions</p></li><li><p>Coding practices that reduce recurring bugs</p></li><li><p>Workflows that ship more consistently on time</p></li></ul><p>Codifying intuition means you&#8217;re not the one creating the rules - it&#8217;s the capture of behavior that already produces good outcomes. Not only giving others access to those behaviors, but giving rigor and metrics to them, too.</p><h3>Defaults, not rules</h3><p>When building systems, the best way to get adoption is to make adoption the path of least resistance. Codifying intuition supports that by making sure that people have a better starting point.</p><p>Opinionated defaults can make doing the <strong>right</strong> thing the <strong>easiest</strong> thing to do. 
They:</p><ul><li><p>Reduce cognitive load</p></li><li><p>Help new contributors, and support their growing effectiveness</p></li><li><p>Provide consistent quality without artificially enforcing uniformity</p></li><li><p>Are flexible enough to override when it&#8217;s justifiable to do so</p></li></ul><p>These defaults are your scaffolding - light, helpful, and removable. It&#8217;s about supporting success, rather than forcing compliance.</p><h3>&#8220;If you build it, they will come&#8230;&#8221;</h3><p>Strange how lines from old movies can stick with you. In <em>Field of Dreams</em> this whispered mantra was a little more mystical than we&#8217;re talking about. But ultimately it was about creating a place that people <em>wanted</em> to come to.</p><p>A system becomes frustrating - often to the point of failure - when it demands a particular behavior instead of merely providing support. People might comply in public, but work around it in private (this is why top-down mandates aren&#8217;t ever enough by themselves). Intuition can become accepted ceremony.</p><p>Better systems surface guidance that happens in the moment:</p><ul><li><p>Snippets that are clear and visible when you begin using a component</p></li><li><p>Panels that offer context only when it&#8217;s relevant</p></li><li><p>Automation suggesting steps forward rather than blocking actions</p></li></ul><p>If your system is behaving like your partner and not your supervisor, you&#8217;re more likely to trust it, and to use it naturally. This &#8220;partnership&#8221; model is one of the reasons why I think AI and product design and delivery systems are a naturally beneficial pair - LLMs, with context, can lend themselves to this sense of working together.</p><h3>Keeping the spirit of the law</h3><p>A common mistake here is to document the expression of intuition and think that successfully captures its true essence. </p><p>Someone does something clever because it works. It gets documented. 
It gets formalized. It gets followed.</p><p>It still gets followed even when everyone has forgotten why it existed in the first place. </p><p>The codification of intuition needs to include the <em>reasoning</em>, not just the ritual steps. If we lose a record of the <strong>why</strong>, then the system can start to become brittle.</p><h3>Intuition means adaptation</h3><p>Any system that codifies intuition needs to be adaptive. Intuition, by its nature, will flex and change over time as circumstances do.</p><p>That means leaning into:</p><ul><li><p>Local variation</p></li><li><p>Overrides and escapes</p></li><li><p>Flexible boundaries</p></li><li><p>Places where the work is shaping the system, not the other way around</p></li></ul><p>This is an important measure that should be monitored. The intuition itself can flex. But if you need to regularly break your system to do any good work, then the system is constraining intuition rather than effectively capturing it.</p><h3>Shared intuition is acceleration</h3><p>We want to make collective experience part of our natural infrastructure. If we do that, everything can move faster. New joiners ramp quickly and naturally, while senior contributors spend less time explaining. Teams don&#8217;t need to reinvent solutions that already work.</p><p>Quality becomes consistent because we&#8217;re working within the boundaries of a system that guides people towards that quality, rather than because it artificially enforces it.</p><p>Codifying intuition isn&#8217;t about perfection, or achieving uniformity. Velocity is much more important than perfection.</p><p>Then the thing that gets you to the thing becomes easier for people to trust, easier to use, and easier to build with.</p><div><hr></div><h4>Further reading:</h4><ul><li><p>Kohlstedt, Kurt. <em><a href="https://99percentinvisible.org/article/least-resistance-desire-paths-can-lead-better-design/">Least Resistance: How Desire Paths can Lead to Better Design</a>. 
</em>99% Invisible, Jan 2016.</p></li><li><p>Polanyi, Michael. <em><a href="https://press.uchicago.edu/ucp/books/book/chicago/T/bo6035368.html">The Tacit Dimension</a></em>. University of Chicago Press, 2009 (orig. 1966).</p></li></ul><div><hr></div><p><em>Part 4 in a six-part series on <a href="https://www.robin-cannon.com/t/gets-you-to-the-thing">building in a way that serves real human outcomes</a>.</em></p><p><em>Part 5 will be published in two weeks.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The digital workflow is obsolete]]></title><description><![CDATA[The end of abstraction, and the start of design as delivery]]></description><link>https://www.robin-cannon.com/p/the-digital-workflow-is-obsolete</link><guid isPermaLink="false">https://www.robin-cannon.com/p/the-digital-workflow-is-obsolete</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 18 Nov 2025 16:00:32 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/24adc06c-e813-4c06-9d53-ff708ffd65b2_1536x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For decades we&#8217;ve lived inside a comfortable, consistent, somewhat fictional workflow. 
Design happens over here, code happens over there, and the workflow is the bridge that connects the two of them together. And that&#8217;s a process that made sense when translating between visual artifacts and functional implementation was slow, messy, and human. Designers needed a canvas. Developers needed specs. The workflow was the glue.</p><p>But that&#8217;s a collapsing model now. We don&#8217;t have a linear process - canvas to code. We have a more continuous loop. We&#8217;d already started that shift through the expansion of design systems and other orchestration platforms. But now AI interpreters can take structured design input from those resources and generate interfaces that are visually and functionally coherent. The canvas might still exist, but it&#8217;s no longer right at the center of the work. We had a handoff layer that defined digital product delivery, but now it&#8217;s dissolving.</p><p>That digital workflow isn&#8217;t ending because design has stopped mattering. It&#8217;s ending because design has become the vital infrastructure itself.</p><h3>The canvas wasn&#8217;t the work</h3><p>The canvas was an abstraction. It was a simulation of something that would eventually be built <em>somewhere else</em>. It existed because design and engineering spoke different languages. Designers made intent visible. Developers then translated that intent into working structure and logic. And we repeated the ritual handoff in order to reconcile two worlds that couldn&#8217;t speak to each other fluently.</p><p>The more those systems have matured, the more artificial that separation has become. Tokens, semantic components, and structured data give us a shared language and more robust guidance. A well-defined design system doesn&#8217;t need to be redrawn in code - it is code - and it can be read as such.</p><p>That&#8217;s not the death of the canvas. It&#8217;s time for an evolution. 
A <em>non-abstracted</em> canvas that&#8217;s connected directly to live systems is invaluable. We&#8217;re human - we still rely on visual representation to think, collaborate, and refine our ideas. But that means a canvas has to be an actual truth - the <em>real thing</em> - and not a pretense. The abstraction is what needs to die.</p><blockquote><p>We don&#8217;t need fewer canvases. We need more honest ones.</p></blockquote><h3>Figma&#8217;s fight to preserve the fiction</h3><p>And this is where things might start to get uncomfortable. Figma - much-loved, brilliant, ubiquitous - has built an empire on making a canvas abstraction more accessible and easier to use. Modern product teams use it because it allows design to <em>feel</em> close to reality without ever being real. That was Figma&#8217;s superpower.</p><p>But now we live in a world where the systems that underlie design and code can interact directly. Which makes the abstraction less necessary. Which is a threat - to Figma&#8217;s business model, not to design itself.</p><p>It&#8217;s not surprising that Figma&#8217;s AI strategy seems focused on reinforcing the centrality of its own canvas. Auto-generated design screens (which then need to be &#8220;auto-generated&#8221; again into code), AI-driven component suggestions, natural language layout tools - they&#8217;re building features to keep you <em>inside</em> Figma. </p><p>These are impressive capabilities. But they also serve Figma&#8217;s strategic purpose - encouraging designers to do more within Figma rather than reach out to collaborate directly with the systems that actually deliver products.</p><p>I don&#8217;t believe that&#8217;s malice. It&#8217;s just sensible economics. Figma&#8217;s success depends on it remaining the hub of the workflow. But <em>what</em> workflow? As AI and structured design systems collapse the distance between design intent and delivery reality, the workflow itself is evaporating. 
</p><p>We&#8217;re not heading into a future where working faster within the canvas is the vital factor. It&#8217;s about questioning whether we need the canvas at all for much of the work we currently do there.</p><h3>AI is collapsing the middle layer</h3><p>AI doesn&#8217;t - despite the fears - replace designers or developers. It replaces the abstraction between them. It can interpret structure and make translation unnecessary. We feed design tokens, behavior definitions, rules, and guidance into a working interface through orchestration systems.</p><p>The value of design isn&#8217;t in the mockup - it&#8217;s in the metadata, and it&#8217;s in the final thing. The design system <em>is</em> design. What was in static screens is in structured definitions - which means it can be interpreted, reasoned about, and improved by machines as well as humans.</p><p>That can be liberating. It makes the rote work disappear - redrawing, aligning, re-specifying. Human creativity can switch to the areas where it&#8217;s most valuable: judgement, craft, and care.</p><blockquote><p>My colleague <a href="https://www.linkedin.com/in/carlyjstevens/">Carly Stevens</a> said recently that, done well, AI &#8220;can free designers to do the jobs they were actually hired for&#8221;.</p></blockquote><p>The automation of the mechanical layers doesn&#8217;t erase design. It brings it home to its true purpose.</p><h3>Design is structured input, not static output</h3><p>The act of defining patterns, constraints, and relationships isn&#8217;t pre-delivery work - it <em>is</em> the delivery. Structured systems turn these definitions into executable outcomes.</p><p>Design isn&#8217;t a set of abstracts handed off to be interpreted. It&#8217;s the structured language embedded in the delivery process. The difference between &#8220;the design&#8221; and &#8220;the product&#8221; becomes semantic. The expression of intent is the same as what&#8217;s used to create reality.</p><p>This makes design more human. 
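</p><p><em>The &#8220;structured definitions&#8221; described above can be as plain as a token map that humans, build tooling, and AI interpreters all read the same way. A sketch only - the token names and values here are hypothetical, not any real system&#8217;s schema:</em></p>

```typescript
// Design as structured input: literal values plus semantic aliases.
// Token names and values are invented for illustration.
type TokenValue = string | { ref: string };

const tokens: Record<string, TokenValue> = {
  "color.brand.primary": "#0052cc",
  "color.action.background": { ref: "color.brand.primary" }, // semantic alias
  "space.inline.md": "12px",
};

// Resolve a token, following aliases until a literal value is reached.
function resolveToken(name: string): string {
  const value = tokens[name];
  if (value === undefined) throw new Error(`Unknown token: ${name}`);
  return typeof value === "string" ? value : resolveToken(value.ref);
}
```

<p>Because the alias is data, not a drawing, a change to the brand value flows everywhere it&#8217;s referenced - no redrawing, no re-specifying.</p><p>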
The sense of what&#8217;s good instead of just correct is where human excellence will continue to thrive.</p><p>Designers will be working <em>after</em> automation. Curating, adjusting, improving what AI systems produce. They&#8217;ll guide the system - working as stewards to ensure that what&#8217;s delivered is crafted, coherent, and feels alive.</p><h3>A new creative edge for developers</h3><p>Developers aren&#8217;t losing ground here, either.</p><p>When the system can handle assembly, developers get more time to focus on architecture. That means authoring the meta-systems: rules, boundaries and logic that govern AI&#8217;s interpretation and execution of design.</p><p>It&#8217;s not thinking &#8220;how do I build that screen from that design?&#8221; - the question now is &#8220;how do I build the platform to make a thousand screens possible?&#8221; It&#8217;s a different scale of artistry, and the future of engineering excellence.</p><p>Intelligent delivery elevates human roles, rather than necessarily eliminating them. Designers and developers can be genuine co-authors and co-creators of their ecosystem.</p><h3>What&#8217;s next?</h3><p>The end of the digital workflow shouldn&#8217;t mean chaos. It should mean more continuity.</p><p>The canvas will be a live view into the reality of the system, not a staging ground for ideas that sit outside it. Delivery needs to become intelligent, contextual, and collaborative.</p><p>To thrive here:</p><ul><li><p><strong>Build structured systems</strong>. Resources and guidance legible to humans and machines.</p></li><li><p><strong>Connect canvas to code</strong>. Design tools need to be interfaces for living systems, not static artifacts.</p></li><li><p><strong>Focus on human excellence</strong>. Automation can get you to &#8220;good&#8221; (well, maybe &#8220;ok&#8221;) faster. Invest human judgement into making it great.</p></li><li><p><strong>Question your tools</strong>. 
If any platform&#8217;s roadmap is pulling you deeper into its own walls, ask who benefits most from that dependence.</p></li></ul><p>The digital workflow that we&#8217;ve known was a bridge between silos. But the silos are disappearing. We don&#8217;t have product pipelines, we have shared environments of systems and craft.</p><p>Design isn&#8217;t a stage before delivery.</p><p><strong>Design is delivery</strong>.</p><div><hr></div><h4>Further reading</h4><ul><li><p><em><a href="https://www.linkedin.com/posts/figma_q3-recently-activity-7391945014630588417-NUa4?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAFdJ_QBu32gGHnnUAMnPXhRUi4iAXhJVe4">Figma&#8217;s Q3, recently</a></em> LinkedIn post, Nov 2025.</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[How to build the thing [Part 3: Make it vanish]]]></title><description><![CDATA[The more your system is felt, the less it works.]]></description><link>https://www.robin-cannon.com/p/how-to-build-the-thing-part-3-make</link><guid isPermaLink="false">https://www.robin-cannon.com/p/how-to-build-the-thing-part-3-make</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 11 Nov 2025 16:01:39 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/9de09638-25e0-4d17-b073-f62b7199da0f_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The best systems don&#8217;t need to be the center of attention. They don&#8217;t demand credit or visibility. They just work.</p><p>When something actually matches the way people already think and act, it starts to fade into the background. Nobody has to say, <em>&#8220;I&#8217;m using the system.&#8221; </em>They&#8217;re just doing the work the system was supposed to enable in the first place.</p><h2>The quiet success</h2><p>We like to celebrate launches. Demos. Dashboards. Adoption curves. Common workplace rituals. But the quiet success of a system isn&#8217;t in how many people are talking about it. It&#8217;s how little they have to.</p><blockquote><p>Success is when people stop noticing it&#8217;s there at all.</p></blockquote><h2>Just do the thing</h2><p>Every release you make to a tool or piece of infrastructure, every update, should make it feel less like &#8220;using software&#8221;. We want to make it feel like &#8220;doing the thing.&#8221;</p><p>That means:</p><ul><li><p>Fewer steps from intent to outcome.</p></li><li><p>Less translation between thought and action.</p></li><li><p>Immediacy between what someone <em>thinks</em> and what actually <em>happens.</em></p></li></ul><p>When you reduce friction, you make the system smaller and the work bigger. 
That&#8217;s the direction you always want to be moving.</p><h2>When the system intrudes</h2><p>It&#8217;s fairly obvious when a system has too much of a presence:</p><ul><li><p>People are asking, &#8220;how do I use it?&#8221; and not &#8220;what can I do with it?&#8221;</p></li><li><p>Teams talk about the <em>process</em> more than they talk about the output.</p></li><li><p>Success is about compliance, not delivery.</p></li><li><p>The ritual exists - tickets, status updates, checkboxes - without making progress.</p></li></ul><p>The problem is that&#8217;s not structure or culture. It&#8217;s just bureaucracy disguised as enablement.</p><h2>Absorb, don&#8217;t instruct</h2><p>The greatest of systems feel like intuition. They don&#8217;t need onboarding decks, mandatory training - you learn through doing. They&#8217;re a mirror to how people already think and work.</p><p>Start looking, and you&#8217;ll start to see this principle everywhere:</p><ul><li><p>Docs begin to emerge naturally from the work.</p></li><li><p>Slack bots surface what&#8217;s next when it&#8217;s needed.</p></li><li><p>GitHub Copilot fills in behind you while you focus on logic.</p></li><li><p>A design system isn&#8217;t something you adopt - it&#8217;s just <em>how we build things here</em>.</p></li></ul><p>It&#8217;s not magic. It&#8217;s empathy. Designing for active fluency, not academic literacy.</p><h2>Invisibility</h2><p>You&#8217;re closer to success when:</p><ul><li><p>New team members learn by watching and doing, not reading.</p></li><li><p>Every sprint has fewer &#8220;how do I&#8230;?&#8221; questions than the last.</p></li><li><p>You hear &#8220;this was easy,&#8221; not &#8220;I finally figured it out.&#8221;</p></li><li><p>Nobody discusses the system in retros.</p></li></ul><p>Invisibility isn&#8217;t the same as absence. 
It means that you&#8217;ve got a tool or system that&#8217;s so well aligned with your intent that it&#8217;s becoming unspoken.</p><h2>The direction, not the destination</h2><p>Look, you&#8217;re never going to make these systems completely invisible. That&#8217;s fine. It&#8217;s about having a goal of continually reducing their footprint - keep shifting everyone&#8217;s energy away from <em>using the system</em> toward <em>doing the thing</em>.</p><p>The first moment that people outright forget they&#8217;re &#8220;using&#8221; something, that&#8217;s when it&#8217;s finally doing its job.</p><div><hr></div><h3>Further reading:</h3><ul><li><p><em><a href="https://www.robin-cannon.com/p/the-thing-that-gets-you-to-the-thing">The thing that gets you to the thing</a> </em>- why the tool isn&#8217;t the point</p></li><li><p>Krug, Steve. <em><a href="https://sensible.com/dont-make-me-think/">Don&#8217;t Make Me Think, Revisited</a>. </em>New Riders, 2014.</p></li></ul><div><hr></div><p><em>Part 3 in a six-part series on <a href="https://www.robin-cannon.com/t/gets-you-to-the-thing">building in a way that serves real human outcomes</a>. 
</em></p><p><em>Part 4 will be published in two weeks.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[How to build the thing [Part 2: Don't build what nobody needs]]]></title><description><![CDATA[The trap of building systems, tools, or workflows that serve themselves, not people.]]></description><link>https://www.robin-cannon.com/p/how-to-build-the-thing-part-2-dont</link><guid isPermaLink="false">https://www.robin-cannon.com/p/how-to-build-the-thing-part-2-dont</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 28 Oct 2025 15:30:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4f602e87-1d5f-4789-93e4-d9e720dbb040_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s really tempting, especially if you&#8217;re working in a fast-paced environment, with resources to support you, to build something because you can build it. Stand up a new platform, write a new wiki, launch some AI tooling, create a design system. That&#8217;s what teams are supposed to do, right? Make stuff. 
And for systems and infrastructure teams, &#8220;stuff&#8221; is&#8230;more systems and infrastructure.</p><p>But a system that exists for its own sake, no matter how polished it is, is just a black hole. Drawing in time, attention, energy, resources&#8230;without any clear proportional value.</p><h2>Goal collapse</h2><p>If you don&#8217;t have a goal, that means there&#8217;s no definition of what the work is <em>for</em>. Which means the work itself starts masquerading as the reason for the work.</p><ul><li><p>&#8220;We need a design system,&#8221; rather than what the design system enables.</p></li><li><p>&#8220;We&#8217;re implementing AI,&#8221; when implementing AI is a project, not an outcome.</p></li></ul><p>This is <strong>goal collapse</strong>. Infrastructure becomes the obsession, and eventually you&#8217;ll reach a point where you&#8217;re building something and genuinely nobody remembers why.</p><h2>Honor desire paths</h2><p>People already know how they work best. So if you come barreling in with something that breaks their natural flow, you create resistance. Or, maybe even worse, surface compliance while they ignore you and work around things in the background.</p><p>So instead of asking &#8220;how should everyone work?&#8221;, perhaps start by asking &#8220;how does everyone work today?&#8221;</p><p>That includes the hacks and workarounds, hidden docs, side Slack conversations, team-specific shared templates. Those should be a starting point to build from, not an obstacle to destroy.</p><p>Don&#8217;t break desire paths - build on them.</p><h2>It&#8217;s a tool, not a temple</h2><p>You&#8217;re not building a monument to how great you are. You&#8217;re building a wrench for someone else to use. 
</p><p>That means creating something that people will <strong>use</strong>, not something that people will stand back and admire from afar.</p><p>If there&#8217;s any sense that the system you built is &#8220;sacred&#8221;, fragile, forced&#8230;then it probably isn&#8217;t doing what it&#8217;s supposed to do. The best systems are probably a little bit messy, a little bit lived-in, which makes them comfortable.</p><h2>Systems are for people, not the reverse</h2><p>People who do the work should have the capacity to shape the system. Not just through their feedback, but through us observing their behavior and adjusting.</p><ul><li><p>Watch how teams successfully ship things.</p></li><li><p>Identify what&#8217;s slowing them down.</p></li><li><p>Build what actually addresses that friction.</p></li><li><p>Let parts of the system stay informal and flexible, if the work demands it.</p></li></ul><p>A rigid, top-down system looks efficient on paper. I&#8217;ve seen design system teams in particular fall into this ivory tower trap many times - even with the best of intentions. It fails when reality intrudes.</p><h2>Would you build it if nobody saw it?</h2><p>If there was no deck for you to present what you were launching. No internal applause. If there was no roadmap to celebrate &#8220;we did the thing&#8221;&#8230;</p><p>Would it still be worth it?</p><p>If the answer is yes - it enables better work, smoother delivery, shipping things faster - build it.</p><p>If the answer is no - then maybe you don&#8217;t need to build it at all.</p><div><hr></div><h3>Further reading:</h3><ul><li><p><em><a href="https://www.interaction-design.org/literature/article/three-common-problems-in-enterprise-system-user-experience">Three Common Problems in Enterprise System User Experience</a></em>. 
IxDF, Feb 2016.</p></li><li><p><em><a href="https://www.robin-cannon.com/p/the-design-system-purity-trap">The design system purity trap</a></em> - Life isn&#8217;t tidy - why design systems fail by chasing neatness instead of reality.</p></li></ul><div><hr></div><p><em>Part 2 in a six-part series on <a href="https://www.robin-cannon.com/t/gets-you-to-the-thing">building in a way that serves real human outcomes</a>. </em></p><p><em>Part 3 will be published in two weeks.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[How to build the thing [Part 1: Start with what it enables]]]></title><description><![CDATA[Everything we build - tools, systems, or products - should begin with an outcome in mind.]]></description><link>https://www.robin-cannon.com/p/how-to-build-the-thing-part-1-start</link><guid isPermaLink="false">https://www.robin-cannon.com/p/how-to-build-the-thing-part-1-start</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 14 Oct 2025 14:00:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f3dd458f-90c5-4c04-982b-674b3c424010_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Whatever you&#8217;re building, 
one principle comes first.</p><blockquote><p>Start with what it enables for someone.</p></blockquote><p>If you&#8217;re not doing that, then you&#8217;re just assembling parts in a vacuum.</p><p>That applies whether you&#8217;re talking about a design system, a new service, implementing AI tooling, or shipping a working feature to a consumer product. If you can&#8217;t say who it helps, how it helps them, what it makes possible that wasn&#8217;t possible before&#8230;it probably won&#8217;t stick. It won&#8217;t matter how elegant the architecture is, how impressive the technical implementation.</p><p>It&#8217;s the first principle in our series on building systems and tools that serve human goals. It&#8217;s the foundation we build our other principles on.</p><div><hr></div><h3>What changes because this exists?</h3><p>Here are questions to ask <strong>before</strong> you write code, name components, or integrate AI.</p><ul><li><p>What should people be able to do that they couldn&#8217;t do before?</p></li><li><p>What will it make easier, faster, or more consistent?</p></li><li><p>What friction does this remove?</p></li><li><p>What outcome does this unlock?</p></li></ul><p>Crucially, <em>who</em> benefits from this, and <em>how</em> will they feel that benefit?</p><p>If we skip this step, we create things that feel important simply because we&#8217;re working on them! That&#8217;s seductive to the builders, but irrelevant to the people these things are ultimately built for.</p><div><hr></div><h3>Our focus is NOT &#8220;what are we building?&#8221;</h3><p>It sounds like semantics, but it&#8217;s not. Our focus is:</p><blockquote><p>What is made possible when this thing exists?</p></blockquote><p>You&#8217;re not building a system. You&#8217;re creating the conditions for better outputs.</p><p>You&#8217;re not building a product. You&#8217;re helping someone do something better, faster, or more meaningfully.</p><p>You&#8217;re not building a pipeline. 
You&#8217;re giving a team confidence that their work will ship.</p><p>We start from outcomes. Everything else then flows with more clarity.</p><div><hr></div><h3>Try the &#8220;Who/What/Wow&#8221; model</h3><p>I learned this framing at IBM, and it&#8217;s deceptively simple.</p><blockquote><p>&#8220;This thing will allow [who] to do [what] in a way that [wows them].&#8221;</p></blockquote><p>It forces clarity. It centers people on the purpose. It&#8217;s a great gut-check for whether your work is valuable, or whether it merely exists. And the demand for a <em>&#8220;wow&#8221;</em> encourages us to push for real value, not just iterative utility.</p><p>Some adapted examples:</p><ul><li><p>&#8220;This AI product delivery tool helps product teams ship ten times faster than they did before.&#8221;</p></li><li><p>&#8220;A recipe app lets a home chef order all the ingredients they need in a single delivery, in a single transaction.&#8221;</p></li></ul><p>The point isn&#8217;t getting the syntax exactly right, so much as forcing intentionality.</p><div><hr></div><h3>Start your build with one sentence</h3><p>Steve Wozniak was a proponent of simplicity, exploring problems he wanted to solve. </p><p>Start with one sentence that says what this does, what it enables.</p><p>You can validate, build, explain, and iterate. Create your five-page strategy document from that first sentence.</p><p>Systems, tools, AI flows, finished products - all benefit from this grounding. If you can&#8217;t say what they enable, then:</p><ul><li><p>You won&#8217;t know if it worked.</p></li><li><p>Your users won&#8217;t feel the value.</p></li><li><p>Your team won&#8217;t know if it mattered.</p></li></ul><p>The thing that gets you to the thing needs a reason to exist. You should start that reason here.</p><div><hr></div><h3>Further reading:</h3><ul><li><p>Mochari, Ilan, <em><a href="https://www.inc.com/ilan-mochari/wozniak-2-lessons.html">2 Lessons From Steve Wozniak&#8217;s Early Creative Experiments</a>. 
</em>Inc, May 2014.</p></li></ul><div><hr></div><p><em>Part 1 in a six-part series on <a href="https://www.robin-cannon.com/t/gets-you-to-the-thing">building in a way that serves real human outcomes</a>. </em></p><p><em>Part 2 will be published in two weeks.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[A short guide to making AI work for you]]></title><description><![CDATA[Six moves design system professionals can take to accelerate and celebrate your work]]></description><link>https://www.robin-cannon.com/p/how-to-work-with-the-machine</link><guid isPermaLink="false">https://www.robin-cannon.com/p/how-to-work-with-the-machine</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 30 Sep 2025 14:00:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c9de9bc9-77cc-44eb-bc01-785af08d897e_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Design systems live at the intersection of people, processes, and tooling. That means they&#8217;re one of the first places we&#8217;re seeing AI quietly showing up. It&#8217;s inside IDEs, docs platforms, in your linting tools, and even part of your design handoff. 
</p><p>If you work on a design system, you don&#8217;t need to be an AI expert. But it will make a huge difference to show that you can work with it. Staying relevant isn&#8217;t about mastery; it&#8217;s about readiness.</p><h3>1. Build your prompt library</h3><p>The clearer your instructions, the better AI will work. Don&#8217;t start from scratch every time; collect the prompts that deliver useful output: generating test cases, summarizing breaking changes, scaffolding new component code. Reuse them, refine them, share them with your team. Maybe even version them.</p><p>You might think of prompts like design tokens. Consistent, reusable, and of increasing value the more they&#8217;re shared.</p><h3>2. Run AI on your own work</h3><p>Don&#8217;t wait to be told how to use AI &#8220;right&#8221;. You can start with your own output. Run pull requests through AI for accessibility flags. Or ask for a simplified version of documentation before it&#8217;s time to ship. It&#8217;ll be a good way to understand where AI can add more clarity, and when it might struggle.</p><p>Treat AI like an extra reviewer, not an oracle.</p><h3>3. Automate the most boring stuff</h3><p>Every person, every team, has tasks that eat time without adding much value. Formatting changelogs, drafting migration notes. Why not let AI take the sting out of the last 10%? Find a task, automate it, and share the results.</p><p>Show your team that you&#8217;re not just experimenting, you&#8217;re using your results to make everyone&#8217;s life easier.</p><h3>4. Track the pitfalls</h3><p>AI isn&#8217;t a magic fix-all. It hallucinates, writes inaccessible code, and can produce misleading or tone-deaf copy. Keep logs of where it fails for you - snippets and examples. Then you&#8217;re positioning yourself not just as a cheerleader, but as someone who understands its limits, too.</p><p>When you suggest adopting AI in a workflow, you&#8217;ll know when it can and can&#8217;t be trusted. </p><h3>5. 
Talk about AI in your standup</h3><p>AI exploration isn&#8217;t a secret side project. </p><p>Share what you tried, and talk about it with your team. &#8220;I asked AI to draft our migration docs, but I still had to fix the accessibility notes.&#8221;</p><p>It isn&#8217;t about branding yourself as the AI person, but about being open with the experiments you&#8217;re leaning into.</p><h3>6. Celebrate your wins</h3><p>It&#8217;s easy to worry about AI use being seen as &#8220;cheating&#8221;. If an AI assistant helped you find a bug faster or draft docs that shipped&#8230;celebrate it. </p><p>Accelerated work with AI is still <em>your</em> work. You shaped the input, you judged the output, and you made the call. The more you can celebrate those successes, the easier it gets for the whole team to embrace AI.</p><div><hr></div><h3>Final takeaway</h3><p>Getting AI-ready isn&#8217;t about being a master of the future. You&#8217;re showing today that you can use the tools, filter out the noise, and celebrate the successes. 
Help your team move faster without losing trust.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[How to build the thing that gets you to the thing]]></title><description><![CDATA[A practical guide to making systems that serve people]]></description><link>https://www.robin-cannon.com/p/how-to-build-the-thing-that-gets</link><guid isPermaLink="false">https://www.robin-cannon.com/p/how-to-build-the-thing-that-gets</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 16 Sep 2025 14:01:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/86b73ea6-8004-46cc-b48e-84d7ff3c3b10_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week in <em>Signals</em>, I wrote about <em>Halt and Catch Fire</em>, and the importance of remembering that the best tools aren&#8217;t the point; they&#8217;re the thing that gets you to the thing. The web wasn&#8217;t the thing. Neither was the computer. And AI isn&#8217;t the thing, either. </p><p>The thing is what people do, make, discover, or become&#8230;once the system itself fades away into the background.</p><p>This piece is a follow-up. A little more of a practical guide. 
That previous piece was about why we build. This is about how.</p><h3>1. Start with what it enables</h3><p>You&#8217;re not building something if you don&#8217;t know what it enables. You&#8217;re just assembling parts in a vacuum and hoping they come together as a product or a system.</p><p>Good systems begin with a clear sense of the desired outcome - the problem you&#8217;re solving.</p><ul><li><p>What should people be able to do that they couldn&#8217;t do before?</p></li><li><p>What will be easier, faster, and more consistent to do?</p></li><li><p>Who benefits from this&#8230;and how?</p></li></ul><p>When you start designing something, start with a sentence of intent.</p><p>When I was at IBM, we used a &#8220;<strong>who, what, wow</strong>&#8221; model.</p><p><em>&#8220;This system will allow [who] to [do what] in a way that [wows them].&#8221;</em></p><h3>2. Don&#8217;t build a thing to have a thing</h3><p>Nobody builds a design system for the sake of having a design system. We build design systems to enable people to do better work. That same logic applies to everything from AI integration to internal documentation.</p><p>So if your goal is &#8220;to implement AI&#8221; or &#8220;stand up a new system,&#8221; then you&#8217;ve already missed the point.</p><p>And if the system you build reflects an idealized way of working rather than real behavior, then you&#8217;re building something that looks right instead of something that&#8217;s actually fit for purpose. Systems should honor desire paths, not try to break them.</p><h3>3. Make the system vanish</h3><p>The best systems are invisible. They disappear. Not in a literal sense, but they recede from being the center of attention. If people talk about the system more than what they built with it, then there&#8217;s probably something wrong.</p><p>I&#8217;m not talking about lack of adoption. In fact, it&#8217;s the point at which adoption is so natural that the system has become infrastructure. 
Think of it like electricity - we rarely think about &#8220;using electricity&#8221;; it&#8217;s infrastructure for our lives.</p><p>In a system that&#8217;s working, you should be able to do things like:</p><ul><li><p>Create something useful without fully knowing how the system works.</p></li><li><p>Move from expressing your goal in natural language to starting to build.</p></li><li><p>Take action without needing to translate between your intent and the system interface.</p></li></ul><h3>4. Codify your intuition</h3><p>Really good systems are the ones that encode what works so that other people don&#8217;t have to guess. The best ones do that and also <em>support improvisation</em>.</p><p>In practice that means:</p><ul><li><p>Safe defaults, but adjustable.</p></li><li><p>Documented practices that are governed, not enforced.</p></li><li><p>The kind of guidance that surfaces <em>before</em> someone asks the question.</p></li></ul><p>If a system represents shared intuition and scales it, you&#8217;re helping people get the best possible start.</p><h3>5. AI is a translator, not a controller</h3><p>The promise of AI in all this isn&#8217;t automation for automation&#8217;s sake. We want to use it to translate. To help us turn our raw intent into structured output.</p><p>A good AI-supported system doesn&#8217;t try to replace creativity. It wants to accelerate it.</p><ul><li><p>Turn sketches into code.</p></li><li><p>Turn briefs into outlines.</p></li><li><p>Turn documentation into conversation.</p></li></ul><p>At its best, AI is a helpful translator between human intent and digital structure.</p><h3>6. You&#8217;re doing it right if nobody talks about it</h3><p>You&#8217;ve built the right thing when people stop noticing it. When teams just get on with the work - moving faster, asking fewer questions.</p><p>The thing that gets you to the thing? 
It doesn&#8217;t need applause; it just needs to work.</p><p>So when you do it right, people spend less time learning and more time building what matters.</p><div><hr></div><p><em>Over the coming weeks, I plan to expand on each of these six ideas in more detail. Breaking down the practical and messy work of building systems that serve people. One of these principles per post, using real examples and honest pitfalls (including cautionary tales from my own mistakes!). Maybe even a checklist or two.</em></p><p><em>The thing that gets you to the thing? It&#8217;s not just a philosophy. It needs a blueprint.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><input type="submit" class="button primary" value="Subscribe"></form></div></div>]]></content:encoded></item></channel></rss>