<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Robin Cannon: Signals]]></title><description><![CDATA[Essays and reflections on everyday life — finding the threads that connect personal experience to the systems that shape us. Culture, memory, technology, and the politics of ordinary moments. The place where story and system overlap.]]></description><link>https://www.robin-cannon.com/s/signals</link><image><url>https://substackcdn.com/image/fetch/$s_!maYW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c62c87-7ba3-444c-ad20-4a4cf617a8f7_1024x1024.png</url><title>Robin Cannon: Signals</title><link>https://www.robin-cannon.com/s/signals</link></image><generator>Substack</generator><lastBuildDate>Sat, 11 Apr 2026 05:37:01 GMT</lastBuildDate><atom:link href="https://www.robin-cannon.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Robin Cannon]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[shinytoyrobots@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[shinytoyrobots@substack.com]]></itunes:email><itunes:name><![CDATA[Robin Cannon]]></itunes:name></itunes:owner><itunes:author><![CDATA[Robin Cannon]]></itunes:author><googleplay:owner><![CDATA[shinytoyrobots@substack.com]]></googleplay:owner><googleplay:email><![CDATA[shinytoyrobots@substack.com]]></googleplay:email><googleplay:author><![CDATA[Robin Cannon]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The constraint signal]]></title><description><![CDATA[Seeing isn't believing any more. 
Now what?]]></description><link>https://www.robin-cannon.com/p/the-constraint-signal</link><guid isPermaLink="false">https://www.robin-cannon.com/p/the-constraint-signal</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 07 Apr 2026 15:01:58 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2bf93838-66a4-4745-8e72-c42a65fffbd8_4250x3190.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For a long time, clarity has been synonymous with credibility.</p><p>In the digital age, higher resolution meant better cameras. Better cameras meant fewer distortions. Fewer distortions meant fewer lies, less confusion.</p><p>But in the age of generative media, clarity has become cheap and easy.</p><p>We have systems that produce convincing video and audio. They start at a point of high fidelity. Being able to synthesize an infinite number of possibilities is the baseline. No pixellated faces, no cracked voices. The lighting looks right. Imperfections are optional.</p><p>The problem for institutions, and for civil society, in 2026 is not that synthetic media looks fake.</p><p>It&#8217;s that it can look better than reality.</p><div><hr></div><h3>Some fakes still ring false</h3><p>Consider a (sadly not hypothetical) fake video: armed police arresting Barack Obama in the Oval Office and dragging him out.</p><p>A sufficiently advanced generative model can render this with astonishing levels of realism. You could match the lighting with archival footage. The audio could replicate not just voices, but room acoustics. The uniforms would be correct down to the details of the stitching.</p><p>Despite the technical plausibility, the video was broadly rejected.</p><p>Not universally. A significant minority believed it, shared it, and amplified it. But across political lines, most viewers sensed something was off.</p><p>Not because of compression artifacts. 
</p><p>Not because the faces weren&#8217;t quite right.</p><p>But because it violated the narrative constraint. It didn&#8217;t fit institutional reality or plausible procedure. It wasn&#8217;t how people believe power behaves in the real world.</p><p>Humans are far from perfect as lie detectors. We&#8217;re inconsistent and biased, often overconfident. But we are sensitive to implausibility. We rely on whether the context makes sense - on friction, inertia, institutional continuity - to judge what&#8217;s real.</p><p>We don&#8217;t judge pixels.</p><p>We evaluate whether the event makes sense inside the systems we think we understand.</p><p>It&#8217;s an uneven sensitivity. And it can be misled or overridden by partisanship, repetition, or desire.</p><p>But our sensitivity to bullshit doesn&#8217;t need to fail everywhere for the failures to be destabilizing. </p><p>Perception alone isn&#8217;t the battleground.</p><p>Constraint is.</p><div><hr></div><h3>The deepfake inversion</h3><p>Early deepfakes weren&#8217;t all that deep. They glitched. Faces slid. People had too many hands, or a weird elbow.</p><p>The next generation will fail situationally.</p><p>The most convincing synthetic media is not going to be the most spectacular. It will aim to be mundane. It will be modest. It&#8217;s going to try to mimic constraint.</p><p>We can already see that in some fraud attempts:</p><ul><li><p>Slightly distorted audio</p></li><li><p>Compressed video</p></li><li><p>Footage that looks handheld rather than cinematic</p></li></ul><p>Attackers are learning that imperfection now increases, rather than reduces, plausibility.</p><p>Perfection triggers suspicion. 
Constraint is more convincing.</p><div><hr></div><h3>Constraint as authenticity</h3><p>Historically, constraint has been accidental:</p><ul><li><p>Film grain</p></li><li><p>Static</p></li><li><p>Limited bitrates and compression</p></li><li><p>Background noise</p></li></ul><p>But these were artifacts that came from the limits of the medium.</p><p>Now, given the near-infinite generative capacity, those artifacts can be created synthetically.</p><p>We can&#8217;t rely on visual or audio imperfection as proof.</p><p>So we need a constraint that is <em>bound to the conditions of capture</em>.</p><p>That means changing how we think about media authenticity.</p><p>No longer:</p><blockquote><p>Does this look real?</p></blockquote><p>But: </p><blockquote><p>Was this captured under verifiable, constrained conditions?</p></blockquote><p>Authenticity becomes a question of provenance, not just of optics. </p><div><hr></div><h3>Certifying constraint</h3><p>Imagine a defined and scoped &#8220;human-origin&#8221; channel for official communication.</p><p>Some possible characteristics:</p><ul><li><p>Hardware-bound capture</p></li><li><p>Fixed compression profiles</p></li><li><p>Real-time signing</p></li><li><p>Mandatory artifact retention</p></li><li><p>No post-processing or enhancement</p></li></ul><p>The output would be clean. But it would not be optimized for aesthetics. And it would not be flexible in its usage.</p><p>It would be certified by design.</p><p>The verifiable limitations would prove the origin.</p><p>Its visible limits become as much a feature as a flaw.</p><div><hr></div><h3>The adversarial response</h3><p>This isn&#8217;t going to be a stable situation. It&#8217;s already in flux.</p><p>The best synthetic media systems will  incorporate constraint modeling. 
They&#8217;ll simulate compression, inject sensor noise, try to mimic hardware signatures.</p><p>The arms race will go on.</p><p>When authenticity can&#8217;t rest on aesthetics and quality, it must be part of infrastructure.</p><ul><li><p>Chain of capture and possession</p></li><li><p>Cryptographic signing</p></li><li><p>Hardware identity and watermarks</p></li><li><p>Transmission logs</p></li></ul><p>It will become a matter not of the visible imperfections themselves, but of proving that those imperfections came from the circumstances of physical capture.</p><div><hr></div><h3>Beyond pixels</h3><p>We&#8217;re part of a broader historical pattern.</p><p>When imitation is near-effortless, authenticity migrates from the surface layer to the origin. Classical art has long operated this way. A painting&#8217;s credibility is not determined by how convincing it looks, but by its provenance - who created it, who handled it, whether the materials used align with its supposed era, whether its lineage can be traced.</p><p>A perfect forgery without history is still a forgery.</p><p>Digital media is moving in the same direction.</p><p>As synthetic generation becomes trivial, our perception collapses as a useful test. A video can be flawless and it can be false. We can be persuaded by an audio clip that&#8217;s entirely invented. Without any traceable origin, media becomes an assertion without evidence.</p><p>Constraint cannot remain based on the aesthetic. Constraint based on social trust - news media, government sources, institutions - is eroding. Constraint must be infrastructural. A digital equivalent of checking the canvas grain and pigment composition - the trace of the capture that&#8217;s embedded in the artifact itself.</p><p>In a world where anything can be rendered, the future of authenticity isn&#8217;t higher resolution.</p><p>It may be certified imperfection.</p><div><hr></div><h4>Further reading:</h4><ul><li><p>Kara-Yakoubian, M. 
<em><a href="https://www.psypost.org/new-psychology-research-reveals-the-bullshit-blind-spot/">New psychology research reveals the &#8220;bullshit blind spot&#8221;</a></em>. PsyPost, May 2023.</p></li><li><p>Villasenor, J. <em><a href="https://www.brookings.edu/articles/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/">Artificial intelligence, deepfakes, and the uncertain future of truth</a></em>. The Brookings Institution, Feb 2019.</p></li></ul><p><em>Photo by <a href="https://unsplash.com/@lajaxx?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">JACQUELINE BRANDWAYN</a> on <a href="https://unsplash.com/photos/people-in-black-suit-jacket-standing-in-front-of-brown-wooden-framed-wall-art-cEqYGNEuX_A?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[The costs of the default]]></title><description><![CDATA[Why the economics of inference are dissolving shared baselines faster than design could.]]></description><link>https://www.robin-cannon.com/p/the-costs-of-the-default</link><guid 
isPermaLink="false">https://www.robin-cannon.com/p/the-costs-of-the-default</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 10 Mar 2026 15:03:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bda77c0d-14bf-4644-b7a4-bb946c3e2418_5530x3687.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A couple of weeks ago I wrote about the death of the default. The baseline experiences we share - in the digital sphere, but more broadly as a community and as a society - are eroding as systems become more adaptive. It&#8217;s cultural. Defaults aren&#8217;t just convenient design choices; they&#8217;re part of how we make all kinds of systems legible, contestable, and shared.</p><p>It&#8217;s happening because personalization isn&#8217;t expensive anymore.</p><p>It used to be. A static interface everyone had to use wasn&#8217;t just a choice, it was a necessary budget decision. Customization means more design work, more engineering, more testing. More cost. Our &#8220;default&#8221; was an economic constraint. But we dressed it up as a deliberate choice. One solution, simple enough for everyone.</p><p>Now our cost constraint is disappearing.</p><p>All the major cloud providers are investing heavily in hardware built specifically to make inference cheaper. In late January, Microsoft unveiled Maia 200 - a custom-built chip for AI inference that delivers 30% better performance per dollar than previous hardware. It&#8217;s getting cheaper to generate a personalized response, and that isn&#8217;t going to stop.</p><p>Once we get to that point, serving everyone the same thing isn&#8217;t a neutral choice. It starts to become actively wasteful. Why ignore contextual intelligence <em>you already have</em>? 
It&#8217;s inefficient not to personalize.</p><p>That changes some basic assumptions about how we create experiences.</p><p>Settings pages - where you configure preferences and forget about them - stop making sense. The system can just infer what you need. The experience gets generated. And if every user is getting a different experience, how do we do quality assurance? It&#8217;s not a check of a single interface. It becomes something statistical. It isn&#8217;t &#8220;does this screen work?&#8221; but &#8220;do all these different outputs stay within acceptable bounds?&#8221;</p><p>It&#8217;s still political. Defaults are political by nature - consciously and unconsciously, they encode assumptions about who users are and what they need. If inference replaces design defaults, the political decisions move somewhere harder to see. They&#8217;ll bury themselves in the training data, in system prompts. Much less visible. Harder to challenge.</p><p>Defaults aren&#8217;t just about simplification. Even with their unavoidable flaws, they give people a shared experience to point at, argue about, and hold accountable. If a default discriminates, people can do something about it. When defaults dissolve, that shared reference dissolves too.</p><p>Now it&#8217;s being dissolved by economics. It&#8217;s not necessarily a conscious decision on anyone&#8217;s part. That makes our usual responses - a new design pattern, a framework, a set of guidelines - less effective at mitigating the issue.</p><p>We can&#8217;t bolt a solution on later. Whatever replaces the social function of defaults has to be built into the infrastructure itself.</p><div><hr></div><h4>Further reading: </h4><ul><li><p><em><a href="https://www.robin-cannon.com/p/death-of-the-default">Death of the default</a></em> - on shared reality in a world of adaptive systems.</p></li><li><p>Guthrie, S. 
<em><a href="https://blogs.microsoft.com/blog/2026/01/26/maia-200-the-ai-accelerator-built-for-inference/">Maia 200: The AI accelerator built for inference</a></em>. Microsoft Blog, Jan 2026.</p></li></ul><p><em>Article photo by <a href="https://unsplash.com/@alexkixa?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Alexandre Debi&#232;ve</a> on <a href="https://unsplash.com/photos/macro-photography-of-black-circuit-board-FO7JIlwjOtU?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The system builds what the system builds]]></title><description><![CDATA[Conway's Law, and the structures we find so difficult to escape]]></description><link>https://www.robin-cannon.com/p/the-system-builds-what-the-system</link><guid isPermaLink="false">https://www.robin-cannon.com/p/the-system-builds-what-the-system</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 03 Mar 2026 16:01:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e984fb52-179e-42ab-8b6e-9e2ecfa12b25_4147x2761.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We all 
know the quote. Widely misattributed, endlessly recycled.</p><blockquote><p>Those who cannot remember the past are condemned to repeat it.</p></blockquote><p>It&#8217;s deployed as a warning. A call to learn. To notice, and to break the pattern.</p><p>I don&#8217;t think we ever do. Not really.</p><p>We repeat it constantly. People create the crisis. And then - and I think this is the part that gets left out - we fix it.</p><p>It&#8217;s not clean. It has a cost. There&#8217;s likely significant suffering along the way. But humans, faced with a crisis of their own making, are - at the truly critical point - remarkably ingenious. Remarkably dedicated. We find a way through. And we emerge having changed something structural about the way we operate.</p><p>We don&#8217;t learn from history. We live it again, solve it again, in some new form.</p><p>What&#8217;s curious about today is that the &#8220;new form&#8221; is moving faster than any previous version. And we have, in extraordinary detail and in a hundred ways, already imagined what that might look like.</p><p>Science fiction has spent decades drawing us maps of all the ways things might fail. Systems amplifying the worst of us. Institutions too rigid to adapt to change. Technology outpacing wisdom. We read all those stories. They were thrilling, cautionary, resonant.</p><p>Then we built the things anyway.</p><p>Not out of ignorance. That&#8217;s what makes it interesting. Out of something harder to name.</p><div><hr></div><p>In 1967 a computer scientist, Melvin Conway, made an observation. Something simple, obvious in retrospect. The kind of thing that becomes a general &#8220;law&#8221;.</p><blockquote><p><em>Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.</em></p></blockquote><p>If your teams don&#8217;t talk to each other, your software won&#8217;t either. 
The architecture of what you build reflects the architecture of how you&#8217;re arranged.</p><p>And, while it&#8217;s usually expressed in the context of software development, it applies beyond it. Conway&#8217;s Law is a law about what complex systems reflect back about the structures that make them.</p><div><hr></div><p>Social media. Platforms built by teams optimized for engagement. Inside companies measured on growth. Inside economies that treat growth as unqualified virtue.</p><p>The platforms didn&#8217;t fracture communities and amplify outrage through malice. They did so because the structures of the organizations that built them optimized for that result. The product reflects the organization. The organization reflects its incentives. The incentives reflect something deep - perhaps unexamined - about how we agree to measure progress.</p><p>That&#8217;s Conway&#8217;s Law at societal scale.</p><p>Climate change is the same pattern, older and slower. It&#8217;s not a failure of knowledge. The science has been clear for decades. But our economies and governments are built on an assumption that carbon extraction and prosperity are inseparable. That assumption holds even when individuals, including those inside the institutions, know better. </p><p>The system builds what the system is.</p><p>There&#8217;s a foundational assumption that shapes both of these. Growth is always good. More is always the right way to go. Progress is expansion. And it&#8217;s so deeply embedded in our institutional structures that it shapes the output of everything those institutions touch - whether we choose it or not.</p><div><hr></div><p>AI is reflecting these patterns as well. I observe it in my professional world and in the broader cultural conversation.</p><p>Organizations are expending a great deal of energy around AI adoption. Companies, governments, schools, media. 
Mobilizing to &#8220;do AI.&#8221; </p><p>Because &#8220;do AI&#8221; is, at its core, a growth assumption.</p><p>The belief may not be stated so plainly, but it is structurally present. AI adoption leads to more. More efficiency. More output. More competitive advantage. More revenue. The same foundational logic is just being applied to a new capability.</p><p>But the mobilization is around an imperative that hasn&#8217;t been properly examined. What is &#8220;do AI&#8221;? What are we actually organizing toward, does AI help us there, and does it get us there faster?</p><p>That gap is a Conway&#8217;s Law problem.</p><p>Any institution that isn&#8217;t clear what it&#8217;s optimizing for can&#8217;t give clear guidance for any system - human or artificial - to act coherently. And AI doesn&#8217;t resolve that ambiguity. It inherits it. It generates output - fast, confident, at potentially massive scale - that reflects and amplifies the institution&#8217;s unexamined assumptions back into the world.</p><p>It&#8217;s the fracturing logic of social media. But faster.</p><p>It&#8217;s the growth-at-all-costs assumption. Embedded in the models we use to accelerate.</p><p>Following the science-fiction maps we drew.</p><p>When people talk about the possibility of an AI crash - a real conversation with varying predictions - I think this underlies the anxiety. It&#8217;s not about a technical failure of the models. It&#8217;s that we don&#8217;t even know what we&#8217;re building toward. And technology this fast and this capable will surface that uncertainty in ways we&#8217;re not prepared for, and may not be able to contain.</p><div><hr></div><p>Conway&#8217;s Law is a description, not an unavoidable imperative.</p><p>There&#8217;s a corollary - the Inverse Conway Maneuver. You can deliberately restructure an organization to produce different outputs. If you change how people communicate, change the underlying structural rewards, it will change what the system builds. 
It works. It&#8217;s been done.</p><p>Humans, more broadly, have changed our structures before. The ozone layer. Public health transformations like the creation of the UK&#8217;s NHS. Moments where crisis became legible enough, and the desire for action concentrated enough, that the pressure to reorganize overcame the powerful inertia of existing structures. </p><p>We repeated history. </p><p>We created the crisis. </p><p>And then we fixed it. At cost, with difficulty, and emerged with something structurally different on the other side.</p><p>The question that doesn&#8217;t have an answer yet is what kind of crisis AI might be. Will it become something legible enough and concentrated enough to mobilize structural change? Or will it follow the social media pattern: diffuse and gradual? Damage accumulating slowly enough that it embeds itself so deep in our structures that change becomes infinitely more difficult.</p><p>I have faith in humans. I doubt institutions - their ability to act before a crisis rather than inside it.</p><p>Maybe that&#8217;s the deal. Maybe it always will be.</p><p>We don&#8217;t learn. We repeat. We solve. We move on. And the question is whether we find the way through before or after the most difficult parts.</p><div><hr></div><h4>Further reading:</h4><ul><li><p>Kobetz, R. <em><a href="https://kobewan.substack.com/p/what-the-system-rewards">What The System Rewards</a></em>. Defining Experience, Feb 2026.</p></li><li><p>Stoermer, T. <em><a href="https://www.tadstoermer.com/tads-blog/understanding-santayanas-warning-the-price-of-forgetting-the-past">Understanding Santayana&#8217;s Warning: The Price of Forgetting the Past</a></em>. Tad Stoermer&#8217;s Resistance History, Nov 2025.</p></li><li><p><em><a href="https://en.wikipedia.org/wiki/Conway%27s_law">Conway&#8217;s Law</a></em>. 
Wikipedia.</p></li></ul><p><em>Article photo by <a href="https://unsplash.com/@jkkantakbailey?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Jessica Kantak Bailey</a> on <a href="https://unsplash.com/photos/a-large-room-with-a-chandelier-in-it-WMCvwBTWSi0?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Algorithmic interfaces]]></title><description><![CDATA[When systems decide what you're allowed to do]]></description><link>https://www.robin-cannon.com/p/algorithmic-interfaces</link><guid isPermaLink="false">https://www.robin-cannon.com/p/algorithmic-interfaces</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 10 Feb 2026 16:02:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0840312b-d301-42ad-b3bc-0f85b9ba6e02_4200x2625.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We&#8217;ve been getting used to algorithmic content for at least a decade.</p><p>It&#8217;s your social feed when it changed from chronological to &#8220;most relevant&#8221;. It&#8217;s the stories that rise to the top. 
And which stories are disappeared for you. Our platforms have taught us that different people live in very different versions of the same system. Even if they&#8217;re standing next to each other.</p><p>That was only the beginning.</p><p>What&#8217;s next isn&#8217;t serving algorithmic content. It&#8217;s serving algorithmic interfaces.</p><p>That&#8217;s another hugely consequential shift.</p><div><hr></div><h3>From content to experience</h3><p>Algorithmic content changes what you can <em>see</em>.</p><p>Algorithmic interfaces change what you can <em>do</em>.</p><p>I&#8217;m not talking primarily about look and feel. Not color themes, branding, or configurable dashboards. It&#8217;s more than just a flexible version of a UI that lets us set our own preferences.</p><p>These are systems that infer, in real time, which actions, paths, and affordances you should even have available to you.</p><p>The interface becomes less a surface that you navigate and more a negotiation - between your intent, the inferred context, behavioral probability, and institutional constraints.</p><p>It&#8217;s not just two users seeing different things.</p><p>It&#8217;s users who might not be offered the same possibilities.</p><div><hr></div><h3>Interfaces stop being a single &#8220;thing&#8221;</h3><p>There&#8217;s a comfortable familiarity around traditional interfaces. They&#8217;re describable.</p><p>I can point at a screen and say <em>this is how this works</em>. I can draw it and document it. So can you - and we&#8217;ll recognize we&#8217;re talking about the same thing. We can complain about it together. Even when they&#8217;re complex, there&#8217;s a baseline stability that makes the interface legible.</p><p>Algorithmic interfaces erode that stability.</p><p>If we&#8217;re letting systems assemble our experiences dynamically - based on how the system infers our goals, prior behavior, and our predicted competence - there&#8217;s no single &#8220;what it looks like&#8221;. 
We don&#8217;t have a canonical version.</p><p>This isn&#8217;t an interface that&#8217;s designed once (though it will likely have a design philosophy behind it). It&#8217;s continuously decided.</p><p>This can make it very effective. It can also make it harder for us to see.</p><div><hr></div><h3>Not configurable</h3><p>Let&#8217;s be clear and precise about what this is <em>not</em>.</p><p>A configurable interface is something stable that the <em>user</em> has the capability to adjust. They make explicit choices about their preferences. Options are visible even if they&#8217;re unused. There is a shared set of underlying capabilities.</p><p>An algorithmic interface can invert that.</p><p>Now the system is adjusting itself before the user even arrives. So the choices are inferred and not chosen. </p><p>The system is deciding how to personalize on your behalf.</p><p>The interface is steering - and constraining - your behavior, whether you intend it to or not.</p><div><hr></div><h3>Hidden options, unequal paths</h3><p>I think one of the most uncomfortable implications of algorithmic UI is this:</p><p>Some options may never appear for some people.</p><p>They&#8217;re not disabled in a settings panel. The user hasn&#8217;t actively opted out. But the system has inferred - correctly or not - that they&#8217;re unnecessary for that user. Unsafe, too complex, irrelevant, or unlikely to succeed.</p><p>That&#8217;s a subtle shift in power. The system isn&#8217;t neutral.</p><p>If the option is invisible, how does anyone know how to ask for it? When those paths differ in quiet ways, how can you compare? Two people could use the same product, have the same aims, arrive at different outcomes - and not be able to explain why.</p><p>Did it choose the correct context? Was it structural bias? Was it a protective action? The interface doesn&#8217;t necessarily offer us the answers. 
</p><div><hr></div><h3>It may be an inevitable shift</h3><p>This isn&#8217;t a drive for aesthetic ambition. It&#8217;s a drive for efficiency.</p><p>AI systems are better at inferring than instructing. They predict what&#8217;s next, identify patterns, and minimize friction. Static interfaces can create costly friction when context is volatile.</p><p>There are real advantages to algorithmic UI:</p><ul><li><p>Reduced cognitive load</p></li><li><p>Faster task completion</p></li><li><p>Fewer dead ends</p></li><li><p>Adaptive accessibility</p></li><li><p>Interfaces that meet users where they are</p></li></ul><p>This is rational from a system perspective. If people only need a small part of the space, why expose all the possibilities and risk paralysis of choice? Optimize locally rather than force everyone through the same pathways.</p><p>Users benefit from this:</p><ul><li><p>Onboarding that doesn&#8217;t confuse new users by showing everyone advanced options.</p></li><li><p>&#8220;Safe mode&#8221; interfaces that can be triggered for certain behavioral profiles.</p></li><li><p>Dense control surfaces for expert users, with more guidance for everyone else.</p></li><li><p>Interfaces that change between sessions based on learned and inferred confidence.</p></li></ul><p>It&#8217;s not a theoretical appeal - it can be a genuinely improved experience.</p><div><hr></div><h3>Encoding preferences within the structure</h3><p>Algorithmic interfaces don&#8217;t eliminate bias. They can operationalize and entrench it.</p><p>Every inference made is based on some model of what success, competence, risk, and clarity look like. And those models are based on training data, cultural assumptions, and historical precedent. 
Just like language models today are trained disproportionately in English compared to other languages, and so become more fluent in English contexts.</p><p>So preferences become structural.</p><ul><li><p>Certain communication styles are seen as &#8220;confident&#8221;.</p></li><li><p>Some interaction patterns are read as &#8220;efficient&#8221;.</p></li><li><p>These approaches to problem-solving are &#8220;normal&#8221;.</p></li></ul><p>And others are quietly discouraged. Not explicitly, but because the model tends to omit them.</p><p>Algorithmic UI shapes what gets shown, it doesn&#8217;t announce what&#8217;s forbidden. It&#8217;s tough to detect these preferences and biases. The system can feel helpful and fair, even if it&#8217;s nudging users towards certain behaviors and outcomes that it thinks are the best ones.</p><p>Over time, this reinforces existing advantages. The system learns who it works well for.</p><p>So it doesn&#8217;t just adapt to users, it starts to adapt users to itself.</p><div><hr></div><h3>What we need to see</h3><p>As interfaces become more inferred, our challenges are going to shift.</p><p>We can&#8217;t simply ask about whether outcomes are fair, or whether the system performs well, on average. We need to understand and reason about the underlying possibilities - which paths were offered, which were hidden or never considered, and why.</p><p>It&#8217;s harder to define who or what is accountable, because we&#8217;re no longer pointing to a single interface that &#8220;doesn&#8217;t work&#8221; for some people. Critique is complex because the experience is more provisional. How do we apply governance when the system isn&#8217;t presenting consistent surfaces to view?</p><p>Algorithmic UI is a new way of building and presenting products. It&#8217;s also a new way of mediating the end user&#8217;s agency. 
Deciding, moment by moment, what the system will allow you to do.</p><p>And our capacity to assess needs to expand from understanding what just happened, to what else might have been possible.</p><div><hr></div><h4>Further reading:</h4><ul><li><p><em><a href="https://www.robin-cannon.com/p/death-of-the-default">Death of the default</a></em> - on shared reality in a world of adaptive systems.</p></li><li><p>Walsh, D. <em><a href="https://mitsloan.mit.edu/ideas-made-to-matter/generative-ai-isnt-culturally-neutral-research-finds">Generative AI isn&#8217;t culturally neutral, research finds</a></em>. MIT Sloan School of Management, Sep 2025.</p></li><li><p>Nielsen, J. <a href="https://www.nngroup.com/articles/ai-paradigm/">AI: First New UI Paradigm in 60 Years</a>. NN/G, Jun 2023.</p></li></ul><p><em>Article photo by <a href="https://unsplash.com/@lazycreekimages?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Michael Dziedzic</a> on <a href="https://unsplash.com/photos/blue-and-white-water-wave-nbW-kaz2BlE?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Death of the 
default]]></title><description><![CDATA[Shared reality in a world of adaptive systems]]></description><link>https://www.robin-cannon.com/p/death-of-the-default</link><guid isPermaLink="false">https://www.robin-cannon.com/p/death-of-the-default</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 03 Feb 2026 16:02:53 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/755be7b3-2ce5-4d84-a60b-74029c8ac0e1_4879x3168.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Defaults make society legible.</p><p>They exist everywhere. Office hours, queuing systems, broadcast schedules, form layouts, school terms, work weeks, what &#8220;normal&#8221; looks like. They may have a logical origin, but they&#8217;re not always optimal and they certainly aren&#8217;t neutral. But we share them.</p><p>Defaults give us a common starting point. They create a baseline that we can orient around, or push back against, or discuss. They give us a certain expectation that we can use to coordinate - making systems predictable enough for us to live in.</p><p>Defaults are a simple convenience. They&#8217;re also social infrastructure.</p><p>They&#8217;re quietly disappearing.</p><div><hr></div><p>It&#8217;s not an instant vanishing. There&#8217;s rarely an announcement or a policy change. Not a single moment where you can point to a decision to remove them. But as different systems become more adaptive, responsive, and personalized, defaults erode.</p><p>This can feel like progress. Why should we all experience the same thing when systems can now adjust to individual needs? Why are we trying to force a single path that doesn&#8217;t account for our context? If optimization is available then why do we accept friction?</p><p>But there are unexpected impacts when we no longer have a shared baseline.</p><p>If all our experiences are adapting, it&#8217;s difficult to locate an idea of &#8220;normal&#8221;. 
There&#8217;s still coordination, but it lacks a common reference. So we start to think more in terms of probabilistic expectations rather than naturally shared ones.</p><div><hr></div><p>Defaults do important work.</p><ul><li><p>They make institutions explainable.</p></li><li><p>They help people to help each other navigate systems.</p></li><li><p>They make it possible to critique with a shared reference.</p></li></ul><p>If you and I have matching experiences, we can talk about them. If there&#8217;s something wrong, we can identify it together. Defaults mean we can say that <em>this</em> is what the system does.</p><p>But when systems become more adaptable, that shared reference is weakening.</p><p>Two people can go through the same process but with different experiences, and different outcomes. And neither of them may be able to explain why. Was it an intentional difference? Contextual? Personal? Experimental?</p><p>The system becomes more difficult to see.</p><div><hr></div><p>This is a cultural issue as much as it is a technical one.</p><p>Defaults are one of the ways in which societies created shared expectations. While they don&#8217;t remove (and can entrench) power imbalances, they also make them visible. If we all have a shared set of rules, then those rules can be challenged. A predictable process can be contested.</p><p>Adaptive systems replace visible sameness with a more opaque differentiation.</p><p>That means that our understanding of fairness shifts from something we experience, to something that needs different - statistical - justification. Are we treating people well in the aggregate, rather than thinking in individual terms?</p><p>But how do you contest a system that you can&#8217;t clearly see?</p><div><hr></div><p>These defaults also have a quieter, emotional, effect.</p><p>If we understand the default, however imperfect, it can help to instill a sense of belonging. Those rough edges you face - everyone else faces them too. 
I know where I stand - even if I don&#8217;t like it.</p><p>By dissolving defaults, we open up more individualized experiences. Obviously that brings advantages, but it can also be isolating. It&#8217;s hard to compare notes. Is that thing broken, or just different? Is my experience unusual, or merely unique?</p><p>There&#8217;s nothing obviously wrong, but there&#8217;s also a lack of clear stability.</p><div><hr></div><p>This all gets thrown into even more stark relief when it comes to software.</p><p>Interfaces are often the clearest expressions of default. Here&#8217;s a product version. It&#8217;s easy to describe. I can draw it, point to it, and say <em>this is how this works</em>.</p><p>But interfaces are becoming less like fixed surfaces. They&#8217;re starting to behave more like a negotiation - assembled in real time from data, context, probability, and intent.</p><p>There isn&#8217;t a consistent answer any more to &#8220;what does it look like?&#8221;</p><p>Again - this isn&#8217;t bad, but it is different. Adaptive systems can be much more inclusive, efficient, and responsive than the static alternatives. But they also remove that shared point of reference.</p><div><hr></div><p>My point is not to wax nostalgic.</p><p>Defaults exclude people. They ignore differences. They entrench the preferences of the powerful. And we&#8217;ve spent much of the last decade or more rightly trying to challenge the idea that &#8220;one size fits all&#8221; can ever be fair.</p><p>So my problem isn&#8217;t that defaults are disappearing.</p><p>It&#8217;s that they&#8217;re disappearing without our thinking about what replaces the social and cultural function that defaults have served.</p><p>Adaptive systems are great at optimization. They are less great at maintaining shared reality.</p><div><hr></div><p>More of the world is becoming mediated by inference rather than direct instruction. Static stability is giving way to adaptive probability.</p><p>Our systems will still work. 
And, often, work much better. But they also work differently. They have no fixed point to default to. So the default is something inferred - provisional, contextual, and quietly variable.</p><p>Defaults are anchors not necessarily because they&#8217;re correct - but because they&#8217;re there.</p><p>And those anchors are dissolving. We&#8217;re replacing them with systems that decide moment by moment what you should see, what your experience should be.</p><p>It&#8217;s not a question of whether the shift is coming. It&#8217;s already here.</p><p>The question is how we prepare ourselves for a culture where &#8220;default&#8221; isn&#8217;t something we necessarily share, but is instead something the system quietly decides on our behalf.</p><div><hr></div><h4>Further reading</h4><ul><li><p>Clinehens, J. <em><a href="https://www.choicehacking.com/2020/11/09/what-is-the-default-effect/">Silent Decision-Makers: How Defaults Guide Decisions</a></em>. ChoiceHacking, Nov 2020.</p></li><li><p>Sosa, D. <em><a href="https://blogs.lse.ac.uk/psychologylse/2022/01/20/a-default-life/">A Default Life</a></em>. LSE Psychological &amp; Behavioral, Jan 2022.</p></li><li><p><em><a href="https://figr.design/blog/adaptive-defaults-when-your-product-knows-you-better-than-you-know-yourself">Adaptive Defaults: When Your Product Knows You Better Than You Know Yourself</a></em>. 
figr, Oct 2025.</p></li></ul><p><em>Article photo by <a href="https://unsplash.com/@chaozzy?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Chaozzy Lin</a> on <a href="https://unsplash.com/photos/red-powder-in-three-clear-drinking-glasses-4DAzYHVEqd8?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Why don't we talk to our computers?]]></title><description><![CDATA[The technology has arrived. Our comfort levels haven't caught up.]]></description><link>https://www.robin-cannon.com/p/why-dont-we-talk-to-our-computers</link><guid isPermaLink="false">https://www.robin-cannon.com/p/why-dont-we-talk-to-our-computers</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 30 Dec 2025 16:00:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/44eb3156-d5fc-4a92-b0d3-3fb0dfc156fa_1536x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In <em>Star Trek IV: The Voyage Home </em>there&#8217;s a scene that I still remember well. 
Scotty, having been transported back to the mid 1980s, sits down in front of a computer and speaks to it. Nothing happens. He&#8217;s offered a mouse, picks it up and cheerfully holds it to his mouth.</p><blockquote><p>&#8220;Hello, computer.&#8221;</p></blockquote><p>Still nothing, until he&#8217;s offered a keyboard.</p><p>The gag works because it&#8217;s implying speech is the obvious interface of the future. Only primitive technology would still need us to type, right?</p><p>That movie was nearly forty years ago. We have natural language AI, speech recognition, microphones, earbuds, and powerful processing models. Most of that future is already here.</p><p>And when it&#8217;s time to interact with our computers, we still place our hands on the keyboard. To type, write, and edit. It feels kinda weird to talk to our computer, almost never the default (and we think it strange if someone else strays from that default). Especially outside our homes.</p><p>Speech technology hasn&#8217;t failed. We use it all the time. But it seems like it&#8217;s mostly in private. I tell Alexa to set a timer while I&#8217;m cooking. We change the music, adjust the thermostat, or dim the lights. All very domestic interactions, whispered between our own walls. Nobody&#8217;s watching, and the stakes are low. And nobody really cares what we&#8217;re saying to our appliance.</p><p>But the moment we move into shared spaces, we fall silent. Very few people dictate text messages on the train (and we often look strangely at the people who do). We&#8217;re not seeing people walking into a library and saying &#8220;summarize this article&#8221; in a clear and confident tone. Even in our most open-plan, designed-for-collaboration offices, everyone is quietly typing. </p><p>We&#8217;re speaking to machines at home. We&#8217;re typing to them in public.</p><p>So if it&#8217;s not a technological problem, it must be a cultural one.</p><p>Speech is expressive, and it&#8217;s social. 
Sometimes it can be as likely to reveal uncertainty as intent. Typing is private, and it lets us hesitate, erase. If I mistype a sentence it can be erased, but saying something dumb by accident can hang in the air. Speech is performing, typing is more concealed. And I think for most people, professional lives are more than a little concealed.</p><p>I&#8217;ve heard the argument that speech interfaces lack precision. Work needs exactness, and language is messy. But this age of AI interaction - where our engagement <strong>can</strong> be imprecise and metaphorical - seems to disprove that argument. We use adjectives, mood, fragments. It&#8217;s one of the biggest reasons why AI can feel approachable. We tell it what we want, not the specifics of how to do it. And the language models can cope by inferring, even guessing. </p><p>My guess is that speaking leaves us feeling more exposed. When we&#8217;re talking to a machine, other people might overhear. And that&#8217;s more embarrassing, somehow, than if we&#8217;re overheard talking to another person. We worry about sounding foolish - not to the computer we&#8217;re talking to, but to the real person who&#8217;s walking past at the same time.</p><p>Technological improvement isn&#8217;t going to help speech become the dominant interface. It needs our relationship to machines to change. I converse with people, I instruct objects. We avoid speech because computers still feel like tools. If they begin to feel like collaborators, speech might follow. But it might also raise further anxiety.</p><p>Scotty&#8217;s joke wasn&#8217;t about the keyboard or the mouse. It was about a future in which speaking to a computer wasn&#8217;t just possible, it was entirely unremarkable. We&#8217;ve got that technology. We don&#8217;t have anything like that level of comfort. Talking to a machine still feels like talking to yourself. 
We can do it, we just don&#8217;t like to do it where anyone else can hear.</p><div><hr></div><h4>Further reading</h4><ul><li><p>Banks, David. <em><a href="https://thesocietypages.org/cyborgology/2016/10/10/why-we-are-uncomfortable-talking-to-our-computers/">Why We Are Uncomfortable Talking to Our Computers</a></em>. The Society Pages (Univ. of Minnesota), Oct 2016.</p></li><li><p><em><a href="https://www.youtube.com/watch?v=QpWhugUmV5U">Great Moments in Star Trek History - Hello, Computer</a></em> (YouTube)</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[What if the machines don't end us?]]></title><description><![CDATA[How resisting the forces seeking to control AI opens futures beyond the apocalyptic.]]></description><link>https://www.robin-cannon.com/p/what-if-the-machines-dont-end-us</link><guid isPermaLink="false">https://www.robin-cannon.com/p/what-if-the-machines-dont-end-us</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 02 Dec 2025 16:02:58 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0ffce122-15e8-41e4-b62a-d208bea033fb_1536x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A recent piece in The 
Guardian explored popular culture depictions of <a href="https://www.theguardian.com/tv-and-radio/2025/nov/25/years-and-years-black-mirror-tv-show-depictions-ai-repurcussions">how AI might &#8220;end us all&#8221;.</a> This is an increasingly common framing - AI as a mathematical horseman of the apocalypse: a technology that&#8217;s accelerating so quickly, and consuming so much energy, that the inevitable consequence is either ecological collapse or a social unraveling. It&#8217;s an argument that&#8217;s not without merit. But The Guardian&#8217;s piece, light-hearted though it was, provoked me into thinking of a different kind of question.</p><blockquote><p><strong>What if AI doesn&#8217;t end us?</strong></p><p><strong>What if AI forces forms of progress that we&#8217;ve avoided for decades?</strong></p></blockquote><p>I&#8217;m not taking an optimistic viewpoint for the sake of it. The real risks exist - this isn&#8217;t a rebuttal so much as an attempt to widen the possibility space. It&#8217;s always easy to catastrophize, but that shouldn&#8217;t automatically be the last word.</p><p>I considered a counterfactual to the doom narrative: learning from technology&#8217;s always-messy history, asking how the same systems we fear might also be the ones that drive us forward. </p><div><hr></div><h3>1. The energy reckoning</h3><p>For all the imaginative stories about AI itself choosing to destroy us, the primary anxiety right now is AI&#8217;s energy consumption.</p><p>Researchers have been warning us for months and years - today&#8217;s AI models require extraordinary power, and the demand is accelerating quickly. If you project our current models of usage forward, it becomes ecologically (and practically) unsustainable.</p><p>But linear projections like that are often wrong. 
Especially when you extrapolate from a fairly short period of technological inflection.</p><p>Existential pressure can change the shape of innovation.</p><ul><li><p>The ozone crisis saw one of the fastest global regulatory responses ever enacted.</p></li><li><p>Post-war industrialization has driven chemical standards and safety legislation.</p></li><li><p>Energy shocks have resulted in efficiency mandates and new research.</p></li></ul><p>It&#8217;s likely that AI will become the first digital environmental emergency. But it might also be a force large enough to make us do what slow climate collapse, and political and economic inertia, haven&#8217;t yet managed.</p><p><strong>AI can be a catalyst to truly accelerate clean energy.</strong></p><p>That&#8217;s not because all the tech companies are suddenly going to become benevolent environmental stewards. It&#8217;s just that they might not survive without solving the problem. </p><p>Pressure drives adaptation. AI will strain our energy resources, but it could also be the first technology that forces us to build the new resources that we actually need.</p><div><hr></div><h3>2. The return of the open web?</h3><p>The early promise of the internet has been commoditized. Monopolies and proprietary infrastructure have turned the world wide web into corporate fiefdoms. And AI centralization has the potential to further accelerate that trend.</p><p>But trajectories don&#8217;t have to move in a straight line, and there&#8217;s another - more hopeful - potential scenario.</p><p><strong>AI accidentally re-opens the web.</strong></p><p>There are two converging dynamics which I think might make this possible.</p><h4>AI will build more AI</h4><p>We&#8217;re seeing model architectures, training loops, better distillation techniques, and optimization routines all already being automated. 
And this could mean:</p><ul><li><p>The proliferation of smaller, powerful models</p></li><li><p>Community access to remixed, specialized systems</p></li><li><p>A plummeting barrier to entry</p></li></ul><p>That wouldn&#8217;t be a metaverse of corporate-owned silos. That would be a Cambrian-like explosion of new tools.</p><h4>Centralization is fragile</h4><p>We already see the following in the AI sphere:</p><ul><li><p>Models leaking</p></li><li><p>Diffusion of capabilities</p></li><li><p>Proprietary advances mirrored almost instantly by other networks</p></li></ul><p>At which point, it can start to become difficult for any one entity, or small group of entities, to contain the ecosystem.</p><p>There are parallels here to the ethos of the early web. It&#8217;s messy, anarchic, generative, remixed, and with an exploding number of stakeholders.</p><p>Dystopia doesn&#8217;t have to be the only direction of travel. The web might have closed off, become &#8220;owned&#8221;, but that wasn&#8217;t inevitable. And the same pressures for platform lock-in also create pressure for the alternative - profound openness.</p><div><hr></div><h3>Removing the barriers to creativity</h3><p>The cultural panic around AI centers on creativity - what it means, who owns it, what it&#8217;s worth.</p><p>Creativity isn&#8217;t a finite resource. The limitation is access to creative tools.</p><p>Technical shifts in human expression tend to be met with existential fears. Photography kills painting. Synthesizers kill music. Streaming kills cinema. Digital kills analog. </p><p>But we see a repeating pattern - human imagination expands to account for the changes in medium. And craft changes, too. The fear of makers that their skills might be devalued is fair and real. But craft recontextualizes, it doesn&#8217;t disappear. 
</p><p>Can we recognize what becomes possible when the <em>administrative tax</em> on creativity is removed?</p><ul><li><p>People who can&#8217;t draw can design</p></li><li><p>People who can&#8217;t code can build</p></li><li><p>People who can&#8217;t compose can orchestrate</p></li><li><p>People whose bodies or circumstances limited expression can now gain entirely new channels</p></li><li><p>Children can utilize their deeply creative minds with a fluency never before available</p></li></ul><p>Creativity increases when the gatekeepers fall away. Craft may refocus further on taste, coherence, defining intent, and curation.</p><p>AI doesn&#8217;t eliminate artists. It eliminates the scarcity model that decided who could become one.</p><p>And - because I like a connection to folklore - might this see a return of storytelling to the commons? Mythmaking can become shared, remixed, co-authored.</p><div><hr></div><h3>The rise of augmented labor</h3><p>We&#8217;re often presented with the argument that &#8220;AI will replace workers&#8221;. I think the story is more &#8220;AI changes what human work even is.&#8221;</p><p>Human value is relational.</p><ul><li><p>Judgement</p></li><li><p>Taste</p></li><li><p>Cultural context</p></li><li><p>Emotional intelligence</p></li><li><p>Synthesis</p></li><li><p>Narrative awareness</p></li><li><p>Ethics</p></li><li><p>Lived experiences</p></li></ul><p>That value isn&#8217;t just a nice-to-have. That value is the difference between good decisions and terrible, even destructive ones.</p><p>AI can simulate, or use pattern recognition to generate something plausible. It can&#8217;t situate that generation within a society, understand its impact. 
Only humans can do that.</p><p><strong>What if the future of labor is augmentation, not automation?</strong></p><ul><li><p>Imagine a union that&#8217;s built around the governance of an AI model, not just wages</p></li><li><p>Workers negotiating for an ownership stake in the tools that amplify their own value</p></li><li><p>Redistribution of productivity downward, rather than hoarding it upward</p></li></ul><p>That would be a meaningful, positive shift. AI wouldn&#8217;t erase work, but it would change it, and maybe (albeit the forever unfulfilled promise of technology to date!) give people their time back.</p><div><hr></div><h3>The danger is who governs, not AI itself</h3><p>The danger of AI isn&#8217;t that it will decide to end us, one way or another.</p><blockquote><p><strong>AI will magnify the power of whoever controls it</strong></p></blockquote><p>Authoritarian states. Corporations. Rogue actors. Militaries. Billionaires.</p><p>They&#8217;re already shaping the terrain, trying to land grab and take ownership.</p><p>But it makes for a reframed debate. It&#8217;s not about the technology; it&#8217;s about people, policy, access, and agency.</p><p>If AI remains centralized then I fear for future collapse. But if we can manage to democratize access to AI, then the future expands instead.</p><p>There is a huge risk, and one that malign actors are already working to bring about. But risk isn&#8217;t the same as destiny.</p><div><hr></div><h3>A wider horizon</h3><p>It&#8217;s weird, but apocalyptic narratives can actually be comforting - they feel inevitable, and so they relieve us of responsibility. 
If the machines are going to end us, then the only rational thing we can do is to run away.</p><p>But other stories deserve equal weight.</p><p><strong>AI might help us fix what we already broke.</strong></p><ul><li><p>Restoring ecosystems</p></li><li><p>Modeling climate intervention</p></li><li><p>Augmenting scientific discovery</p></li><li><p>Personalizing medicine</p></li><li><p>Expanding new forms of craft</p></li><li><p>Expanding who gets to participate in future imagination</p></li></ul><p>This is not guaranteed - not even close. It&#8217;s not automatic.</p><p>But collapse isn&#8217;t guaranteed, either.</p><p>The future is a branching corridor, and we shape our direction through decisions, governance, distribution and participation - as individuals and as groups.</p><p>If AI &#8220;ends us&#8221; then it&#8217;s because we let power continue to consolidate unchecked. Not because of the technology itself.</p><p>But if we can push back against that trend (and, make no mistake, that consolidation <em>is</em> the trend) then AI doesn&#8217;t have to be the end of the story. We can build cleaner energy, open infrastructure, expand creativity, and augment our labor - and begin a new story.</p><div><hr></div><h4>Further reading:</h4><ul><li><p>Fox, Jeremy. <em><a href="https://dynamicecology.wordpress.com/2020/12/10/the-worst-forecasting-failures-and-what-we-can-learn-from-them/">The worst forecasting failures and what we can learn from them</a>. </em>Dynamic Ecology, Dec 2020</p></li><li><p>Litvinets, Volha. <em><a href="https://www.ey.com/en_nl/insights/climate-change-sustainability-services/ai-and-sustainability-opportunities-challenges-and-impact">AI and Sustainability: Opportunities, Challenges, and Impact</a>. 
</em>EY Global, Nov 2024</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The lies we choose to know]]></title><description><![CDATA[Everyone has the facts, nobody shares the truth]]></description><link>https://www.robin-cannon.com/p/the-lies-we-choose-to-know</link><guid isPermaLink="false">https://www.robin-cannon.com/p/the-lies-we-choose-to-know</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 04 Nov 2025 16:02:12 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b0c8fc48-b388-41b6-98a5-b0f77f989ceb_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After I published <em><a href="https://www.robin-cannon.com/p/the-end-of-not-knowing">The end of not knowing</a></em>, my dad wrote to me:</p><blockquote><p>I don&#8217;t know how it deals with those - perhaps like your father - who will happily reinvent the truth, whether seriously or for a laugh. Not sure there aren&#8217;t many others with far greater power who will not only invent a truth but find others to not only endorse it, but believe it.</p></blockquote><p>It was a good thought. My earlier essay argued how we've lost the habit of uncertainty. 
We don&#8217;t linger in doubt long enough to imagine, to try testing our own thinking. But dad&#8217;s note pointed out that there are other things that can fill that vacuum when uncertainty disappears: invention.</p><p>And not always the fun kind.</p><p>Perhaps we&#8217;ve moved from a world of shared not-knowing to one that isn&#8217;t just a single certainty, but competing certainties. Search engines erased factual doubt. AI can smooth over logical uncertainty. Instead of bringing us closer to the truth, we&#8217;ve multiplied realities. The problem isn&#8217;t ignorance - it&#8217;s conviction.</p><div><hr></div><h2>Reinventing the truth</h2><p>My dad is a storyteller. He&#8217;ll tweak a memory, stretch a punchline, shift a detail. Sometimes for effect, sometimes for mischief. Sometimes because he&#8217;s forgotten the original. Everyone in the family knows, everyone (sometimes) plays along. More often we might groan because we&#8217;ve heard it before. That kind of truth-bending is social, generous, very human.</p><p>But if you scale that instinct for tall-tales, it can become something else. If millions of people are bending the truth at once, it stops becoming play and becomes persuasion. Algorithms reward conviction over accuracy. As the apocryphal Mark Twain quote goes:</p><blockquote><p>A lie can travel halfway around the world while the truth is still putting on its shoes.</p></blockquote><p>We don&#8217;t share uncertainty any more. But we do share the act (if not the facts) of belief.</p><div><hr></div><p>In <em>The end of not knowing</em>, I said uncertainty matters because it can teach humility. But it&#8217;s pretty hard to sustain humility. Doubt requires effort. Certainty - true or false - feels better.</p><p>Especially when the world is unstable, confident simplicity is a comfort. We cling to what feels coherent with our worldview. 
And when those beliefs, perhaps especially the false beliefs, fuse with identity, correction feels like an attack.</p><p>We don&#8217;t debate to learn anymore. We defend our stance to preserve our identity. </p><p>Truth is based on allegiance, not evidence.</p><div><hr></div><h2>Machines that mirror us</h2><p>AI didn&#8217;t invent this pattern, but it does mirror it. Systems designed to &#8220;find truth&#8221; are built to deliver a satisfying answer, not a shrug and an &#8220;I don&#8217;t know.&#8221; They try to collapse complexity into something that&#8217;s plausibly coherent.</p><p>This is an example of machines mimicking our own bad habits. They sound sure, even when they shouldn&#8217;t be. They make stuff up, rather than admit they don&#8217;t know. They give us back our confidence, polished up and packaged.</p><p>AI isn&#8217;t lying because it&#8217;s malicious. When it lies, it does so with conviction - because that&#8217;s what we asked it to do. AI is learning that, for humans, certainty sells.</p><div><hr></div><h2>The rediscovery of honest doubt</h2><p>Maybe the next step after trying to rediscover uncertainty is to rediscover honesty.</p><p>That means admitting that not all of our &#8220;knowledge&#8221; is equal. Truth isn&#8217;t a possession, it&#8217;s a practice. It&#8217;s an act that&#8217;s been proven fairly fragile, dependent on curiosity and care.</p><p>I don&#8217;t think better tools will save us, however tempting that hope might be. We need something simpler, and much much rarer.</p><p>Moral doubt.</p><p>That&#8217;s a willingness to pause, and ask, <em>&#8220;What if I&#8217;m wrong?&#8221;</em></p><p>If enough lies get told, and enough of them get believed, everyone ends up knowing differently.</p><p>The danger isn&#8217;t so much that the truth can be destroyed. That might trigger a search in its absence. The danger is that, for many, it can be comfortably replaced.</p><p>Our challenge isn&#8217;t to try and rebuild certainty. 
We should try to rebuild a space where truth and doubt can coexist. Where it&#8217;s OK for us to look at each other and say, <em>&#8220;I don&#8217;t know. Let&#8217;s find out together.&#8221;</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Did AI write this post?]]></title><description><![CDATA[Rethinking AI as a medium, not a threat. Like photography or film before it.]]></description><link>https://www.robin-cannon.com/p/did-ai-write-this-post</link><guid isPermaLink="false">https://www.robin-cannon.com/p/did-ai-write-this-post</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 21 Oct 2025 14:01:29 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b93078f6-4f93-46f0-8322-f2d2278f092d_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When people encounter a piece of art today - an image, a poem, a scene - there&#8217;s sometimes suspicion. </p><blockquote><p><em>Did AI make that?</em></p></blockquote><p>It&#8217;s used as an accusation, and as a way of stripping away the value of the piece. It makes an assumption that if the machine played a part in its creation then the work is tainted. 
Even fraudulent.</p><p>But that seems to me to be a false binary. We&#8217;re not looking at the wholly human or wholly machine. Art lies somewhere in between, in the interaction. We shouldn&#8217;t be asking <em>if</em> AI was involved, we should be asking <em>how</em>.</p><h3>The threat lens</h3><p>Every new artistic medium faces suspicion. Painters dismissed photography as a cheat. Mechanical reproduction that was incapable of artistry. Theater derided film as cheap spectacle, mass amusement rather than an art form capable of depth or meaning. Digital art was mocked as sterile - an inauthentic shortcut, not a practice.</p><p>AI has the same stigma now.</p><ul><li><p>If AI shaped it, is it really art? Is it authentic?</p></li><li><p>Does this replace the artist entirely? Remove the human from the equation?</p></li><li><p>Won&#8217;t everything look the same? Be homogenized?</p></li></ul><p>They&#8217;re perfectly reasonable anxieties. They&#8217;re also echoes of the same fears from earlier eras. And like photography, film, digital tools, the story shouldn&#8217;t just be about the threat.</p><h3>Tools, medium, collaboration</h3><p>We can frame AI simply as a tool. Think of it like an upgraded Photoshop. A cultural autocomplete. But that seems pretty limiting. Tools are an extension of human intention, but they don&#8217;t reshape forms of expression.</p><p>On the reverse of that, we might see AI as an entirely autonomous artist. As if the model &#8220;creates&#8221; independently. But this also falls short - without human input in the form of context, curation, intention, and refinement, AI&#8217;s outputs are raw material at best.</p><p>We should think of AI as a medium. Mediums aren&#8217;t passive. Oil paints behave in a different way, result in different art, than watercolors. 16mm film grain shapes an alternative experience to digital pixels. Mediums have textures and limitations. 
And artists work with those mediums, finding creativity at the edges, by testing their thresholds.</p><p>AI is like that. A convergence of human input, algorithms and pattern recognition, and cultural data.</p><h3>Crafted versus &#8220;spat out&#8221;</h3><p>The accusations of &#8220;AI wrote that&#8221; or &#8220;AI made that&#8221; miss the point. There absolutely are shallow uses. Type in a prompt, publish the first output, and call it a day. That&#8217;s automation, that&#8217;s not art.</p><p>But what about engaged use? That&#8217;s different. It&#8217;s iterative. It&#8217;s rejecting nine results in order to find the tenth. Layering different outputs into a single collage. Editing heavily, to bend the model toward a personal vision. It can be structural, like deciding where to insert an AI-generated aspect into a broader work, staging an image within a performance.</p><p>That seems to me to be much the same as the difference between pressing a shutter button at random, and carefully composing, lighting, and developing a photograph. Both use the same apparatus, but one of them reveals genuine craft.</p><h3>Artistic enablement</h3><p>Thinking of AI as our medium opens up new artistic possibilities.</p><ul><li><p>Scaled collage. The capacity for vast recombination and remixing of fragments.</p></li><li><p>Iterative exploration. Testing dozens or hundreds of variations in the search for real resonance.</p></li><li><p>Embracing glitches. Using the strange errors, hallucinations, the artifacts of non-deterministic AI outputs, as aesthetics in themselves.</p></li><li><p>Accessibility. Opening new paths for creative participation. Democratizing art for people excluded from traditional tools through training, resource, or physical limitation.</p></li></ul><p>AI doesn&#8217;t create <em>instead</em> of an artist. It can create <em>with</em> the artist.
That gives the artist a larger landscape of potential, to navigate, interpret, and reshape.</p><h3>The artist&#8217;s role in all this</h3><p>If AI is the medium, then we can better define the artist&#8217;s role. It&#8217;s one of contextualization, choice, and framing. The artistry is not in instructing an AI model to &#8220;spit something out&#8221;, but in crafting a&#8230;relationship&#8230;with it.</p><ul><li><p>Using prompts as poetic acts. A combination of brushstroke and incantation.</p></li><li><p>Selecting the outputs. Choosing what really resonates, or subverts - that&#8217;s authorship.</p></li><li><p>Integrating AI into larger works. Using its material in installations, performances, or writing requires vision and the ability to make meaning.</p></li></ul><p>We&#8217;re not erasing authorship, we&#8217;re transforming it. We don&#8217;t reduce photographers to merely their cameras. Filmmakers to their reels. Why should we try to reduce artists using AI to merely a single prompt? Their art is what they do with a medium.</p><h3>From threat to normalization</h3><p>We&#8217;ve seen that shift before. Photography has progressed from &#8220;mechanical theft&#8221; to fine art. Cinema isn&#8217;t just a cheap buzz, it&#8217;s perhaps the most complex cultural medium of the 20th century. Digital art is dominant now, not derivative.</p><p>So, a decade from now, will we even be talking about &#8220;AI art&#8221; as some kind of distinct category? It may just be another current that runs through our visual, written, aural culture, indistinguishable from others. The question is less whether AI threatens art than whether artists will have the agency to shape AI themselves.</p><p>That means working to ensure that the medium isn&#8217;t entirely captured by the corporate world. Locked behind subscriptions and policies. That&#8217;s a structural danger, not an aesthetic one. It would mean excluding artists from the medium and material of their era.
</p><p>But if AI can be accessible, hackable, contextualized, then it&#8217;s another strand in the lineage of human creativity.</p><h3>In closing</h3><p>&#8220;Did AI make that?&#8221; is the wrong question. The right one is <em>what can artists do with AI?</em></p><p>Art isn&#8217;t about purity of method. It&#8217;s about creating meaning and resonance, transformation. There&#8217;s no reason why AI should end that story, it can extend it. It&#8217;s another language in which art can be spoken, and a medium with which artists can wrestle in order to best express themselves.</p><p>The threat narrative is easy. New is scary. But there&#8217;s a more promising deeper, long-term, truth. AI doesn&#8217;t replace the artist. It further frees the fields of possibility.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The end of not knowing]]></title><description><![CDATA[In a world of instant answers, doubt may be a valuable thing to practice]]></description><link>https://www.robin-cannon.com/p/the-end-of-not-knowing</link><guid isPermaLink="false">https://www.robin-cannon.com/p/the-end-of-not-knowing</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 07 Oct 2025 14:00:58 
GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c5cf8f1b-7e34-4f62-b8a4-84ba62707bd2_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Used to be that you could have an argument in a bar for hours. What year did that film come out? Who scored the winning goal in that match? People would trade their own&#8230;often differing&#8230;memories. Triangulate from different anecdotes. They&#8217;d build complex, elaborate, cases to explain why their version was correct.</p><p>The point wasn&#8217;t just to have the answer. It was sparring, reasoning, a collective act of <em>not knowing</em>.</p><p>There&#8217;s little space for that these days. Search engines combined with smartphones got rid of that factual uncertainty. In a couple of taps, the questions about the year, the goalscorer, are all settled. Outside of &#8220;phones banned&#8221; quizzes, those old pub arguments died quietly.</p><p>AI has taken it even further. AI doesn&#8217;t just give you a factual answer, it constructs reasoning. It helps you build arguments, pull together your logic, helps to give you the counterpoints before you&#8217;ve even finished your argument! It&#8217;s another polishing away of the friction of not knowing.</p><p>It gives us a strange inversion. The unknown itself is now increasingly unknown. We encounter it rarely, and it can feel intolerable when it happens.</p><div><hr></div><h3>The rise of certainty</h3><p>Certainty is the dominant mood of the age. If we have instant facts, we think our access means mastery. If AI lets us generate arguments on demand, we think that fluency means conviction. And it&#8217;s created an expectation for certainty - politically, culturally, personally.</p><p>We can see it everywhere. Politics punishes doubt. We expect leaders to speak with an unwavering confidence, even when the future is always in doubt.
Hot takes are great media tools, because they project certainty and provoke equally heated responses. And on a social platform, it&#8217;s really easy to lose the faith of your audience with a momentary hesitation.</p><p>We expect certainty. And, then, we cling to it, too - whether it&#8217;s real or imagined. Our competing certainties harden into distinct camps. We entrench ourselves in polarized convictions rather than share uncertainty. And we don&#8217;t debate in a space of doubt&#8230;where we might learn, or be convinced. We shout at each other from behind barricades where a challenge feels like an existential threat.</p><h3>Simple answers are seductive</h3><p>Certainty is appealing, because it&#8217;s simple. </p><p>A confident statement, a decisive stance, some solution that comes in a neat and understandable package. We might meet these with a real sense of relief. The world is complex and messy; certainty pushes that into clean, straight lines and containers.</p><p>But the awkward fact remains: the world is still complex and messy. So a simple certainty is false when reality is layered, tangled, couched in contingencies. Politics is a web of different trade-offs needed to actually govern, not easy answers. We can&#8217;t define a &#8220;fix&#8221; for systems of interlocking feedback loops. Even our own individual identity is often shifting and multifaceted.</p><p>But the media ecosystem feeds the idea of simplicity. Ten-word answers are strong. Politicians who speak with nuance are branded weak, or deliberately misunderstood. Hot take shouting matches beat out considered, provisional, analysis. It&#8217;s reinforced by social platforms seeking the fastest, definitive, viral soundbite. Certainty presents as simplicity, and gets amplified.</p><p>It reinforces the idea that complexity is something to be feared or avoided. Ambiguity is failure and doubt feels like weakness. </p><p>Simplicity is a story we tell ourselves.
The world has never been simple.</p><h3>The value of doubt</h3><p>Uncertainty isn&#8217;t just a lack of something, though. It can be an active space. And a space that we&#8217;d benefit from.</p><p>If we don&#8217;t know something, then we have to fall back on our reasoning. We&#8217;re forced to work with incomplete information, and construct provisional stories and answers. We test and revise them. It might teach us a little humility, because we&#8217;re forced to recognize that we only have a partial understanding, that we might be wrong, or that there are valid competing opinions.</p><p>Doubt can fuel a conversation. Certainty ends it. We close the claim, finish the debate. If our position is entrenched then there&#8217;s nothing more to say or do. And, no doubt, it can be really satisfying in the moment. You had the last word, you &#8220;won&#8221; the argument. But it&#8217;s locking us into brittle positions.</p><p>Certainty turns the fluidity of ideas into rigid identities. Doubt, uncertainty, keeps us flexible and able to shift and adjust.</p><p>Certainty feels stronger, doubt is what keeps us alive and learning.</p><h3>Can we recover uncertainty?</h3><p>We can&#8217;t undo search engines and AI. And we wouldn&#8217;t want to; they&#8217;re just too useful. So the question becomes how we live alongside them, and how we might maintain a human space for uncertainty even when the answer might be right there.</p><p>There are some personal options. We can choose to let a question linger longer before our immediate response is to look it up. Or try to build comfort in ending a conversation unresolved. Treat doubt as something that helps you exercise your brain.</p><p>We can live in cultural spaces that are defined by their uncertainty, or subjectivity. If we seek art, myth, stories, we&#8217;re interacting with forms that don&#8217;t have neat closure, that don&#8217;t just resolve into simple fact.
Topics and mediums that treat ambiguity as a celebrated fact of human life.</p><p>And in our relations with others, we can be more open. Invite uncertainty back into a debate. Instead of seeking to &#8220;win&#8221;, think about what you might not know, whether there are doubts worth keeping alive. If you&#8217;re using AI, don&#8217;t make it an arbiter. Make it a foil for your own thinking, something that acts as a partner to bounce ideas off, and test you.</p><h3>Doubt is a gift</h3><p>In a world that demands instant certainty, uncertainty might be the rarest thing left. Uncertainty isn&#8217;t ignorance, or indecision. It&#8217;s making room for doubt. Doubt is something that can nurture growth, not a failure to be pushed away.</p><p>Our tools don&#8217;t stop us rediscovering our uncertainty (even if they might tempt us). But we can finish our own thoughts instead of reflexively letting them do so for us. Uncertainty is a gap in knowledge, yes. It&#8217;s also the space where doubt, imagination, and the human mind are at their most open.</p><p>Certainty feels powerful but closes doors.
Rediscovering uncertainty makes us human.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Inequality, weaponized]]></title><description><![CDATA[How soft transhumanism could still be something we all own]]></description><link>https://www.robin-cannon.com/p/inequality-weaponized</link><guid isPermaLink="false">https://www.robin-cannon.com/p/inequality-weaponized</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 23 Sep 2025 14:01:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2de1b483-28eb-4935-95a9-884626e07a84_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The future of body and mind is already here.</p><p>It costs a minimum of a few million dollars. It looks like a pale man drinking his son&#8217;s blood on TikTok.</p><p>Bryan Johnson isn&#8217;t evil. He might not even be wrong. But he&#8217;s become a stand-in, a visual example, of the modern myth of transhumanism. He&#8217;s on a very visible quest to extend his life through medical tracking, cellular rejuvenation, using algorithms to optimize everything in his life. </p><p>So he gets ridiculed a lot. 
But underneath that ridicule it also feels like there&#8217;s something strangely defensive. Underneath all the jokes about blood and hundred-pill breakfasts, there are some quieter and more unnerving questions.</p><ul><li><p>What if he&#8217;s right&#8230;but what he&#8217;s doing is only available to the super-rich?</p></li><li><p>Is the next phase of human evolution here, but being gatekept by cost and power?</p></li></ul><p>Transhumanism isn&#8217;t sci-fi anymore. It&#8217;s about health and strength. It&#8217;s also about gender identity. About whether people can make more choices to live longer and feel better, or if the system decides who gets access.</p><div><hr></div><h3>Transhumanism&#8217;s PR problems</h3><p>When people hear transhumanism, maybe they&#8217;re thinking of neural nets, robotic bodies, cloned bodies and drug glands. We talk about the term as an easy shorthand for a libertarian billionaire-run dystopia. Immortality and escape pods for their egos, and damn the rest of us.</p><p>We see it all the time in fiction.</p><p><em>Altered Carbon </em>sees the rich download their consciousness into new bodies, while the rest of us are left to rot.</p><p><em>Elysium</em> gave us a world where healing is literally &#8220;out-of-this-world&#8221; - orbital and off-limits to all but the most privileged.</p><p>And <em>Orphan Black</em> (a favorite of mine) portrayed a situation where identity itself could be patented, monitored, and used as a weapon.</p><p>These stories aren&#8217;t necessarily exaggerations, but they are warnings. In the real world, the technology is starting to catch up to the fictional portrayals. But the power structures of those portrayals are already here.</p><p>So you don&#8217;t need a cortical implant to be able to see the future. You just need access to an exclusive clinical trial. Or a concierge private health service.
Or a prescription that isn&#8217;t covered by insurance.</p><blockquote><p>The risk isn&#8217;t about some people becoming &#8220;more than human&#8221;, it&#8217;s about the rest of us being left behind.</p></blockquote><div><hr></div><h3>Maybe we laugh so we don&#8217;t need to ask</h3><p>Bryan Johnson isn&#8217;t really the problem. He might even be a case study of the visible discomfort society seems to feel about this topic.</p><p>He&#8217;s open-source. He self-tracks, maintains clear protocols. He doesn&#8217;t hide his body-as-a-dashboard approach behind NDAs, patents, or exclusive access. In some ways he&#8217;s the opposite, a compulsive sharer. He&#8217;s said that he&#8217;s experimenting, going above and beyond the sensible, in order to find scalable, affordable interventions.</p><p>But we&#8217;ve reduced him to memes. A vampire. A narcissist. A weird man-machine.</p><blockquote><p><strong>Visible transformation - especially when it&#8217;s extreme - makes us uneasy.</strong></p></blockquote><p>Johnson&#8217;s body doesn&#8217;t really signal <em>health</em>. It&#8217;s really about his trying to gain <em>control</em>. He wants to control his time, by making himself younger. To manage all his biometric data, personal and lineage-based. Create an intensely quantified version of himself. People flinch at that.</p><p>But why do we think of Johnson like that? Why is he weird, when there&#8217;s a near-trillion dollar wellness industry telling us how to optimize ourselves?</p><p>It&#8217;s easier to mock things than to worry about our own lack of access. And to cast scorn upon something that we&#8217;re worried we might one day be asked to do. </p><p>So self-modification is treated as vanity, at best, or madness. And we only approach some level of comfort when it&#8217;s invisible, or gradual.</p><p>That mockery is exactly what the entrenched powers want.
It creates a wall around that transformation: it&#8217;s elite, absurd, off-limits.</p><div><hr></div><h3>We&#8217;re already practicing soft-transhumanism</h3><p>Not all of those transformations look like sci-fi.</p><p>It might be as simple as a patch on your skin. Or an injection into your stomach. Or maybe just an app on a device on your wrist that lets you check how you&#8217;re sleeping.</p><p><strong>Soft-transhumanism</strong> is about slow evolutions, rather than radical changes. But it&#8217;s still achieved through intervention or augmentation. Less about replacing some or all of the human, more about fine-tuning it.</p><ul><li><p><strong>GLP-1</strong> weight loss drugs like Ozempic. They&#8217;re not just supporting weight loss. They&#8217;re actively rewiring hunger, pleasure, self-regulation. They directly change how a brain reacts to food.</p></li><li><p><strong>Wearables</strong> like Fitbit are more than just step-trackers. They&#8217;re creating a performance report for your physical exertion, your heartbeat, your REM sleep. They give you a body dashboard.</p></li><li><p><strong>Hormone therapy</strong> reshapes bodies and identities in ways that can be genuinely life-saving, and so often misunderstood.</p></li><li><p><strong>AI co-pilots</strong> sit alongside us as a digital augmentation. They might help cognitive range, translation, creativity. They&#8217;re the very early edge of outsourcing tasks for the mind.</p></li></ul><p>None of these things are speculation - they exist now. They happen in bedrooms, pharmacies, on our bodies, on our browsers. It&#8217;s very real, and also very uneven.</p><p>You don&#8217;t strictly need to be wealthy to participate, but it helps. It helps a lot.</p><p>Which makes even the soft version of transhumanism a fault line.</p><p>Some people are changing their lives. Other people are being told it&#8217;s cheating.
And others are being told it&#8217;s not for them at all.</p><div><hr></div><h3>Who gets to choose?</h3><p>When we get new technology, it often promises liberation, and then ends up mirroring the social hierarchies that already exist.</p><p>GLP-1 drugs are miracle solutions. Or you might be &#8220;a fat person&#8221; who&#8217;s using them &#8220;wrong&#8221;. The cultural frame around drugs has always shifted depending on who&#8217;s using them.</p><p>Gender-affirming care is life-saving, but transgender people are criminalized, pathologized, their lived experience denied. </p><p>Biohacking is chic tech-bro edginess in Silicon Valley, and weirdly suspect when it&#8217;s being practiced outside that wealthy, white, bubble.</p><p>We have two problems. Inequality of access. But also inequality of <em>permission</em>. </p><p>Who&#8217;s going to be allowed to change? Whose body is a &#8220;worthy&#8221; project? Is it desirable to evolve these people, but dangerous for these ones?</p><p>Who gets to make those kinds of decisions?</p><p>These aren&#8217;t abstract questions. They&#8217;re political. They&#8217;re structural - weighed down by society&#8217;s preconceptions. And they are having a distinct impact on how we deploy and subsidize transhumanist tools. Or whether we withhold and stigmatize them instead.</p><h3>We could mock the tools, or claim them for ourselves</h3><p>Don&#8217;t reject transhumanism. That doesn&#8217;t protect or serve us. It just entrenches the divide that already exists.</p><p>If we start treating augmentation as an unnatural thing, call it weird, call it unethical, then we&#8217;re abandoning the playing field entirely. But by not playing, we&#8217;re just letting the rich and powerful take control of the boundaries of this evolution. And then they&#8217;ll sell some version of it back to us at a big markup.</p><p>But maybe there&#8217;s another way.</p><ul><li><p><strong>Demand access</strong>.
Metabolic care, hormone treatment, prosthetics, neuro-enhancement. These are essentials, not luxuries.</p></li><li><p><strong>Push for transparency</strong>. Support open-source research, experimentation efforts led by patients, not corporations. Community-driven initiatives.</p></li><li><p><strong>Make consent sacred</strong>. Choice stays with the individual. Not politicians, employers, or insurers.</p></li><li><p><strong>Normalize it</strong>. Don&#8217;t treat bodily transformation as a failure, or vanity, or loss, or something &#8220;wrong&#8221;.</p></li></ul><p>By and large, we&#8217;re not trying to become gods (however much some billionaires might want to be). We just want to become the best version of ourselves, and on our own terms. Nobody should be forced into that evolution alone, or challenged, or blocked.</p><h3>Can we make a common future?</h3><p>Transhumanism will develop and advance&#8230;but it&#8217;s also already here. It&#8217;s unfolding, quietly, and inequitably.</p><p>We need to push back against that inequity. If not, then transhumanism becomes just another method of control. Of cementing the division. From haves and have-nots, to optimized and obsolete.</p><p>It shouldn&#8217;t, and can&#8217;t, be that way.</p><p>We should be demanding a version of transhumanism that&#8217;s built on <strong>care</strong>. Something where there&#8217;s shared <strong>agency</strong> rather than corporate gatekeeping. Make it not selfish, but based on a desire for <strong>collective evolution</strong>.</p><p>Our bodies don&#8217;t have to be somebody&#8217;s battleground. They could be a workshop, our sanctuary, our true selves.</p><p>The future can be brighter by being wider. Wide enough for all of us to fit inside.</p><div><hr></div><h3>Further reading:</h3><ul><li><p>Drummond, Katie. <em><a href="https://www.wired.com/story/big-interview-bryan-johnson/">Bryan Johnson Is Going To Die.</a></em> Wired, July 2025.</p></li><li><p>Ribeiro, Celina.
<em><a href="https://www.theguardian.com/books/2022/jun/04/beyond-our-ape-brained-meat-sacks-can-transhumanism-save-our-species">Beyond our &#8216;ape-brained meat sacks&#8217;: can transhumanism save our species?</a> </em>The Guardian, June 2022.</p></li><li><p>Pazzanese, Christina. <em><a href="https://news.harvard.edu/gazette/story/2024/02/how-ozempic-shaming-illuminates-complexities-of-treating-weight-problems/">How &#8216;Ozempic shaming&#8217; illuminates complexities of treating weight problems.</a> </em>The Harvard Gazette, February 2024.</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The thing that gets you to the thing]]></title><description><![CDATA[Why we build design systems, AI tools, and everything else]]></description><link>https://www.robin-cannon.com/p/the-thing-that-gets-you-to-the-thing</link><guid isPermaLink="false">https://www.robin-cannon.com/p/the-thing-that-gets-you-to-the-thing</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Tue, 09 Sep 2025 14:02:44 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cb2e2e51-20d7-4de6-b162-121cdee0e6aa_1024x1024.jpeg" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>&#8220;Computers aren&#8217;t the thing. They&#8217;re the thing that gets you to the thing.&#8221;<br>- Joe Macmillan, <em>Halt and Catch Fire</em></p></blockquote><p>I catch myself thinking about that line a lot.</p><p>I watched <em>Halt and Catch Fire </em>when it first aired (apparently one of the few!), and I&#8217;ve rewatched it multiple times since then. Season 4 is the one that stays with me. It&#8217;s quiet, but it&#8217;s also set in the early 1990s, so it resonates with my teenage years. That time of CRT monitors and dial-up modem tones still connects.</p><p>There&#8217;s a point in that season when Haley starts building what will become Comet, a web directory to help people <em>find things</em>. She&#8217;s driven, awkward, and thoughtful. Some of it reminds me of myself at that age, and of people I knew. I wasn&#8217;t exactly like her; I see a lot of her sister Joanie, too. That brash facade and teenage desire for cool. I hovered in-between - like I imagine a lot of people do. Nerdy, shy, challenging, curious, sometimes confident at the wrong time. Maybe I&#8217;m still like that.</p><p>That early internet felt like that, too. A space where you could be multiple things at once, real or made up. Not polished or professional, not corporate, it was more raw and personal. Anyone could muck around and make a website. They weren&#8217;t sanitized, either. Little, clumsy reflections of ourselves. A secret hideout for a shy mind, or an exaggerated avatar hoping to be edgy. We were the first people mapping out new places that didn&#8217;t exist yet, and hoping that maybe someone might show up (&#8230;which we&#8217;d measure with a handy page view counter!).</p><p><em>Halt and Catch Fire</em> captured that feeling with really startling accuracy. The show still resonates for me because, at its core, it understands that the value of technology is never the technology itself.
It&#8217;s what it makes possible.</p><p>That&#8217;s why Joe&#8217;s quote - <em>&#8220;the thing that gets you to the thing&#8221;</em> - isn&#8217;t just a pithy line. It&#8217;s a philosophy that I believe in, and which I&#8217;ve tried to hold onto in professional life.</p><p>Nobody ever builds a design system for the sake of having a design system.</p><p>We build one so that people can move faster. To help teams share things more easily, duplicate less effort, ship better stuff. It lets an organization scale without losing its soul. </p><p>You build a design system because of what it <em>enables</em>. </p><p>Same with the web. Same with AI. Same with almost every piece of technology that ends up mattering. The tool isn&#8217;t the endpoint. The endpoint is people, doing things, making things, finding each other. They&#8217;re not buying a hammer; they&#8217;re buying the nail in the wall to hang their picture.</p><p>It&#8217;s why I&#8217;m cautious but also hopeful about where AI might go. We&#8217;re in a place we&#8217;ve been before: new technology and new capabilities feel a bit magical, a bit confusing. Maybe even a little bit threatening. And people are reaching for the wrong thing. They&#8217;re trying to make the tool the product (&#8220;it&#8217;s got AI in it!&#8221;). But AI isn&#8217;t the product. It&#8217;s not the business. It&#8217;s not even the system.</p><p>AI is a new <em>thing that gets us to the thing.</em></p><p>The danger is if we forget that. If we make the mistake (as we often do) of making the tool the point, then we&#8217;ll put together some really impressive systems that don&#8217;t serve anyone&#8217;s needs. Or maybe worse, systems that reinforce the worst of our incentives: choosing speed over thought, growth without ethics, making without caring.</p><p>So I&#8217;ll come back again to that same old line.
</p><p>Nobody ever builds a design system for the sake of having a design system.</p><p>And nobody should ever build with AI just to say that they did. We want to build with AI so that people can do <em>more</em>, with <em>better</em> tools, in more <em>human</em> ways.</p><p>The tech isn&#8217;t what gives things meaning. That&#8217;s the people.</p><p>We still have a lot to be hopeful for in this current wave, and things worth building. We can take that chance to reconnect with that meaning, remembering that behind all our models and systems and platforms, people are still trying to find something. Or someone. Or maybe themselves.</p><p><em>That&#8217;s what we should be building for.</em></p><div><hr></div><p><em>Next week in <strong>Field Notes</strong>, I&#8217;ll explore how this philosophy plays out in practice. Can we build systems that serve people so well they become invisible? If this piece was the why, that one will be a practical, tactical, and grounded how-to in the reality of enterprise environments.</em></p><div><hr></div><h3>Further reading:</h3><ul><li><p><em>Halt and Catch Fire</em>, AMC (2014-2017). <a href="https://www.imdb.com/title/tt2543312/">IMDB link</a>.</p></li><li><p>Gonzalez, Kathryn. <a href="https://medium.com/design-doordash/design-systems-and-infrastructure-where-design-and-engineering-meet-3e7d2908558a">Design Systems and Infrastructure &#8212; where design and engineering meet</a>.
Medium, July 2018.</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Permission to dance]]></title><description><![CDATA[When we couldn't help but tap our feet to the beat of America's future.]]></description><link>https://www.robin-cannon.com/p/permission-to-dance</link><guid isPermaLink="false">https://www.robin-cannon.com/p/permission-to-dance</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Mon, 01 Sep 2025 14:02:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7475d2b9-a3ee-48d4-9f18-f2a3252c0447_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was in the Woodlands Mall recently, just outside Houston, when I heard it. <em>R.O.C.K. in the U.S.A. </em>through the ceiling speakers, breaking through all the regular mall hubbub. And it really hit me; I felt it in my chest. It&#8217;s not that it&#8217;s a favorite song, not one that I have a deep connection to. But the beat. It had warmth, a sheer, exuberant joy. Not purely nostalgic, but also alive.</p><p>It&#8217;s not so much the sound, but the <em>tone</em>. It&#8217;s a song that isn&#8217;t bragging. It isn&#8217;t threatening.
It&#8217;s not trying to &#8220;take America back&#8221; from anyone. It&#8217;s joyful and inclusive. A fun, even slightly silly, celebration of American music&#8230;and the American spirit. Not the flag-waving version, but the kind that made room for anyone who wanted to sing along.</p><p>We&#8217;ve drifted a long way from that. <em>R.O.C.K. in the U.S.A.</em> isn&#8217;t just a fun, throwback anthem. It&#8217;s a representation of a country that felt supremely confident in itself.</p><p>1980s American patriotism was loud, but it wasn&#8217;t clenched up. There was swagger and brashness, but it came with openness. Sure, that swagger was built partly on a myth. But it was a future-looking myth. Not a greatness just to be preserved, but a greatness to be shared.</p><p>That&#8217;s what made it all such a potent export. I didn&#8217;t grow up here. I was a kid in Britain. But, like so many others around the world, I grew up in America&#8217;s cultural orbit. Music, movies, slogans, stories. They weren&#8217;t always telling the truth, but they told it confidently. America wanted to be admired, and believed - deeply - that it was admirable. Yes, there are dangers in that belief, but it&#8217;s also really magnetic.</p><p>Even Ronald Reagan, for all his contradictions, granted amnesty to immigrants. Not in spite of how great America was, but because of it. There&#8217;s a quiet assumption underneath it all: <em>Why wouldn&#8217;t you want to be part of this?</em> America didn&#8217;t feel fragile. It didn&#8217;t feel like it was beset by every imagined threat. Its power was cultural, moral - at least in its eyes - and founded in a deep confidence.</p><p>That&#8217;s not how patriotism manifests itself today. Now it&#8217;s all shrill and suspicious. It tries to make up with volume what it lacks in strength. A flag isn&#8217;t an invitation any more; it can feel more like a warning.
The mood shifted from &#8220;we&#8217;re building something&#8221; to &#8220;we&#8217;re under attack&#8221;. We&#8217;ve changed from open arms to closing the gates.</p><p>I don&#8217;t think that&#8217;s just a surface-level change in tune. It comes from an underlying emotion. Patriotism today is couched in fear. It imagines a nation that&#8217;s under siege - by immigrants, by queerness, by &#8220;wokeness&#8221;, even by books and by self-reflection. Hell, even under siege from history itself - treating diversity, or critique, or simple curiosity as an existential threat.</p><p>That&#8217;s not how you behave if you&#8217;re confident. It&#8217;s insecurity trying to masquerade as strength.</p><p>If America is as great as all that, what are people so afraid of?</p><p>If Christianity is as strong as so many believe, why would it need laws to protect it from being questioned?</p><p>If the founding ideals of this country are so sound, why would we fear examining the extent to which we&#8217;ve lived up to them?</p><p>These weren&#8217;t such front-and-center questions in 1985, but they cut really deep in 2025.</p><p>When I hear a song like <em>R.O.C.K. in the U.S.A., </em>it&#8217;s not just nostalgia that I&#8217;m feeling. I feel <em>haunted</em>. That song isn&#8217;t just upbeat. It was hopeful&#8230;it still feels hopeful. It came from a country that believed it had a future that was worth dancing toward. That wasn&#8217;t an America that felt the need to wall itself off in order to stay great. It assumed that greatness was something you <em>shared</em>, not hoarded. </p><p>It&#8217;s a mood and a sense of America that really resonates for me. Not so much the myth of America, but the confidence to imagine that myth in the first place. And a generosity to invite others in to share it.
The willingness to tell our stories out loud, including (even especially) the complicated ones, without some fear that doing so will break us.</p><p>Patriotism doesn&#8217;t have to be cruel to be strong. Cruelty suggests the exact opposite. <em>I clench my fists when I&#8217;m afraid.</em></p><p>The America that I first fell in love with wasn&#8217;t afraid of loss. It was ready, eager, to build. To play louder. To start dancing and figure out the rest as it went along.</p><p>I really miss that feeling. I don&#8217;t miss it because it was perfect - it wasn&#8217;t. I miss it because it was a belief strong enough to not be afraid. </p><p>And, even in a more cynical world, I think it&#8217;s a belief we can get back. That confidence and joy are not incompatible with truth. And that we want everyone to have the opportunity to share in it.</p><p>And a belief like that? I think that&#8217;s the most American ideal of all.</p><div><hr></div><p>The song that inspired this piece - <em><a href="https://open.spotify.com/track/0lqfBvf1Gqmmt3l5Qeirlm">R.O.C.K. in the U.S.A. 
- John Mellencamp</a> </em>(Spotify)</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The loud-minority internet and the half-life of outrage]]></title><description><![CDATA[What extends it, and how we might measure and mitigate it]]></description><link>https://www.robin-cannon.com/p/the-loud-minority-internet-and-the</link><guid isPermaLink="false">https://www.robin-cannon.com/p/the-loud-minority-internet-and-the</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Fri, 29 Aug 2025 15:31:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6dbbd0ef-a59a-450e-a23c-980446f6e8d6_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s always been possible for small, intense minorities to sway a room. The internet industrialized that effect, and our own amplification - supported by handheld devices and infinite doom scrolling - sustains it. If we want healthier culture, capitalism, and politics, it helps to measure <strong>how fast outrage decays</strong> and work out ways to stop re-igniting it.</p><p>Human nature didn&#8217;t change. Infrastructure did. 
Our feeds reward high-arousal content - outrage, moralized language, all the performative highs and lows that we see every day online. That social feedback conditions us to produce more content that spikes, and the &#8220;<a href="https://www.pewresearch.org/internet/2014/08/26/social-media-and-the-spiral-of-silence/">spiral of silence</a>&#8221; effects hide the silent majority. Edge takes start to look like consensus. It&#8217;s a reliable illusion: first-day extremes feel like &#8220;everyone&#8221; when they&#8217;re a long way from that.</p><p>In boardrooms, newsrooms, politics, and our daily lives, we keep treating initial intensity as truth. We manage the spike, but not the subsequent decay. We shouldn&#8217;t be asking &#8220;how loud is it today?&#8221;. A better question might be &#8220;how fast does this fade if we don&#8217;t keep feeding it?&#8221;</p><h3>A better yardstick: the outrage half-life</h3><p>We don&#8217;t have to guess whether a flare-up is a real movement or just a passing moment. We can watch how quickly things cool when nobody is adding anything new. Let&#8217;s call that cooling speed the <strong>outrage half-life</strong>: the time it takes for negative attention above the baseline to drop by half.</p><p>By baseline, I mean the normal level of negative attention something gets when nothing unusual is happening. It&#8217;s a background trickle of public complaints or critical posts: &#8220;my Amazon delivery was late&#8221; or &#8220;Dodge trucks suck&#8221;. In practice&#8230;we measure negative items, try to compute a calm-period median, and subtract that. What&#8217;s left, rising and falling, is <strong>excess negative attention</strong>, not total chatter.</p><p>So the natural cooling time - absent something that re-ignites things - is the <strong>base half-life</strong> (<em>t&#189;, base</em>).
But we&#8217;re rarely absent that re-ignition: quote-tweets, meme sharing, &#8220;coverage of the coverage&#8221; stories, daily drip-fed brand responses. These are maintenance events that stretch out that decay. And we can call that the <strong>amplification coefficient K</strong>.</p><blockquote><p><strong>Effective half-life = Base half-life x (1 + K)</strong></p></blockquote><p>If <strong>K = 0</strong>, then the story just fades away at its natural pace. But as <strong>K</strong> grows, it lengthens the time <em>between<strong> </strong></em>halvings (e.g. if <em>t&#189;, base </em>is 1 day, and <strong>K</strong> is 4, then the effective half-life is 5 days).</p><p>A couple of things to clarify&#8230;</p><ul><li><p><strong>t&#189;, base varies depending on the event</strong>. Some topics will cool down in hours (someone&#8217;s outfit on the red-carpet at the Met Gala). Others might take days to cool down even if <strong>K = 0</strong> (a major celebrity dies, a national policy change affects multiple people).</p></li><li><p><strong>New facts are new shocks</strong>. If genuinely new information arrives (an investigation result is released, a lawsuit is filed), then we have to reset the clock and estimate a new half-life for the new epoch. <strong>K</strong> is a measure of <em>amplification of old heat,</em> not legitimate updates.</p></li><li><p>This is a gut-check metric, not lab science. I&#8217;ve used ChatGPT to help me determine something that we might use to guide judgement, not a rigorously tested way to end arguments.</p></li></ul><h3>American Eagle x Sydney Sweeney: an example of amplification in action</h3><p>American Eagle&#8217;s &#8220;<a href="https://www.youtube.com/watch?v=AK8s3iqL99c">Sydney Sweeney has good jeans</a>&#8221; campaign came out in late July. And we got the immediate backlash over the &#8220;genes/jeans&#8221; pun.
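The formula, and the kind of back-of-envelope arithmetic it supports, can be sketched in a few lines of Python. This is a rough illustration of the gut-check metric, assuming clean exponential decay; the function names are mine, not part of any established measurement.

```python
import math

def effective_half_life(base_half_life: float, k: float) -> float:
    """Effective half-life = base half-life x (1 + K).
    K = 0 means the story fades at its natural pace; each unit of K
    stretches the time between halvings."""
    return base_half_life * (1 + k)

def days_above_fraction(effective: float, fraction: float) -> float:
    """How long excess attention stays above `fraction` of its peak,
    assuming exponential decay: log2(1/fraction) half-lives."""
    return effective * math.log2(1 / fraction)

# The worked example from the text: t1/2,base = 1 day, K = 4.
print(effective_half_life(1.0, 4))  # prints 5.0

# Inverting the same relationship: a story observed above ~10% of peak
# for 23 days implies ~6.9 effective days per halving; against a
# 12-18 hour (0.5-0.75 day) base half-life that backs out K of ~8-13.
effective = 23 / math.log2(10)
for base_days in (0.5, 0.75):
    print(round(effective / base_days - 1, 1))
```

Reading a story's duration off a chatter timeline and dividing by log2(10) is the only non-obvious step; everything else is the one multiplication in the blockquote above.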
The brand posted clarifications (&#8220;<a href="https://apnews.com/article/american-eagle-sydney-sweeney-353699aaaa0660d772b94cb33932a794">it&#8217;s just about jeans</a>&#8221;). The story got reframed as a <a href="https://abcnews.go.com/GMA/Culture/trump-praises-sydney-sweeney-amid-american-eagle-jeans/story?id=124347376">culture-war proxy</a>. We had follow-ups and comparisons to rival denim ads. We got <strong>multiple re-ignitions</strong>.</p><p>If we think of &#8220;meaningful public chatter&#8221; as anything above ~10% of peak, that story stayed alive for roughly <strong>three weeks</strong>. Let&#8217;s say 23 days. The time to 10% is <strong>~3.32 half-lives</strong>, which means an <strong>effective half-life</strong> <strong>&#8776; 6.9 days</strong>. If we estimate a pretty conservative <strong>base half-life</strong> for an un-maintained brand flare-up at <strong>~12-18 hours</strong>, then we end up with <strong>K &#8776; 8-13.</strong></p><p>In plain English: all of our amplification kept a two-day story alive for three weeks.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!c53w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!c53w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!c53w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png 848w,
https://substackcdn.com/image/fetch/$s_!c53w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!c53w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!c53w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:46077,&quot;alt&quot;:&quot;Observed vs. counterfactual story length (days above ~10% of peak): American Eagle/Sweeney &#8776; 23; Counterfactual base &#8776; 2.5; GAP &#8220;unity hoodie&#8221; &#8776; 2.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.robin-cannon.com/i/171768005?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Observed vs. counterfactual story length (days above ~10% of peak): American Eagle/Sweeney &#8776; 23; Counterfactual base &#8776; 2.5; GAP &#8220;unity hoodie&#8221; &#8776; 2." title="Observed vs. 
counterfactual story length (days above ~10% of peak): American Eagle/Sweeney &#8776; 23; Counterfactual base &#8776; 2.5; GAP &#8220;unity hoodie&#8221; &#8776; 2." srcset="https://substackcdn.com/image/fetch/$s_!c53w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!c53w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!c53w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!c53w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc35bb447-a08c-4c25-b9d7-3cb097d93bf0_1920x1080.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>GAP&#8217;s &#8220;unity hoodie&#8221;: the fast burn-out contrast</h3><p>You might barely remember it now. GAP posted a red/blue &#8220;unity hoodie&#8221; tweet just after the 2020 Presidential election. People made fun of it, they <strong>deleted it within hours</strong>, it had a day or two of coverage, and <a href="https://www.marketingdive.com/news/gap-yanks-call-for-unity-from-twitter-after-it-falls-flat/588470/">then it faded away</a>.</p><p>The story&#8217;s effective half-life was <strong>hours</strong>, not weeks. We were pretty close to <strong>K &#8776; 0</strong>. A fast removal, nothing durable to hang a story on, and a short tail.</p><h3>Our villain is amplification, not outrage</h3><p>There are two forces working to lengthen the outrage half-life beyond what we might consider its natural decay. <strong>Algorithmic oxygen</strong> gives it an initial push, our ranking systems love high-arousal engagements, and they keep it going because the feeds treat a late quote-tweet the same as if it were fresh content. And our own <strong>human oxygen </strong>keeps the fire going. We recirculate the artifacts across multiple platforms, we &#8220;cover the coverage&#8221; in the news, and drip-fed statements can keep fueling the cycle. </p><p>The learning effects of our digital ecosystem mean that all of those likes and shares have taught us to express more outrage for longer. 
That&#8217;s why &#8220;second waves&#8221; spark and catch on so quickly. These aren&#8217;t adding facts; they&#8217;re adding <strong>half-life</strong>.</p><h3>Shortening the half-life (how can we reduce K?)</h3><p>If we really care about getting better outcomes, we should be trying to <strong>manage the decay</strong>, not manage the spike.</p><ul><li><p><strong>Platforms</strong> could de-weight older posts unless <em>new information<strong> </strong></em>is added. Screenshots and memes could be collected into canonical threads, and require a &#8220;what&#8217;s new&#8221; note on re-shares after 24 hours. And plurality panel surveys could undermine the illusion we&#8217;ve created that &#8220;everyone&#8221; is mad.</p></li><li><p><strong>Institutions</strong> might adopt a 72-hour rule, where they don&#8217;t reverse course or pivot with a &#8220;we hear you&#8221; message without estimating half-life and checking real-world tasks. Don&#8217;t feed the fire with a drip feed of statements; make a single, canonical update.</p></li><li><p><strong>Press and media</strong> should minimize &#8220;coverage of coverage&#8221; stories. And if there are updates, offer them in a batch of corrections, not a minute-by-minute churn.</p></li><li><p><strong>You (all of us) </strong>could choose to not boost the second wave. If you&#8217;re not adding new context, try to keep quiet. Reward people who are explaining things over those who are posting dunks and gotchas. Mute the screenshot/meme farms and the rage-quoting accounts.</p></li></ul><h3>Design for the half-life, not the headline</h3><p>The internet is really good at surfacing anger. And it&#8217;s really, really good at maintaining it. All these stories have a natural <strong>base half-life</strong>; some cool down in hours, some in days.
But we manage to stretch some of those moments into months with our added amplification - <strong>K</strong> - re-shares with no new facts, drip-fed statements, coverage of the coverage. </p><p>If the base cool-down is short and nothing real is broken, we should work harder to <strong>let it end</strong>. If the half-life is stretching out, we can try to fix what&#8217;s actually broken (recognition, access, values, performance). Platforms, institutions, and people could all turn the oxygen down&#8230;if they wanted to.</p><p>Maybe with a formula like this we can build a greater awareness. Not managing the spike, but managing the decay. If you&#8217;re not adding facts then you&#8217;re just adding to the half-life.</p><div><hr></div><h4>Further reading:</h4><ul><li><p>Brady, William J., et al. <em><a href="https://www.science.org/doi/10.1126/sciadv.abe5641">How social learning amplifies moral outrage expression in online social networks</a></em>. Science Advances, 2021. </p></li><li><p>Husz&#225;r, Ferenc, et al. <em><a href="https://arxiv.org/pdf/2110.11010">Algorithmic Amplification of Politics on Twitter</a>. </em>Proceedings of the National Academy of Sciences (PNAS), 2022.<em> </em></p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Thanks for reading.
Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Speculation is orientation]]></title><description><![CDATA[Why I&#8217;m expanding from work commentary into signals, stories, and imagination.]]></description><link>https://www.robin-cannon.com/p/speculation-is-orientation</link><guid isPermaLink="false">https://www.robin-cannon.com/p/speculation-is-orientation</guid><dc:creator><![CDATA[Robin Cannon]]></dc:creator><pubDate>Sat, 23 Aug 2025 15:30:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ccd0c711-27bc-4124-92dc-5109137f6c57_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For anyone who&#8217;s been subscribing here for a while, the last couple of pieces probably came out of left field. It&#8217;s not by accident, I&#8217;m trying to stretch the frame of what I write about&#8230;and how I write it.</p><p>The future&#8217;s coming at us fast. Maybe too fast for us to process. Sometimes we&#8217;ll want to hold on to familiar things. Examine the tools, routines, and stories that can help us feel grounded. But speculation has its own value. I don&#8217;t mean prediction - we&#8217;re useless at that - but orientation. It helps us to adjust to changes, rehearse what different possible futures might look like, or recognize nearer signals of what&#8217;s emerging around us already.</p><p>I started here writing about things tied to work. Quite literally, this site was &#8220;design systems and things,&#8221; professional commentary and reflection on my experiences. 
That writing still matters to me, and I&#8217;m still going to be pursuing it. But I also want room for different kinds of exploration, writing about what might be, as well as what is.</p><p>So sometimes I&#8217;ll be writing on those more grounded topics, based on my work, expertise, and experience. Sometimes I&#8217;ll be fanciful and imaginative. I&#8217;ll be thinking sideways about what&#8217;s possible. They&#8217;re both outlets I need, as well as the spaces in between.</p><p>I&#8217;m consciously building that here. Field notes that try to explain different fragments of the present. Signal essays to try to connect hints of what&#8217;s next into some larger patterns. And somewhere for stories of imagination about futures that might never exist.</p><p>I&#8217;m not looking to predict or prescribe. But I do want to test some ideas and tease at some threads. In its earlier days the internet was a place for that kind of speculation. Since then it&#8217;s been poisoned by cynicism and commodification. Maybe I can use this place to practice it on a smaller scale.</p><p>If the future&#8217;s coming faster than we can keep up, it&#8217;s not indulgent to speculate. The more possibilities we&#8217;ve considered, the more resilient we might be when reality doesn&#8217;t quite match up.</p><p>Anyway, this might be a reset, or just a widening of the aperture. I&#8217;ll keep writing what I notice about the world of work and technology. It still excites me&#8230;perhaps more than it has for a while. But you can also expect experimentation with more imaginative writing, and letting them blend where they may.</p><p>While I&#8217;m not trying to be right about the future, I&#8217;d love to be more ready for it. 
This is where I&#8217;ll be doing that.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.robin-cannon.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><strong>Thanks for reading. Subscribe for essays on design, technology, and culture - plus original fiction.</strong></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item></channel></rss>