For a long time, clarity has been synonymous with credibility.
In the digital age, better cameras meant higher resolution. Higher resolution meant fewer distortions. Fewer distortions meant fewer lies, less confusion.
But in the age of generative media, clarity has become cheap and easy.
We have systems that produce convincing video and audio, and they start from a point of high fidelity: synthesizing an endless stream of possibilities is the baseline. No pixelated faces, no cracked voices. The lighting looks right. Imperfections are optional.
The problem for institutions, and for civil society, in 2026 is not that synthetic media looks fake.
It’s that it can look better than reality.
Some fakes still ring false
Consider a (sadly not hypothetical) fake video: armed police arresting Barack Obama in the Oval Office, and dragging him out.
A sufficiently advanced generative model can render this with astonishing realism. Lighting matched to archival footage. Audio that replicates not just the voices but the room acoustics. Uniforms correct down to the details in the stitching.
Despite the technical plausibility, the video was broadly rejected.
Not universally. A significant minority believed it, shared it, and amplified it. But across political lines, most viewers sensed something was off.
Not because of compression artifacts.
Not because the faces weren’t quite right.
But because it violated the narrative constraint. It didn’t fit institutional reality, or plausible procedure. It wasn’t how people believe power behaves in the real world.
Humans are far from perfect as lie detectors. We’re inconsistent and biased, often overconfident. But we are sensitive to implausibility. We rely on whether the context makes sense - on friction, inertia, institutional continuity - to judge what’s real.
We don’t judge pixels.
We evaluate if the event makes sense inside the systems we think we understand.
It’s an uneven sensitivity. And it can be misled or overridden by partisanship, repetition, or desire.
But our sensitivity to bullshit doesn’t need to fail everywhere for its failures to be destabilizing.
Perception alone isn’t the battleground.
Constraint is.
The deepfake inversion
Early deepfakes weren’t all that deep. They glitched. Faces slid. People had too many hands, or a weird elbow.
The next generation will fail situationally.
The most convincing synthetic media will not be the most spectacular. It will aim to be mundane. It will be modest. It will try to mimic constraint.
We can already see that in some fraud attempts:
Slightly distorted audio
Compressed video
Footage that looks handheld rather than cinematic
Attackers are learning that imperfection now increases, rather than reduces, plausibility.
Perfection triggers suspicion. Constraint is more convincing.
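To see how cheap simulated constraint is, here is a toy Python sketch that degrades a “perfect” synthetic signal with sensor-style noise and coarse quantization, the kind of plausible imperfection the fraud attempts above rely on. The function name and parameters are illustrative, not from any real tool:

```python
import random

def degrade(samples, noise_sigma=4.0, quant_step=8, seed=0):
    """Degrade a clean synthetic signal so it resembles a cheap capture:
    add Gaussian 'sensor' noise, then quantize coarsely to mimic
    aggressive compression. Output values are clamped to 8-bit range."""
    rng = random.Random(seed)
    degraded = []
    for s in samples:
        noisy = s + rng.gauss(0, noise_sigma)               # fake sensor noise
        quantized = round(noisy / quant_step) * quant_step  # crude "compression"
        degraded.append(max(0, min(255, quantized)))
    return degraded

clean = [128] * 16   # a perfectly flat, "too clean" synthetic signal
shabby = degrade(clean)
```

A few lines of arithmetic are enough to turn pristine output into something that reads as a shaky phone recording, which is exactly why visible imperfection no longer proves anything.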
Constraint as authenticity
Historically, constraint has been accidental:
Film grain
Static
Limited bitrates and compression
Background noise
But these were artifacts that came from the limits of the medium.
Now, given the near-infinite generative capacity, those artifacts can be created synthetically.
We can’t rely on visual or audio imperfection as proof.
So we need a constraint that is bound to the conditions of capture.
That means changing how we think about media authenticity.
No longer:
Does this look real?
But:
Was this captured under verifiable, constrained conditions?
Authenticity becomes a question of provenance, not just of optics.
Certifying constraint
Imagine a defined and scoped “human-origin” channel for official communication.
Some possible characteristics:
Hardware-bound capture
Fixed compression profiles
Real-time signing
Mandatory artifact retention
No post-processing or enhancement
The output would be clean. But it would not be optimized for aesthetics. And it would not be flexible in how it could be used.
It would be certified by design.
The verifiable limitations would prove the origin.
Its visible limits would be as much a feature as a flaw.
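As a rough illustration, “real-time signing” might look like the minimal Python sketch below. Everything here is hypothetical: the device key, the device ID, and the record fields are invented for the example, and a real capture device would keep its key sealed in a hardware module (a TPM or secure enclave) and use asymmetric signatures rather than an in-memory HMAC secret.

```python
import hashlib
import hmac
import time

# Hypothetical device key. On a real capture device this would be
# sealed inside hardware and never exported.
DEVICE_KEY = b"example-hardware-bound-key"

def sign_frame(frame_bytes: bytes, device_id: str, timestamp: float) -> dict:
    """Sign one captured frame at capture time. The signature covers the
    raw bytes plus capture metadata, so any later re-encoding or
    'enhancement' breaks verification."""
    payload = frame_bytes + device_id.encode() + repr(timestamp).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {
        "device_id": device_id,
        "timestamp": timestamp,
        "frame_sha256": hashlib.sha256(frame_bytes).hexdigest(),
        "signature": tag,
    }

def verify_frame(frame_bytes: bytes, record: dict) -> bool:
    payload = (frame_bytes
               + record["device_id"].encode()
               + repr(record["timestamp"]).encode())
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

frame = b"\x00\x01\x02raw-sensor-bytes"
record = sign_frame(frame, device_id="cam-042", timestamp=time.time())
assert verify_frame(frame, record)                 # untouched frame verifies
assert not verify_frame(frame + b"edit", record)   # any edit breaks it
```

The point of the sketch is the inflexibility: because the signature binds the exact captured bytes, the channel cannot tolerate post-processing, which is the constraint doing the certifying.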
The adversarial response
This isn’t going to be a stable situation. It’s already in flux.
The best synthetic media systems will incorporate constraint modeling. They’ll simulate compression, inject sensor noise, and try to mimic hardware signatures.
The arms race will go on.
When authenticity can’t rest on aesthetics or quality, it must be built into infrastructure:
Chain of capture and possession
Cryptographic signing
Hardware identity and watermarks
Transmission logs
The question will no longer be whether imperfections are visible, but whether those imperfections can be proven to have come from the physical circumstances of capture.
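One way to make “transmission logs” tamper-evident is a hash chain: each custody entry commits to the hash of the previous one, so editing, deleting, or reordering any entry breaks every link after it. A minimal Python sketch, with invented event fields:

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append a custody event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append({**body, "entry_hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every link; any edit, deletion, or reorder is detected."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["entry_hash"] != expected:
            return False
        prev_hash = entry["entry_hash"]
    return True

log = []
append_entry(log, {"action": "capture", "device": "cam-042"})
append_entry(log, {"action": "upload", "host": "archive.example"})
assert verify_chain(log)
log[0]["event"]["device"] = "cam-999"   # tamper with history
assert not verify_chain(log)
```

A production system would sign each entry as well, but even this bare chain shows the shift: authenticity becomes a property you can recompute, not a quality you can see.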
Beyond pixels
We’re part of a broader historical pattern.
When imitation is near-effortless, authenticity migrates from the surface layer to the origin. Classical art has long operated this way. A painting’s credibility is not determined by how convincing it looks, but by its provenance - who created it, who handled it, whether the materials used align with its supposed era, whether its lineage can be traced.
A perfect forgery without history is still a forgery.
Digital media is moving in the same direction.
As synthetic generation becomes trivial, perception collapses as a useful test. A video can be flawless and still be false. An audio clip can persuade us and be entirely invented. Without a traceable origin, media becomes assertion without evidence.
Constraint cannot remain aesthetic. Constraint based on social trust - news media, government sources, institutions - is eroding. Constraint must be infrastructural: a digital equivalent of checking the canvas grain and pigment composition, a trace of the capture embedded in the artifact itself.
In a world where anything can be rendered, the future of authenticity isn’t higher resolution.
It may be certified imperfection.
Further reading:
Kara-Yakoubian, M. New psychology research reveals the “bullshit blind spot”. PsyPost, May 2023.
Villasenor, J. Artificial intelligence, deepfakes, and the uncertain future of truth. The Brookings Institution, Feb 2019.
