What are we actually buying when we buy AI?
AI reshapes labor, risk, and the cost of decision-making.
You might have been involved in a few AI purchasing conversations over the past year or two. They often follow a familiar script.
Faster delivery. Fewer people. Higher output. A competitive advantage.
There are impressive demos. Confident pricing decks. And perhaps one or two clear case studies.
Then the AI systems arrive.
Suddenly there’s a lot more process than expected. More data work. More governance work. Much more human oversight. An increasing number of meetings about “how to use it responsibly”. There are productivity gains in pockets, but they’re not consistent. Those headcount reductions don’t quite materialize. And operational complexity starts to increase.
This doesn’t mean the technology doesn’t work. But it does suggest that we might have been unclear about what we were buying.
The story we tell ourselves
When organizations talk about buying AI, they tend to think of it the way they think of any other improved software capability:
Intelligence
Automation
Creativity
Replacement for labor
Those are all fairly abstract concepts, though. They’re as much a marketing category as they are an operational one.
The value organizations actually extract from AI tends to cluster around different - less glamorous - things. Things that are more structural, and far more dependent on context.
What organizations are really buying
1. Velocity
The most visible benefit, and almost certainly the easiest one to sell.
AI is quick to produce outputs. It can draft, summarize, code, generate UI variants, give you a first-pass analysis. And this is great for removing friction in the early stages of work.
But this is only going to be valuable if your surrounding system is able to absorb it.
If it can’t - and in many organizations it can’t - that early speed just moves the bottleneck further downstream. Review, integration, governance, legal, QA, and product coherence all suddenly become more complicated.
Those extra complications may well cancel out all of that early-stage acceleration. Velocity without coordination doesn’t create speed; it creates noise.
Speed is a feature. It probably isn’t the product.
2. Consistency
A quiet, unglamorous workhorse.
AI systems can produce uniform structure. They can maintain a tone, a formatting style. AI can follow rules. And that can be incredibly beneficial in larger organizations where inconsistency can be a drag on productivity.
Uniformity of style, patterns, or language doesn’t always come across in a demo. But it’s a really durable value.
It’s also a value that demonstrates the vital importance of context. Consistency can only exist relative to some set of shared decisions about what “correct” looks like.
3. Reshaping (not eliminating) cost
AI doesn’t necessarily remove labor. It relocates it.
That work moves into:
Data preparation
Labeling and taxonomy design
Prompt engineering
Evaluation
Policy definitions
Monitoring and exception handling
That’s a significant shift in headcount, but not a disappearance.
The cost changes form before it changes size.
An organization that expects labor to vanish is likely to be disappointed. An organization that understands and expects to repurpose labor will be less surprised.
4. Optionality
Think of this as the hedge against being left behind.
Executives buy AI for the same reasons they might invest in cloud solutions before there’s a clear picture of what they’re going to build: it avoids being trapped later.
This has value even when the use cases are vague. But it’s not the same as buying finished capacity.
A useful memory: IBM Watson
We’ve been here before.
In 2011, IBM’s Watson defeated human champions on Jeopardy! - creating a powerful narrative about the arrival of general-purpose AI (or “cognitive computing”, as IBM preferred to call it).
Many organizations rushed to adopt it, believing they were buying intelligence - a system they could point at a domain and that would start reasoning productively.
What they actually bought was something else:
Large-scale data ingestion
Domain-specific training
Ontology construction
Labeling efforts
Ongoing tuning
Critically, the need for long-term consulting engagements
Companies could spend millions of dollars a year teaching Watson how their world worked.
That’s not necessarily a failure of the technology. But it’s a very clear mismatch in expectations.
Watson didn’t fail at being intelligent. It succeeded as a machine for encoding context - slowly, expensively, and with constant human involvement.
The consulting bill ended up not as a side effect, but as the product.
Same pattern, better demos
OK, today’s models are more flexible. More than a decade on, the interfaces are friendlier, and the generality of today’s AI is much more real.
But there’s an underlying dynamic that’s still the same.
AI systems still require:
Explicit definitions of acceptable behavior
Structured representations of domain rules
Explicitly set boundaries
Exception handling
Ongoing evaluation
Continuous alignment with the realities of an evolving organization
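These requirements rarely show up in demos, but they are concrete. As a deliberately minimal sketch - all names, rules, and thresholds here are hypothetical, not from any real deployment - the first four items amount to a thin review layer the buying organization wraps around model output:

```python
# Hypothetical sketch of the "surrounding system" an AI output still needs.
# None of this is vendor-provided; it is work the buying organization owns.

BANNED_PHRASES = {"guaranteed returns", "risk-free"}   # acceptable behavior
MAX_LENGTH = 500                                       # a domain rule
ALLOWED_TOPICS = {"billing", "shipping"}               # explicit boundaries

def review(draft: str, topic: str) -> tuple[str, str]:
    """Return (status, text): 'send' if the draft passes every check,
    otherwise 'escalate' so a human handles the exception."""
    if topic not in ALLOWED_TOPICS:
        return ("escalate", draft)          # out of bounds
    if len(draft) > MAX_LENGTH:
        return ("escalate", draft)          # breaks a domain rule
    if any(p in draft.lower() for p in BANNED_PHRASES):
        return ("escalate", draft)          # unacceptable behavior
    return ("send", draft)
```

Every constant in that sketch is an organizational decision someone has to make, document, and keep current - which is exactly the point.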
The main difference today is that the demos are better at obscuring this dependency for longer.
Language models feel general, even friendly. But there is a brittleness behind the fluency. Context gaps don’t show up until systems are embedded in production workflows, compliance environments, or customer-facing surfaces.
And then all that familiar work begins.
Vendors benefit from this ambiguity
This is probably an uncomfortable truth.
The vaguer a buyer is about what they’re purchasing, the more the vendor benefits.
Ambiguity lets vendors:
Have performance judged on impressions, not data
Diffuse responsibility
Stretch timelines
Reframe costs as “enablement” rather than necessary maintenance
Recast failure as an adoption challenge
“AI” is a particularly powerful label, because it’s a bundle of different values wrapped up in a single term. One word being used to describe speed, consistency, experimentation, automation, and strategic positioning.
The less precise the buyer’s mental model, the more negotiable all of those outcomes remain.
That’s not malice, but it is structural. And to some extent ambiguity benefits the buyer too: vague goals are easier to declare met.
Organizations should be skeptical of any pitch that cannot answer clearly:
What are we actually paying for?
Context is the real budget line
Successful AI deployments are not determined by model quality alone.
The hidden constant is context creation and maintenance.
Formalizing decisions that had been informal
Documenting all the exceptions
Defining boundaries
Encoding organizational preferences
Stabilizing coherent vocabularies
Agreeing on what “good” means
Maintaining those definitions as the underlying realities change
And this is slow, organizational, arguably very human work. It doesn’t scale the way software does. It doesn’t easily show up in benchmarks.
But without it, AI systems will remain very good at producing fast output that’s locally plausible but globally incoherent.
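To make “agreement on what good means” concrete, here is a minimal, hypothetical sketch: a tiny evaluation set that encodes a few shared definitions of “correct” and measures any model against them. The questions, answers, and checks are invented for illustration; real evaluation suites are far larger and need exactly the ongoing maintenance described above.

```python
# Hypothetical sketch: "what good means" written down as executable checks.
# Each case pairs an input with a property every acceptable answer must satisfy.

EVAL_CASES = [
    ("What is our refund window?", lambda a: "30 days" in a),
    ("How do I reset my password?", lambda a: "reset link" in a.lower()),
]

def pass_rate(model) -> float:
    """Fraction of eval cases whose answers satisfy the agreed check.
    `model` is any callable mapping a question string to an answer string."""
    passed = sum(1 for question, ok in EVAL_CASES if ok(model(question)))
    return passed / len(EVAL_CASES)

# A stand-in "model" for demonstration only:
canned = {"What is our refund window?": "Refunds are accepted within 30 days.",
          "How do I reset my password?": "Use the reset link in settings."}
rate = pass_rate(canned.get)
```

The hard part is not the ten lines of scoring code; it is getting the organization to agree on, and keep updating, what goes in `EVAL_CASES`.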
Better buying questions
Don’t ask…
What can this model do?
Ask…
What decisions are we, as an organization, formalizing?
What labor is moving somewhere else, rather than disappearing?
Are we outsourcing our risk to our vendor? How?
What context do we need to maintain indefinitely?
What parts of our organization need to change to make this work?
Without answering those questions, it’s difficult to know if an AI system is succeeding.
The technology we’re looking at is very real.
But so is the work needed to make it actually useful.
If you don’t know what you’re buying, you can’t possibly know whether it’s working.
Further reading:
10 years ago, IBM’s Watson threatened to disrupt healthcare. What happened? Advisory Board Daily Briefing, July 2021
Article cover image by Alex Shuper on Unsplash
