Outcome pricing vs Usage pricing vs Seats
September 2, 2025 · 00:54:27

with Mark Walker, New

Show Notes

Mark Walker is the CEO of New, a next-generation business system for recurring revenue and consumption companies - covering CPQ, contracting, self-service, billing, and consumption tracking. Mark didn't found the company; he was brought in as CEO about three years ago after being introduced to the platform and to co-founder and CTO Tina Kung, whose engineering vision and obsessive customer commitment convinced him to get off the bench.

New's customer roster includes OpenAI, Glean, and Jasper - companies that together represent the majority of enterprise AI compute. What makes that notable is not just the logos. It's what they reveal about where pricing is headed. None of Mark's AI customers are running the same pricing experiments. That uncertainty is exactly why they chose New.

The Token Cost Problem Nobody Is Ready For

Mark opens with a pricing problem that has no historical precedent: the cost of what AI companies sell is collapsing while they're actively trying to sell it. The cost per token is dropping - fast - and it shows no signs of stabilizing. For a chief revenue officer trying to write a three-year contract, this is a genuine strategic crisis.

Imagine you're selling seats, and a competitor announces their seats are now half the price - and they're honoring that for existing customers. That has happened in traditional SaaS. It's painful. Now imagine that the underlying cost of the thing you're selling is plummeting due to forces entirely outside your control. That's the token world. The comparison Mark draws is Moore's Law, but accelerated: not just compute getting cheaper, but the pace of that cheapening accelerating.

The outcome-based companies have a structural advantage here. If you charge per resolved ticket, per signed document, per enriched contact record - your revenue is fixed while your input costs fall. Your margins expand automatically. But if you're a token-selling company, effectively selling compute, you're in a different game entirely.
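The structural point can be made concrete with a toy calculation. This sketch uses invented numbers (none come from the episode) to contrast an outcome-priced product, where revenue per result is fixed, with a token-resale product, where revenue tracks the falling input cost:

```python
# Hypothetical illustration: margins under outcome pricing vs token resale
# as the underlying cost per token falls. All numbers are invented.

def outcome_margin(price_per_ticket: float, tokens_per_ticket: int,
                   cost_per_token: float) -> float:
    """Margin per resolved ticket: the price is fixed, only input cost varies."""
    return price_per_ticket - tokens_per_ticket * cost_per_token

def resale_margin(markup: float, tokens: int, cost_per_token: float) -> float:
    """Margin when reselling tokens at cost * (1 + markup): it shrinks in
    absolute terms as cost_per_token falls, because revenue falls with it."""
    return tokens * cost_per_token * markup

for cost in (2e-5, 1e-5, 5e-6):  # cost per token dropping by half each step
    print(f"cost/token={cost:.0e}  "
          f"outcome margin=${outcome_margin(1.50, 20_000, cost):.2f}  "
          f"resale margin=${resale_margin(0.30, 20_000, cost):.2f}")
```

As the loop runs, the outcome-priced margin expands ($1.10 → $1.30 → $1.40 per ticket) while the resale margin halves at every step - which is the asymmetry Mark is describing.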

Outcome-Based Pricing: Where It Works and Where It Doesn't

Outcome-based pricing is not new. DocuSign sells in envelopes - did the document get signed? That's the outcome. Background check companies sell per check. Data enrichment companies sell per enriched record. These are clean, unitary outcomes that map directly to a business result the buyer already measures. Pricing per result makes sense when the result is unambiguous.

The problem: most software doesn't produce unitary outcomes. New, for example, dramatically accelerates what companies can do - but it's not the only input to their growth. Charging a percentage of revenue would be absurd. Mark's point is that step-based or volume-based pricing (how many invoices, how many quotes) is actually closer to the Snowflake consumption model than true outcome pricing - and that's fine, but founders need to be honest about which model they're actually running.

There's also a subtler conflict of interest. If your AI handles all the easy outcomes, what's left for humans? Mark shares a story from a Fortune 1000 insurance company: their claims processors used to spend 80% of their time helping families through the hardest moments of their lives - emergency funeral funding, settlement navigation. That gave the work meaning. After AI automation, the only cases left were disputes, fraud, and ambiguity. Nobody wanted to do that all day. The company was forced to "salt" the workday - intentionally routing easy cases that the AI could handle back to humans, just to maintain morale. The outcome the software was designed to produce turned out to have costs the pricing model never accounted for.

How New Runs Incompatible Pricing Models Simultaneously

New sells the same product on completely different revenue models at the same time. Some customers buy CPQ on seats. Others buy billing as a percentage of revenue. Others pay flat per-invoice fees - because when you're sending million-dollar invoices, a percentage-of-revenue model stops making sense fast.

This flexibility exists by design. The opening question in every New sales conversation is: where do you see the value? The pricing model follows from the answer. And the commercial flexibility of being able to match pricing to value perception is what lets New have those conversations with the most sophisticated buyers in the world without being boxed in.

The other defense: data. The more data a customer's business runs through New, the more valuable the system becomes and the harder it is to rip out. Mark draws the parallel to how team-based products at OpenAI and Anthropic create switching friction - not through contractual lock-in, but because the product becomes embedded in how the business actually operates. Building toward that kind of data moat is the right long-term strategy regardless of your pricing model.

Price Against Value, Not Against Compute

Mark's core pricing principle for AI-native founders: your differentiation is not the underlying model. It's what you do with it. If you're building a tool that automates the prompting, wraps the agentic capability, and delivers a packaged result that a non-expert can use, your pricing has nothing to do with token costs. It's connected to the AE time you saved, the deal velocity you created, the expert work you replaced.

The competitive landscape - Lovable versus Bolt versus Cursor - is a useful illustration. These companies compete on differentiation, not on raw model access. Design-to-prototype workflows and hard engineering workflows are fundamentally different products even if they run on the same underlying models. If you cannot articulate what you do better than the raw model or a competitor product, you will either get eaten by the model providers or outcompeted by someone who can.

One tactic worth considering: bring-your-own-tokens (BYOT). New offers enterprise customers the option to connect their own API keys. The reason is data sensitivity - no enterprise wants to route its pricing strategy or product roadmap through a shared inference environment where that data might influence recommendations for a competitor. Security and portability are becoming moat dimensions, not just features.
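At its core, BYOT is a key-resolution policy: prefer the customer's own credentials, and fall back to vendor-managed inference only when the customer hasn't opted in. A minimal sketch of that policy follows - every name and field here is hypothetical, not New's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class InferenceConfig:
    """Hypothetical per-tenant inference settings for a BYOT setup."""
    customer_api_key: Optional[str] = None     # key the enterprise brings
    vendor_api_key: str = "vendor-shared-key"  # vendor-managed fallback

    def resolve(self) -> Tuple[str, str]:
        """Return (key, mode). Using the customer's own key keeps their
        sensitive data out of the vendor's shared inference environment."""
        if self.customer_api_key:
            return self.customer_api_key, "byot"
        return self.vendor_api_key, "vendor-managed"

# An enterprise tenant that brought its own key routes around shared inference:
key, mode = InferenceConfig(customer_api_key="acme-secret").resolve()
print(mode)  # byot
```

The design choice worth noting: the fallback is explicit, so "whose infrastructure does my data touch?" has a one-line answer during a security review.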

The New Way of Selling: Transparent Disqualification

Mark's argument is direct: the old way of selling - relationship-based persuasion, happy path demos, reputational closing - is already dead. Not because the relationship doesn't matter. It does. But because buyers now go to ChatGPT, Claude, or Gemini before they talk to your sales team, and they ask the one question that terrifies most founders: who failed with this product and why?

They skip the happy path entirely. They already got that from your marketing. What they want is the honest failure analysis. And if your product, your reviews, your case studies, and your public presence don't have a credible answer to that question, you lose the deal before the first call.

New's response was to train their team to actively talk about why you might not want to buy New. When Nvidia came to them, Mark sent them to a competitor - Logic - because New is not built for highly configurable compute platforms. That referral was uncomfortable. It also built more trust than any product demo would have. The new way of selling is either pure PLG (enable the self-service journey, facilitate the transaction) or deeply collaborative transparency that proves you care more about the buyer's outcome than your own close rate.

The Pace of Change of the Pace of Change Has Changed

New's thesis - spelled out on the back of their company t-shirt (available at new.io/change) - is that it's not just that things are moving faster. The rate at which things speed up is itself accelerating. Mathematically: the interesting quantity isn't velocity (v) but its derivative - Mark initially wrote it down differently, and his engineering team corrected the notation to dV. The distinction matters.
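In standard calculus notation, the claim reads as follows - a sketch of the intended math, not a quote from the shirt. If $x$ is "where the world is," then the pace of change is velocity, and the thesis is a statement about velocity's own derivative:

```latex
v = \frac{dx}{dt}, \qquad \frac{dv}{dt} = \frac{d^2x}{dt^2} > 0
```

That is: not merely positive velocity (things changing), but positive acceleration (the rate of change itself increasing).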

The implication for business systems: any platform built for a stable world is now a liability. Legacy data models that cannot adapt are not competitive disadvantages - they're strategic vulnerabilities. New entrants who have never had to maintain a monolithic system may actually be better positioned than incumbents. The companies choosing New are not doing so because they have time to evaluate enterprise software. They're doing so because they cannot afford to be locked into a system that cannot keep pace with their own experiments.

New's own roadmap reflects this. The AI-facing product roadmap is approximately three months long. The core product has a clearer vision. But the specific capabilities of every feature are renegotiated every quarter, because the underlying models and APIs they build on are materially different from what they were 90 days ago. A customer accidentally went live - built their entire implementation in one overnight session during training and was at 95% before anyone realized it was a production deployment. That kind of compression is now normal.

Tools & Resources Mentioned

  • New (get-new.io) - Revenue lifecycle management platform for recurring revenue and consumption companies. CPQ, contracting, self-service, billing. Founders under $20M ARR can reach out - Mark's team will help you for free.
  • new.io/change - Free t-shirt with the dV equation and New's thesis on the pace of change.
  • Metronome - New's billing consumption partner for metered/consumption-based billing at scale.
  • Logic - Recurring billing platform Mark recommends for highly configurable compute sales (e.g., helicopter-parts-style enterprise compute). The competitor New refers clients to when New isn't the right fit.

Frameworks

Outcome-Based Pricing

A pricing model where the customer pays per discrete business result delivered - per signed document, per resolved ticket, per enriched record. Works cleanly when the software produces a single, unambiguous outcome. Breaks down when the product is one input among many to a complex business output. The structural advantage in an AI world: as compute costs fall, outcome-priced companies see margin expansion automatically.

The Salting Problem

When AI automation removes all the 'good cases' from human workers' queues, leaving only edge cases and disputes, employee experience collapses. Companies then have to intentionally route easy cases back to humans - 'salting' the workday - to maintain morale and expertise. A pricing or automation model that does not account for this cost will generate churn and capability loss that undermines the value it created.

Bring Your Own Tokens (BYOT)

An enterprise architecture pattern where customers connect their own AI API keys rather than using vendor-managed inference. Primary driver: data sensitivity - enterprises do not want pricing strategies, product roadmaps, or competitive data flowing through shared model environments. A competitive differentiation dimension becoming increasingly important in AI product design.

Transparent Disqualification

Mark Walker's framing for the new way of selling: train your team to actively discuss why a customer might not want your product. The buyer is already running their own failure analysis via AI tools before the first call. Meeting them there - with honest, specific scenarios where your product is the wrong fit - builds more trust than any happy-path demo. The counterintuitive result: more closes, because the buyers who remain are pre-qualified and confident.

dV - Pace of Change of Pace of Change

New's core thesis: it's not just that the world is moving faster, it's that the acceleration itself is increasing. Traditional planning horizons (annual roadmaps, multi-year contracts, stable pricing models) are not just suboptimal - they are strategically dangerous. The companies best positioned are those who can run experiments, change models, and adapt pricing without being locked into the last experiment they ran.

Data Moat (Switching Friction)

The competitive defense that emerges when a product becomes so embedded in how a business operates that switching costs are prohibitive - not due to contractual lock-in but because the data, configuration, and workflow intelligence built inside the product cannot be easily replicated elsewhere. OpenAI and Anthropic's team-based shared workspaces are examples: the collaboration history and embedded workflows are the moat, not the model.

FAQ

Why are all the major AI companies running different pricing experiments?

Because nobody knows the right answer yet. The pace of model capability improvement, token cost reduction, and competitive pressure from other AI providers means that any pricing model that seems optimal today may be obsolete in six months. Mark's observation is that his AI customers - collectively representing the majority of enterprise AI compute - are unanimous on one thing: they don't want to be locked into a system that commits them to a single pricing structure before the market has settled. The diversity of experiments is rational, not chaotic.

What's the difference between outcome-based pricing and consumption-based pricing?

Outcome-based pricing charges per business result - per resolved ticket, per signed contract, per verified background check. Consumption-based pricing charges per unit of activity - per token, per API call, per invoice generated. The distinction matters because consumption pricing can feel like outcome pricing but isn't. Per-invoice pricing looks like an outcome ("an invoice went out"), but the customer may care about revenue collected, not documents generated. True outcome pricing requires a clear, unambiguous business event that the buyer already measures and assigns value to independently of your software.
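A toy billing calculation makes the difference tangible. All rates here are hypothetical; the point is that the two models bill against different denominators:

```python
def consumption_bill(api_calls: int, rate_per_call: float) -> float:
    """Consumption pricing: pay per unit of activity, regardless of result."""
    return api_calls * rate_per_call

def outcome_bill(tickets_resolved: int, rate_per_resolution: float) -> float:
    """Outcome pricing: pay only per business result the buyer values."""
    return tickets_resolved * rate_per_resolution

# Suppose 10,000 API calls produced only 300 resolved tickets this month:
print(consumption_bill(10_000, 0.02))  # 200.0 - billed on activity
print(outcome_bill(300, 0.75))         # 225.0 - billed on results
```

Under consumption pricing, the bill is identical whether those 10,000 calls resolved 300 tickets or zero; under outcome pricing, the bill is zero unless something the buyer already measures actually happened.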

How should an AI startup think about pricing if compute costs keep falling?

Price against your differentiated value, not against compute. If your product saves an AE two hours per day, that value doesn't change when token prices drop. The falling compute costs improve your margins - they don't change your buyer's ROI calculation. Where founders get into trouble is pricing based on token consumption without a clear value layer on top. If the buyer perceives you as reselling compute with light wrapping, you will get squeezed as model providers compete on raw capabilities. Build the value layer, price to that, and let compute cost reduction flow to your margins.

Why does New sell the same product on incompatible pricing models simultaneously?

Because different customers find value in different places. Some find value in the number of seats using CPQ. Others find value in billing efficiency and want to pay as a percentage of revenue processed. Others are sending massive invoices and will only pay flat per-document. The pricing model is a signal about where the customer perceives the value - and forcing every customer into the same model means you're leaving money on the table for customers who see the value differently, and pricing yourself out for customers who don't fit your assumed value map.

What is the 'new way of selling' and how should founders prepare for it?

Before any buyer takes a sales call, they are querying AI tools - ChatGPT, Claude, Gemini - for the failure cases of your product. They are not asking what your product does. They already know. They are asking who failed, why, and what went wrong. Founders who ignore this are walking into calls with buyers who have already formed a risk hypothesis. The counter-strategy: be the most transparent person in the room about your product's limits. Train your team to actively discuss scenarios where you are not the right fit. Proactively refer out when you are not the right answer. The buyers who remain after that conversation are your highest-quality pipeline.

What does Mark mean when he says 'nothing grows in the dark'?

The value of keeping your startup idea secret has dropped to near zero. The risk that someone steals a well-explained idea is lower than the cost of building in isolation without the community, feedback, and early champions that public sharing creates. More importantly: if you explain your idea to someone and they can talk you out of it, that's information you needed. If they can't, that's the signal that you have real conviction. Share the idea. Build the community around it. The secrecy mindset is a legacy heuristic from a world where competitive intelligence moved slowly. That world no longer exists.

How is New helping founders under $20M ARR?

Mark's explicit offer: if your company is under $20M ARR, reach out to New and they will help you with their platform. The reason is not charity - it's learning. New's best customers are the biggest, most demanding AI companies in the world, and those companies push New's capabilities in ways that make the product better. Smaller, earlier-stage founders provide a different kind of learning: what does the platform look like for a company that doesn't yet know what its pricing model should be? That early-stage product intelligence is valuable regardless of the revenue it generates.
