Category Archives: Metrics

Slides From My SaaS Metrics Palooza 2025 Session on Selling Work vs. Selling Software

Today, I presented at SaaS Metrics Palooza 2025 on the differences between selling work and selling software. I’d like to thank my metrics brother, Ray Rike, for inviting me to speak and I’d like to thank everyone who attended the session.

Topics covered include:

  • Defining outcomes
  • Contrasting outcomes vs. usage
  • The outcomes stack and intermediate vs. end outcomes
  • How a dating site would price based on outcomes vs. subscriptions
  • The basic trade-offs in selling subscriptions vs. outcomes
  • How to capture value created and share it between the vendor and customer
  • How selling outcomes can (radically) expand the total available market (TAM)
  • Jevons Paradox and what happens when we make things radically cheaper
  • Selling virtual humans vs. jobs-to-be-done
  • A long list of links to references for additional reading

You can download a PDF of the slides here. You should be able to see a recording of the session here. (Frankly, I’m not 100% sure that link will work, but you can try.) And I’ve embedded the slides below.

Slides from Balderton Webinar on Aligning Product and GTM Using Customer Value Metrics

Today Dan Teodosiu, Thor Mitchell, and I hosted a Balderton webinar entitled Aligning Product and Go-To-Market (GTM) Using Customer Value Metrics. We are all executives in residence (EIRs) at Balderton — Dan covers technology, Thor covers product, and I cover go-to-market — and, in a display of cross-functional walking-the-talk, we came together to present this session on alignment.

The session was based on an article Dan and I wrote, by the same title, which was published on the Balderton site last month and about which I wrote here. The purpose of this post is to share the slides from that webinar, which are available here and embedded below.

Thank you to everyone who attended the session and who asked questions in advance or in the chat. I’m sorry that we didn’t have the time to answer each question, but if you drop one into the comments below, I’ll do my best to answer it here and/or ask Dan or Thor to weigh in as well. I’m not aware if Balderton is going to make a video of the session available, but if they do I’ll revise this post and put a link here.

“All Models Are Wrong, Some Are Useful.”

“I have a map of the United States … actual size. It says, Scale: 1 mile = 1 mile. I spent last summer folding it. I also have a full-size map of the world. I hardly ever unroll it.” — Steven Wright (comedian)

Much as we build maps as models of the physical world, we build mathematical models all the time in the business world. For example:

  • Sales bookings capacity models
  • Marketing inverted funnel models
  • Marketing attribution models

These models can be incredibly useful for planning and forecasting. They are, however, of course, wrong. They’re imperfect at prediction. They ignore important real-world factors in their desire for simplification, often relying on faith in offsetting errors. Reality rarely lands precisely where the model predicted. Which brings to mind this famous quote from the British statistician George Box.

“All models are wrong. Some are useful.” — George Box

It’s one of those quotes that, if you get it, you get it. (And then you fall in love with it.) Today, I’m hoping to bring more people into the enlightened fold by discussing Box’s quote as it pertains to three everyday go-to-market (GTM) models.

First, it’s why we don’t want models to be too precise or too complex. They’re not supposed to be exact. They’re not supposed to model everything; they’re supposed to be simplified. They’re just models. The goal is to be useful, not exact.

For example, in finance, if we need to make a precise budget that handles full GAAP accounting treatment then we do that. We map every line to a general ledger (GL) account, do GAAP treatment of revenue and expense, model depreciation and allocations, et cetera. It’s a backbreaking exercise. And when you’re done, you can’t really play with it to learn and to understand. It’s precise, but it’s unwieldy — a bit like Steven Wright’s full-scale map of the US. It’s useful if you need to bring a full-blown budget to the board for approval, but not so useful if you’re trying to understand the interplay between sales productivity, sales ramping, and sales turnover. You’d be far better off looking at a sales bookings capacity model.

To take a different example, it’s why business school teaches you discounted cashflow (DCF) analysis for capital budgeting. DCF basically throws out GAAP and asks, what are the cashflow impacts of this project? The assumption being that if the DCFs work out, then it’s a good investment and that will eventually show up in improved GAAP results. Notably — and I was really confused by this when I first learned capital budgeting — they don’t teach you to build a 20-year detailed GAAP budget with different capital project assumptions and then do scenario analysis. Instead, they strip everything else away and ask, what are the cashflow impacts of this project versus that one?

In the rest of this post, I’ll explore Box’s quote as it relates to the three SaaS GTM models I discussed in the introduction. We’ll see that it applies quite differently to each.

Sales Bookings Capacity Models

These models calculate sales bookings based on sales hiring and staffing (including attrition), sales productivity, and sales ramping (i.e., the productivity curve new sellers follow as they spend their first few quarters at the company). Given those variables and assuming some support resources and ratios (e.g., AE/SDR), they pop out a series of quarterly bookings numbers.

While simple, these models are usually pretty precise and thus can be used for both planning and forecasting (e.g., predicting the bookings number based on actual sales bookings capacity). Thus, these are a lot useful and usually only a little wrong. In fact, some CEOs, including some big name ones I know, walk around with an even simpler version of this model in their heads: new bookings = k * (the number of sellers) where that number might be counted at the start of the year or the end of Q1. (This is what can lead to the sometimes pathological CEO belief that hiring more sellers directly leads to bookings, but hiring anything else does not, or at least only indirectly.)
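
The mechanics described above fit in a few lines. Here’s a minimal sketch of a capacity model (the ramp curve and productivity numbers are hypothetical, not figures from this post):

```python
# Minimal sales bookings capacity model (illustrative numbers).
# Each seller follows a ramp curve: the fraction of steady-state
# productivity they deliver in their first few quarters at the company.

RAMP = [0.25, 0.50, 0.75, 1.00]      # quarters 1-4 after hire, then fully ramped
STEADY_STATE = 300_000               # fully ramped bookings per seller per quarter ($)

def quarterly_bookings(tenures):
    """tenures: each active seller's tenure in quarters (1 = first quarter)."""
    total = 0.0
    for t in tenures:
        ramp_pct = RAMP[t - 1] if t <= len(RAMP) else 1.0
        total += STEADY_STATE * ramp_pct
    return total

# Example: 8 fully ramped sellers, 2 hired last quarter, 2 hired this quarter
tenures = [8] * 8 + [2] * 2 + [1] * 2
print(quarterly_bookings(tenures))   # 8*300K + 2*150K + 2*75K = 2,850,000
```

The back-of-envelope CEO version is the degenerate case where RAMP is ignored and every seller counts at steady state.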

Marketing Inverted Funnel Models

These models calculate the quarterly demand generation (demandgen) budget given sales booking targets, a series of conversion rates (e.g., MQL to SAL, SAL to SQL, SQL to won), and assumed phase lags between conversion points. They effectively run the sales funnel backwards, saying if we need this many deals, then we need this many SQLs, this many SALs, this many MQLs, and this many leads at various preceding time intervals.
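
A minimal sketch of running the funnel backwards (the conversion rates and deal size here are made up for illustration; phase lags would then shift each required count to an earlier quarter):

```python
# Inverted funnel: given a bookings target, how many deals, SQLs, SALs,
# and MQLs do we need? (Illustrative rates, not benchmarks.)

AVG_DEAL = 50_000           # assumed $ per won deal
RATES = {                   # assumed conversion rate at each stage
    "SQL->won": 0.20,
    "SAL->SQL": 0.80,
    "MQL->SAL": 0.50,
}

def inverted_funnel(bookings_target):
    deals = bookings_target / AVG_DEAL
    sqls = deals / RATES["SQL->won"]
    sals = sqls / RATES["SAL->SQL"]
    mqls = sals / RATES["MQL->SAL"]
    return {"deals": deals, "SQLs": sqls, "SALs": sals, "MQLs": mqls}

# A $2M bookings target implies 40 deals, 200 SQLs, 250 SALs, and 500 MQLs
print(inverted_funnel(2_000_000))
```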

If you’re selling anything other than toothbrushes, these models are wrong. Why? Because SaaS applications, particularly in enterprise, are high-consideration purchases that involve multiple people over sometimes prolonged periods of time. (At Salesforce, we won a massive deal on my product where the overlay rep had been chasing the deal for years, including time at his prior employer.)

These models are wrong because they treat non-linear, over-time behavior as a linear funnel. I liken the reality of the high funnel more to a popcorn machine: you’re never sure which kernel is going to pop or when, but if you add this many kernels and this much heat, then some percentage of them normally pops within N quarters. These models are a lot wrong — wrong from first principles, and not by just a little — but they are also a lot useful.

I think they work because of offsetting errors theory, which requires the company to be on a relatively steady growth trajectory. Sure, we’re modeling that last quarter’s MQLs are this quarter’s opportunities, and that’s not right (because many are from the quarter before that), but — as long as we’re not growing too fast or, more importantly, changing growth trajectory — that will tend to come out in the wash.

Note that if you wanted to, you could always build a more sophisticated model that took into account MQL aging — or today use an AI tool that does that for you — but you’ll still always be faced with two facts: (1) the trade-off between model complexity and usefulness and (2) the fact that even the more sophisticated model will still break when the growth trajectory changes or reality otherwise changes out from underneath the model. Thus, I always try to build pretty simple models and then be pretty careful in interpreting them. Think: what’s going to break this model if reality changes?

Marketing Attribution Models

I try not to write much about marketing attribution because it’s quicksand, but I’ll reluctantly dip my toe today. Before proceeding, I encourage you to take a moment to buy a Marketing Attribution is Fake News mug which is a practical, if passive-aggressive, vessel from which to drink your coffee during the next QBR or board meeting.

Marketing attribution is the attempt to assign credit for marketing-generated opportunities (itself another layer of attribution problem) to the marketing channels that generated them. In English, let’s assume we all agree that marketing generated an opportunity. But that opportunity was created at a company where 15 people over the prior 6 quarters had engaged in some marketing program in some way — e.g., clicking an ad, attending a webinar, downloading a white paper, talking to us at a conference, etc.

There are typically two levels of reduction: first, we identify one primary contact from the pool of 15 and second, we identify one marketing program that we decide gets the credit for the opportunity. Typically, people use last-touch attribution, assigning credit to the last program the primary contact engaged with before the opportunity was created. This will overcredit lower-funnel programs (e.g., executive dinners) and undercredit higher-funnel programs (e.g., clicking on an ad). Some people use first-touch attribution, reversing the problem to overcredit higher-funnel programs and undercredit lower-funnel ones. Knowing that neither of those approaches is great, some send complexity to the rescue, using points-based attribution where each touch by each person scores one or more points, and you add up those points and then allocate credit across channels or programs on a pro rata basis. This is notionally more accurate, but the relative point assignments can be arbitrary and the veil of calculation confusion generally erodes trust in the system.
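
To make the three schemes concrete, here’s a small sketch over one hypothetical contact’s touch history. The point values are arbitrary, which is exactly the problem with points-based attribution:

```python
# Three attribution schemes over a single contact's touch history
# (hypothetical touches and point values, purely illustrative).

touches = [                      # (program, points) in chronological order
    ("ad_click", 1),
    ("webinar", 3),
    ("whitepaper", 2),
    ("exec_dinner", 5),
]

def last_touch(touches):
    """All credit to the final pre-opportunity touch (lower funnel)."""
    return {touches[-1][0]: 1.0}

def first_touch(touches):
    """All credit to the earliest touch (higher funnel)."""
    return {touches[0][0]: 1.0}

def points_based(touches):
    """Pro rata credit by (arbitrary) points per touch."""
    total = sum(p for _, p in touches)
    credit = {}
    for program, p in touches:
        credit[program] = credit.get(program, 0) + p / total
    return credit

print(last_touch(touches))    # all credit to the exec dinner
print(first_touch(touches))   # all credit to the ad click
print(points_based(touches))  # pro rata shares summing to 1.0
```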

The correct way, in my humble opinion, to do attribution analysis is to approach it with humility, view it as a triangulation problem, and to make sure people absolutely understand what you’re showing them before you show it (e.g., “we’ll be looking at marketing channel performance using last-touch based attribution on the next slide and before I show it, I want to ensure that everyone understands the limits of interpretation of this approach.”) Then follow any attribution-based performance analysis with some reverse-touch analysis where you show all the touches over the prior two years, deal by deal, for a small set of deals chosen by the CRO in order to demonstrate the messy, ground-level reality of prospect interactions over time. Simply put, it’s the CMO’s job to decide how to allocate resources in this very squishy world, to make those decisions (e.g., do we do tradeshow X and do we spend $Y) in active discussion with the CRO as their partner and with a full understanding of the available data and the limitations on its interpretability. The board or the e-staff simply can’t effectively back-seat drive this process by looking at one table and saying, “OMG, tradeshow oppties cost $25K each, let’s not do any more tradeshows!” If only the optimization problem were that simple.

But, back to the Box quote. How does it apply to attribution? These models are a lot wrong, at best a little useful, and even potentially dangerous. Hence my recommendations about disclaiming the data before showing it, using triangulation to take different bearings on reality, and doing reverse-touch analysis to immediately re-ground anyone floating in a cloud of last-touch-based over-simplification.

Note that the existence of next-generation, full-funnel attribution tools such as Revsure doesn’t radically change my viewpoint here because we are talking about the fundamental principles of models. They’re always wrong — especially when trying to model something as complex as the interactions of over 20 people at a customer with 5 people and 15 marketing programs at a company, all while those people are talking to their friends and reading blogs and seeing billboards from a vendor. I believe tools like Revsure can take the models from a lot wrong to a little wrong, and ergo improve them from potentially dangerous to useful. But you should still show the reverse-touch analysis to keep people grounded.

And Box’s quote still applies: “All models are wrong. Some are useful.” And what a lovely quote it is.

A CEO’s High-Level Guide to GTM Troubleshooting

I’ve written about this topic a lot over the years, but never before integrated my ideas into a single high-level piece that not only provides a solution to the problem, but also derives it from first principles. That’s what I’ll do today. If you’re new to this topic, I strongly recommend reading the articles I link to throughout the post.

Scene: you’re consistently having trouble hitting plan. Finance is blaming sales. Sales is blaming marketing. Marketing is blaming the macro environment. Everyone is blaming SDRs. Alliances is hiding in a foxhole hoping no one remembers to blame them. E-staff meetings resemble a cage fight from Beyond Thunderdome, but it’s a tag-team match with each C-level tapping in their heads of operations when they need a break. Numbers are flying everywhere. The shit is hitting the proverbial fan.

The question for CEOs: what do I do about this mess? Here’s my answer.

First:

  • Avoid the blame game. That sounds much easier than it is because blame can vary from explicit to subtle and everyone’s blame sensitivity ears are set to eleven. Speak slowly, carefully, and factually when discussing the situation. You might wonder why everyone is pointing fingers, and the reason might well be you.
  • Solve the problem. Keep everyone focused on solving the problem going forward. Use blameless statements of fact when discussing historical data. For example, say “when we start with less than 2.5x pipeline coverage, we almost always miss plan” as opposed to “when marketing fails on pipeline generation, we miss plan unless sales does their usual heroic job in pipeline conversion.”

Then reset the pipeline discussion by constantly reminding everyone of these three facts:

  • How do you make 16 quarters in a row? One at a time.
  • How do you make one quarter? Start with sufficient pipeline coverage.
  • And then convert it at your target conversion rate.

This reframes the problem into making one quarter — the right focus if you’ve missed three in a row.

  • This will force a discussion of what “sufficient” means
  • That is generally determined by inverting your historical week 3 pipeline conversion rates
  • And adjusting them as required, for example, to account for the impacts of big deals or other one-time events
  • This may in turn reveal a conversion rate problem, where actual conversion rates are below target and/or simply too low to support a sales model that hits the board’s target customer acquisition cost (CAC) ratio. For example, you generally can’t achieve a decent CAC ratio with a 20% conversion rate and the 5x pipeline coverage requirement it implies. In this case, you will need to split your energy between improving conversion rates and improving starting coverage. While conversion rates are largely a sales team issue, there is nevertheless plenty that marketing and alliances can do to help: marketing through targeting, tools, enablement, and training; alliances through delivering higher-quality opportunities that often convert at higher rates than either inbound or SDR outbound.
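
The inversion described above is simple enough to sketch in a few lines (the rates and targets here are illustrative):

```python
# Required starting coverage is the inverse of the historical week-3
# pipeline conversion rate, adjusted as needed for one-time events.

def required_coverage(conversion_rate, adjustment=1.0):
    """Pipeline coverage multiple needed at the start of the quarter."""
    return adjustment / conversion_rate

def starting_pipeline_needed(bookings_target, conversion_rate):
    return bookings_target * required_coverage(conversion_rate)

# A 25% week-3 conversion rate implies 4.0x coverage;
# a $3M bookings target then needs $12M of starting pipeline.
print(required_coverage(0.25))
print(starting_pipeline_needed(3_000_000, 0.25))
```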

It also says you need to think about each and every quarter. This leads to three critical realizations:

  • That you must also focus on future pipeline, but segmented into quarters, and not on some rolling basis
  • That you need to forecast pipeline (e.g., for next quarter, if not also the one after that)
  • That you need some mechanism for taking action when that forecast is below target

The last point should cause you to create some meeting or committee where the pipeline forecast is reviewed and the owners of each of the four to six pipeline sources (i.e., marketing, AE outbound, SDR outbound, alliances, community, PLG) can discuss and then take remedial measures.

  • That body should be a team of senior people focused on a single goal: starting every quarter with sufficient pipeline coverage.
  • It should be chaired by one person who must be seen as wearing two hats: one as their functional role (e.g., CMO) and the other as head of the pipeline task force. That person must be empowered to solve problems when they arise, even when they cross functions.
  • Think: “OK, we’re forecasting 2.2x starting coverage for next quarter instead of 2.5x, which is a $2M gap. Who can do what to get us that $2M?”
  • If that means shifting resources, they shift them (e.g., “I’ll defer hiring one SDR to free up $25K to spend on demandgen”).
  • If that means asking for new resources, they ask (e.g., “I’ll tell the CEO and CFO that if we can’t find $50K, then we think we’ve got no chance of hitting next quarter’s starting coverage goals”).
  • If that means rebalancing the go-to-market team, they do it. For example, “we’ve only got enough pipeline to support 8 AEs and we’ve got 12. If we cut two AEs, we can use that money to invest in marketing and SDRs to support the remaining 10.”
  • Finally, if you need to focus on both pipeline coverage and conversion rates, then this same body, in part two of the meeting, can review progress on actions designed to improve conversion.

Teamwork and alignment is not about behaving well in meetings or only politely backstabbing each other outside them. It’s about sitting down together to say, “well, we’re off plan, and what are we going to do about it?” And doing so without any sacred cows in the conversation. Just as no battle plan survives first contact with the enemy, no pipeline plan survives first contact with the market. That’s why you need this group and that’s what it means to align sales, marketing, alliances, and SDRs on pipeline goals. It’s the translation of the popular saying, “pipeline generation is a team sport.”

Notice that I never said to heavily focus on individual pipeline generation (“pipegen”) targets. Yes, you need them and you should set and track them, but we must remember the purpose of pipegen is to hit starting pipeline coverage goals. So just as we shouldn’t overly focus on other upstream metrics — from dials to alliances-meetings to MQLs — we shouldn’t overly focus on pipegen targets to the point where they become the end, not the means. While pipegen is certainly closer to starting coverage than MQLs or dials, it is nevertheless an enabler, in this case, one step removed.

Yes, tracking upstream metrics is important and for marketing I’d track both MQLs and pipegen (via oppty count, not dollars), but I’d neither pop champagne nor tie the CMO to the whipping post based on either MQLs or pipegen alone.

Don’t get me wrong — if your model’s correct, it should be impossible to consistently hit starting pipeline coverage targets while consistently failing on pipegen goals. But in any given quarter, maybe the AEs are short and marketing covers, or marketing’s short and alliances covers. The point is that if the company hits the starting coverage goal, we’re happy with the pipeline machine and if we don’t, we’re not, regardless of whether any individual pipeline source hit its pipegen goal that quarter. Ultimately, this point of view drives better teamwork because there’s no shame in forecasting a light result against target or in asking for help to cover it.

Finally, I’d note an odd situation I sometimes see that looks like this:

  • Sales consistently achieves bookings targets, but just by a hair
  • Marketing consistently underachieves pipeline targets

For example, sales consistently converts pipeline at 25% off 4x coverage and that 25% conversion rate is just enough to hit plan. But, because the CRO likes cushion, he forces the CMO to sign up for 5x coverage. Marketing then consistently fails to deliver that 5x coverage, delivering 4x coverage instead.
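
The arithmetic of that pathology, using the numbers above, is worth seeing side by side:

```python
# The cushion pathology in numbers: sales makes plan off 4x coverage at
# 25% conversion, yet marketing "fails" the padded 5x target.

def plan_attainment(coverage, conversion_rate):
    """Fraction of the bookings plan achieved."""
    return coverage * conversion_rate

print(plan_attainment(4.0, 0.25))   # 1.0 -> sales hits plan, just barely
print(4.0 / 5.0)                    # 0.8 -> marketing at 80% of its 5x target
```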

This is an unhealthy situation because sales is consistently succeeding while marketing is consistently failing. If you believe, as I do, that if sales is consistently hitting plan then, definitionally, marketing has provided everything it needed to (from pipeline to messaging to enablement), then you can see how pathological this situation is. Sales is simply looking out for itself at the expense of marketing. That’s good for the company in the short term because you’re consistently hitting plan, but bad in the long term because high turnover in the marketing department will impede its ability to deliver sufficient pipeline in the future.

For more on this topic, please listen to our podcast episode of SaaS Talk with the Metrics Brothers entitled: Top-Down GTM Troubleshooting, Dave’s Method.

How to Calculate Cost Per Opportunity

My marketing professor once said that the answer to every marketing question is, “It depends.” Thus, the important part is knowing on what.

So, how do you calculate the cost/opportunity? Well, it depends! On what? On the specific question you’re trying to answer. When people ask about cost/opportunity, they usually have one of two things in mind:

  • An efficiency question — e.g., how efficiently does marketing spend convert into sales opportunities (oppties)?
  • A cost question — e.g., how much would it cost to get 50 more oppties if we needed them?

Knowing which question you’re being asked has a big impact on how to calculate the answer. Let’s illustrate this by looking at this typical marketing budget, which is allocated roughly 45/45/10 across people, programs, and technology:

  • People: $4.05M
  • Programs: $4.05M (of which demandgen programs: $3.25M)
  • Technology: $0.90M
  • Total: $9.00M

If this marketing team generated 1,000 oppties, then the average total marketing cost/oppty is $9,000 = $9M/1K oppties. You might argue that’s a good overall marketing efficiency metric and try to benchmark it. But those benchmarks will be hard to find.

Why?

Because there’s a better overall marketing efficiency metric: the marketing customer acquisition cost (CAC) ratio = (last-quarter marketing expense)/(this-quarter new ARR). Why is the marketing CAC a better marketing efficiency metric than average total marketing cost/oppty?

  • It’s more standard. While relatively few startups break their CAC ratio into two parts, virtually every startup already calculates CAC ratio or CAC payback period (CPP). People are familiar with the concept and the math is mostly already done — just back out the sales expense.
  • There is less room for calculation debates. While neither total cost/oppty nor marketing CAC is hard to calculate, because marketing CAC is a derivative of CAC, some nagging questions are already answered for you – e.g., Is it all marketing or just a part? Is it GAAP expense or cash expense? Answers: look at how you calculate your CAC ratio for guidance.
  • The phase shift. The CAC ratio compares last quarter’s expense to this quarter’s new ARR in an attempt to better match expenses and results.
  • There are more benchmark data sets. I can think of about ten sources for CAC ratio data (not all of which make the sales/marketing split). I can think of approximately zero for average total marketing cost/oppty. You can’t benchmark a metric without good data sets to compare against.
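
As a sketch of the metric itself, including the phase shift (the figures below are made up):

```python
# Marketing CAC ratio with the one-quarter phase shift: last quarter's
# marketing expense per dollar of this quarter's new ARR.

def marketing_cac_ratio(last_q_marketing_expense, this_q_new_arr):
    return last_q_marketing_expense / this_q_new_arr

# $2.25M of marketing expense last quarter, $4.5M of new ARR this quarter
print(marketing_cac_ratio(2_250_000, 4_500_000))   # 0.5
```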

So if someone’s asking you about marketing efficiency by looking at average total marketing cost/oppty, I’d politely redirect them to the marketing CAC ratio.

But say they’re looking at cost. Specifically, that the company is forecasting a pipeline generation shortfall of about 50 oppties and the CEO asks marketing: How much money will it take for you to generate 50 more?

Is $9,000 * 50 = $450,000 even correct?

The answer is no. To get 50 more oppties, you don’t need to hire 5% more marketers, boost the CMO’s salary by 5%, up the PR agency retainer by 5%, increase the userconf budget by 5%, spend 5% more on billboards, or increase tech infra spending by 5%. Thus, you should not multiply the average total marketing cost of an oppty by the number of oppties. You should multiply the incremental cost of an oppty by 50.

And the best answer we have here, at our fingertips, for the incremental cost of an oppty is the average demandgen programs cost/oppty. In our example, that’s $3,250. So, to generate 50 more oppties would cost $162,500. That’s good news because it’s a whole lot less than $450,000 and because it’s correct.

In short, cost/oppty = total demandgen cost / number of oppties.

This raises a potential rathole question, which I call the low-hanging fruit problem. Most demandgen marketers argue that picking oppties out of the market is like picking apples from a tree. First, you pick the easy ones, which doesn’t cost much. But the more apples you need, the higher up the tree you have to go. That is, the cost of picking the 1,000th apple is a lot higher than the cost of picking the first one. Put differently, the average cost of picking 1,000 apples is less than the incremental cost of picking one more.

While I think there’s some truth to this argument — and a lot of truth when it comes to paid search — you can’t let yourself slide into an analytical rathole. As CMO, a key part of your job is to always know the incremental cost of generating 50 more opportunities. Because — as veteran CMOs know well — either or both of these things happen with some frequency:

  • There is an oppty shortfall and someone asks how much money you need to fill it. You should answer instantly.
  • There is a money surplus and on day 62 of the quarter the CFO approaches you, asking if you can productively spend $100K this quarter. The answer should always be, “yes” and you should start deploying the money the next day.

That’s what you might call “agile marketing.” And you get agile by doing the math in advance and having the incremental spending plan in your pocket, waiting for the day when someone asks.

To make things easy, unless and until you have a spending plan that answers the cost of getting 50 more oppties, just use your average demandgen cost/oppty and uplift it by 25% to adjust for the low-hanging fruit problem. That way you can answer the boss quickly and you’ve left yourself some room.
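
That back-of-envelope math, using the post’s example figures ($3.25M of demandgen spend, 1,000 oppties, a 25% uplift), looks like this:

```python
# Agile-marketing math done in advance: average demandgen cost/oppty,
# uplifted 25% for the low-hanging fruit problem, times the extra oppties.

def incremental_oppty_budget(demandgen_spend, oppties, extra_oppties, uplift=0.25):
    avg_cost = demandgen_spend / oppties          # $3,250 in the example
    return avg_cost * (1 + uplift) * extra_oppties

# 50 more oppties at ~$4,062.50 each (3,250 * 1.25) -> $203,125
print(incremental_oppty_budget(3_250_000, 1_000, 50))
```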

Let’s close this out by raising a common objection to using demandgen costs only. It sounds something like this:

If I use demandgen cost only, someone might say that I’m understating the true cost of a marketing-generated opportunity and I’m going to get in trouble.

Well, that certainly can happen. People can accuse you of anything. There are two ways to avoid this.

  • Speak precisely. If asked, say “the average demandgen cost of an oppty is $3,250” and “the incremental cost of getting 50 more will be around $203,000, or roughly $4,060 per oppty.” (An approximately 25% uplift.)
  • Use footnotes. If making slides, always put definitions in the footer. So, if a row is labeled “cost/oppty” then make a footnote that explains that it’s demandgen cost only. Better yet, label the row “demandgen cost/oppty” and use the footnote to explain why that’s a better proxy for an incremental cost — which is the thing most people are worried about.

And finally, remind them if they want to discuss overall marketing efficiency, they should change slides and look at the marketing CAC ratio, which does proudly include every penny of marketing expense. And if you’re really, really good, ask them to skip to the slide that shows the sales/marketing expense ratio and discuss that.