Let’s say you’re a CEO. You don’t come from a marketing background. At every quarterly business review (QBR) and board meeting, your marketing head presents a chart like this:
What happens next? More than likely, after 10 or 15 minutes of effectively random probes into this minefield of numbers, you do what any good CEO would under the circumstances. You say:
“Next slide, please.”
To paraphrase Thoreau, the mass of CEOs lead lives of quiet marketing desperation. Slides like this are why. What’s wrong with this slide? 
Well, to get this out of my system, there are a number of what I’d call mechanical problems:
- It mixes different time periods as the reader scans across columns, making it difficult to spot trends. Better to group quarters and years on the right.
- It has excess precision. The extra digits are unnecessary and impede comprehension. Better to show pageviews by the thousand, demandgen by the kilodollar ($K), cost/MQL without the pennies, and conversion rates to the percentage point, not the basis point.
- It contains too many rows. Even if they’re all of interest (and they aren’t), it’s simply too much.
- It fails to use formatting, such as commas, to make figures more easily grasped.
These details aren’t nits. Particularly if you’re a finance or ops person (e.g., salesops, marketingops), your job is to present data in a way that is clear, consistent, and comprehensible. In short, your job is to “light shit up” when there are problems. This slide does anything but.
More importantly, there are what I’d call conceptual problems with the slide:
- It’s a sea of numbers that drowns the reader in data, making it impossible to find insights. To paraphrase the old saw, “all these trees are making it hard for me to see the forest.”
- It’s supposed to be a summary of the funnel for a board meeting or QBR. This summary doesn’t summarize.
- It contains numerous rows that are not appropriate for such a summary and serve only to cognitively overload the reader.
- Worse yet, it omits rows of high potential interest. Specifically, unit-cost rows (e.g., cost/oppty) that can help readers understand the viability of the business model.
In the above table, I tried to hide a big problem floating in that sea of numbers. Did you find it? Did the slide help you do so?
Before transforming the table into something more useful, let’s talk briefly about what we’re going to do. Three simple things:
- Take hops down the funnel instead of steps. Instead of looking at each conversion rate as we descend, we will look only at MQLs, stage 1 and stage 2 oppties, closed/won deals, and associated conversion rates between them. Any problems involving intermediate conversion rates between those hops will usually show up in those numbers, anyway.
- Add cost information. Ultimately, the business cares about how much things cost, not just what the rates are compared to benchmarks and to history.
- Be sensitive to cognitive overload, both in terms of the size of the table and the total number of digits we’re going to put before the reader.
In addition, I’m going to keep website unique visitors, not because it strictly helps the funnel analysis, but simply because I think it’s a good leading indicator, and I’m going to add information about new ARR booked and the average sales price (ASP). In the end, the point of all this marketing is to bring in new ARR. Finally, I’m going to add highlighting.
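The hop metrics themselves are just a handful of divisions. Here’s a minimal sketch with illustrative quarterly figures (my own made-up inputs, chosen to land near the final quarter’s numbers, not values taken from the slide):

```python
# Hop-based funnel metrics for one quarter.
# All inputs are illustrative, not taken from the slide.
demandgen_spend = 450_000   # quarterly demandgen budget ($)
mqls = 1_800
s1_opptys = 540             # stage-1 opportunities
s2_opptys = 200             # stage-2 opportunities
deals = 45                  # closed/won
new_arr = 1_170_000         # new ARR booked ($)

# Hop conversion rates (skipping the intermediate steps)
mql_to_s1 = s1_opptys / mqls
s1_to_s2 = s2_opptys / s1_opptys
s2_to_close = deals / s2_opptys

# Unit costs and value
cost_per_mql = demandgen_spend / mqls
cost_per_s2 = demandgen_spend / s2_opptys
cost_per_deal = demandgen_spend / deals
asp = new_arr / deals       # average sales price

print(f"MQL→S1 {mql_to_s1:.0%}, S1→S2 {s1_to_s2:.0%}, S2→close {s2_to_close:.0%}")
print(f"cost/MQL ${cost_per_mql:,.0f}, cost/S2 ${cost_per_s2:,.0f}, "
      f"cost/deal ${cost_per_deal:,.0f}, ASP ${asp/1000:.0f}K")
```

The point is how few numbers survive the transformation: three hop conversion rates, a few unit costs, and an ASP per period.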
Here’s our chart, simplified and transformed:
Here you can see a few important things that are not even present in the original chart:
- Demandgen cost per deal has increased from $6.8K to $10.1K
- Demandgen cost per stage-2 oppty has stayed remarkably constant at $2.2K
- The stage-2-to-close rate has dropped by a third, from 33% to 22%
- The new ARR ASP (average sales price) has dropped from $33K to $26K, about 21%
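For the record, the deltas in those bullets are simple ratios. A quick sanity check, using the bullet figures as inputs:

```python
def pct_change(old, new):
    """Fractional change from old to new."""
    return (new - old) / old

# Inputs are the figures from the bullets above.
print(f"cost/deal: {pct_change(6_800, 10_100):+.0%}")   # up roughly half
print(f"S2→close rate: {pct_change(0.33, 0.22):+.0%}")  # down a third
print(f"ASP: {pct_change(33_000, 26_000):+.0%}")        # down about a fifth
```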
Thus, while we are generating stage-2 oppties at the same cost, they are closing both at a much lower rate and for less value. We can finally see what’s going on. We have a mid-to-low funnel problem in converting oppties to deals and in closing those deals at our historical value. Note that this analysis doesn’t tell us precisely what the problem is, but it does tell us where to go look. For that reason, I refer to this kind of chart as a smoke detector.
As part of the next-level investigation we might actually go back to the original chart. When I built the exercise, I tried to confine the problem to a single row, demo-to-shortlist conversion, which drops nearly monotonically across the year.
To understand why demo-to-shortlist is falling, I’d start asking sales questions, listening to demo calls, and speaking with prospects (both those who shortlisted us and those who cut us after the demo) to try to understand why we decreasingly reach the short list. Generically, I’d look to possible explanations such as:
- A new demo script that is perhaps less compelling than the old one
- A new demo methodology; perhaps we’ve moved to a less customized, boiler-room approach to save money
- A change in demo staffing, perhaps putting more junior SCs on demos or having sales take over basic demos
- A new competitor in the market, who perhaps neutralizes some of our once-differentiating features
- A loss of market leadership, such that we are decreasingly seen as a must-evaluate product
The great irony of this example is that while I was trying to type numbers (using mental math) that didn’t vary much across most rows, I failed pretty badly at doing so. My intent was to have every rate stay roughly constant while demo-to-shortlist fell by around 25 percentage points across the year. However, when I look at the data after the fact:
- Meeting-to-SQL fell by more than 20 percentage points across the year
- This was somewhat offset by MQL-to-appointment rising 17.5 percentage points across the year
So if this were real data, I’d have to go investigate those changes, too.
The point of this post is not that the next-level analysis and detailed step-by-step conversion rates are useless. The point is that unless you summarize (e.g., by analyzing hops) and map to business metrics that executives care about (e.g., cost/deal), you will lose your audience (and maybe yourself) in the process.
And remember, we’ve addressed just one form of funnel complexity in this example: marketing-inbound funnel analysis. We haven’t looked across pipeline sources (e.g., partner, outbound, sales). We haven’t touched on attribution or marketing channel analysis. But when we approach those problems, we should do it the same way. Keep it simple. Come at it top down. Peel back the onion for the audience.
The spreadsheet I used for this post can be found on Scribd or Google Drive.
# # #
 Let’s put aside the question of whether it should be a chart. Yes, there certainly is a time and place for charts, but in my experience, they are far too often a waste of space, using an entire screen to show 12 data points. (This always reminds me of the Hyderabadi taxi driver who once told me that lines on the roadway were a waste of paint.) Conversely, I’ve never met a board that can’t handle a well-prepared table full of numbers. Let’s just stipulate here that a table is the right answer, and then make the best of that table, which is really the purpose of this post.
 “They’re important,” the author screams into the void. My reputation notwithstanding, it’s not for obsessive-compulsive reasons, it’s for comprehensibility. (Or perhaps, I’m obsessive about comprehensibility!)
 For example, if your demandgen cost/opportunity is $4K and your close rate is 25%, then your demandgen cost/deal is $16K. If, continuing the example, demandgen is 50% of your total marketing cost and sales & marketing contribute equally to your CAC, then you are spending $64K in total S&M cost per deal. If your ARR ASP (average sales price) is $32K, then your CAC ratio will be around 2.0. If your ARR ASP is $128K, then your CAC ratio will be around 0.5. I say “around” because I presume you’re not operating at steady state and certain accounting conventions (e.g., amortizing commissions in sales expense) can cause variations with this back-of-the-envelope CAC ratio approach.
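The footnote’s arithmetic, spelled out as code (a sketch using the footnote’s own example numbers; steady state is assumed, per the caveat above):

```python
# Back-of-the-envelope CAC ratio, following the footnote's example.
demandgen_cost_per_oppty = 4_000
close_rate = 0.25
demandgen_cost_per_deal = demandgen_cost_per_oppty / close_rate      # $16K

# Demandgen is assumed to be half of total marketing cost,
# and sales & marketing are assumed to contribute equally to CAC.
demandgen_share_of_marketing = 0.50
marketing_cost_per_deal = demandgen_cost_per_deal / demandgen_share_of_marketing  # $32K
sm_cost_per_deal = marketing_cost_per_deal * 2                       # $64K total S&M per deal

for asp in (32_000, 128_000):
    cac_ratio = sm_cost_per_deal / asp   # S&M cost per $1 of new ARR
    print(f"ASP ${asp/1000:.0f}K → CAC ratio ≈ {cac_ratio:.1f}")
```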
 Unless they magically happen to offset each other, as coincidentally largely happened when I created my synthetic data set (which you’ll see if you read to the end of the post). Thus, this is not to say that no one should ever look at step-by-step conversion rates. It is to say that they have no business in a C-level summary.
 I think every marketer should track and share unique visitors. It’s a good leading indicator, if only loosely coupled to the demandgen funnel. It can be benchmarked against the competition (if somewhat imprecisely) and should be. The first time you do so is often sobering.
 You could argue this is cheating and that I could easily improve the wall-of-numbers chart by adding highlighting. While highlighting could quickly take you to the problem row, it’s not always the case that one row is so clearly responsible. (I contained the problem to one row here to make my life easier in making the slide, not because I think it’s common in reality, where stage definitions are rarely so clear and used so consistently.)
 In addition to many other changes, I’m switching to my preferred nomenclature of stage-1 and stage-2 opportunity as opposed to SAL, SQL/SAO, and such. Also, please note that at the risk of complexifying the chart, I’m separating stage-1 and stage-2 oppties (instead of, say, just looking at stage 2s) because that is often the handoff point between SDRs and sales, which makes it worth closely monitoring.
 Much as an employee engagement survey tells you, “there’s a management problem in product management,” but doesn’t tell you precisely what it is. But you know where to go to start asking questions.
Hi Dave! As always, great post. Two questions:
1) Do you use only marketing-attributed opps/wins here? Or, all opps/wins. The former is allegedly cleaner but the latter is more reflective of total influence (esp in a popcorn machine, ABX world).
2) Any best practices for tracking web unique visitors? E.g. do you try to separate just organic, or try to exclude existing customers and job candidates.
Hi Sandi, great to hear from you.
1. I was kind of sidestepping that question here on purpose, which I suppose is a tacit assumption that marketing is generating the vast majority of the opps (which is true at some companies I work with). If that’s not true, then we need to start segmenting things (e.g., by pipeline source), and that can lead to attribution wars. For early-stage companies where you’re really trying to validate the business model, attribution doesn’t matter as much. As you get better and want to optimize, you’re going to need to split by pipeline source (e.g., inbound, outbound, sales, partner, allbound) and apply the same simplification principles here. Note that the upfunnel stuff varies a lot (e.g., partners tend to hand you s1 or s2 oppties, so the funnel kind of starts there).
2. On this one, I’ve got nothing special to offer on dark/direct traffic or on filtering out job seekers, login-ers, bots, etc. Best practice is to screen them out consistently, but IMHO consistency is more important than perfection, on offsetting-errors theory.