Category Archives: Demandgen

How to Simplify Your Marketing Funnel:  Seeing the Unit Cost Forest for the Conversion Rates Trees

Let’s say you’re a CEO.  You don’t come from a marketing background.  At every quarterly business review (QBR) and board meeting, your marketing head presents a chart like this:

What happens next?  More than likely, after 10 or 15 minutes of effectively random probes into this minefield of numbers, you do what any good CEO would under the circumstances.  You say:

“Next slide, please.”

To paraphrase Thoreau, the mass of CEOs lead lives of quiet marketing desperation. Slides like this are why. What’s wrong with this slide? [1]

Well, to get this out of my system, there are a number of what I’d call mechanical problems:

  • It mixes different time periods as the reader scans across columns, making it difficult to spot trends.  Better to group quarters and years on the right.
  • It has excess precision.  The extra digits add nothing and impede comprehension.  Better to show pageviews by the thousand, demandgen by the kilodollar ($K), cost/MQL without the pennies, and conversion rates to the percentage point, not the basis point.
  • It contains too many rows.  Even if they’re all of interest (and they aren’t), it’s simply too much.
  • It fails to use formatting, such as commas, to make figures more easily grasped.

These details aren’t nits [2].  Particularly if you’re a finance or ops person (e.g., salesops, marketingops), your job is to present data in a way that is clear, consistent, and comprehensible.  In short, your job is to “light shit up” when there are problems.  This slide does anything but.

More importantly, there are what I’d call conceptual problems with the slide:

  • It’s a sea of numbers that drowns the reader in data, making it impossible to find insights.  To paraphrase the old saw, “all these trees are making it hard for me to see the forest.”
  • It’s supposed to be a summary of the funnel for a board meeting or QBR.  This summary doesn’t summarize.
  • It contains numerous rows that are not appropriate for such a summary and serve only to cognitively overload the reader.
  • Worse yet, it omits rows of high potential interest.  Specifically, unit cost (e.g., cost/oppty) rows that can help readers understand the viability of the business model [3].

In the above table, I tried to hide a big problem floating in that sea of numbers. Did you find it? Did the slide help you do so?

Before transforming the table into something more useful, let’s talk briefly about what we’re going to do. Three simple things:

  • Take hops down the funnel instead of steps.  Instead of looking at each conversion rate as we descend, we will look only at MQLs, stage 1 and stage 2 oppties, closed/won deals, and associated conversion rates between them. Any problems involving intermediate conversion rates between those hops will usually show up in those numbers, anyway [4].
  • Add cost information.  Ultimately, the business cares about how much things cost, not just what the rates are compared to benchmarks and to history.
  • Be sensitive to cognitive overload, both in terms of the size of the table and the total number of digits we’re going to put before the reader.

In addition, I’m going to keep website unique visitors, not because it strictly helps the funnel analysis, but simply because I think it’s a good leading indicator [5], and I’m going to add information about new ARR booked and the average sales price (ASP). In the end, the point of all this marketing is to bring in new ARR. Finally, I’m going to add highlighting [6].
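If it helps to see the mechanics, here’s a minimal sketch in Python of the hop-based summary, using made-up numbers (not the actual figures from my spreadsheet) for one quarter’s demandgen spend and funnel counts:

```python
# A minimal sketch of the "hops, not steps" summary: hop conversion rates plus
# the unit costs and ASP an exec actually cares about. Numbers are illustrative.
funnel = {
    "demandgen_spend": 220_000,   # demandgen spend for the quarter, in dollars
    "mqls": 1_000,
    "stage1_opptys": 250,
    "stage2_opptys": 100,
    "deals": 22,
    "new_arr": 572_000,           # new ARR booked, in dollars
}

rates = {
    "mql_to_stage1": funnel["stage1_opptys"] / funnel["mqls"],
    "stage1_to_stage2": funnel["stage2_opptys"] / funnel["stage1_opptys"],
    "stage2_to_close": funnel["deals"] / funnel["stage2_opptys"],
}
costs = {
    "cost_per_mql": funnel["demandgen_spend"] / funnel["mqls"],
    "cost_per_stage2_oppty": funnel["demandgen_spend"] / funnel["stage2_opptys"],
    "cost_per_deal": funnel["demandgen_spend"] / funnel["deals"],
    "asp": funnel["new_arr"] / funnel["deals"],
}

# Present without excess precision: rates to the percentage point, dollars in $K.
for name, value in rates.items():
    print(f"{name:>22}: {value:.0%}")
for name, value in costs.items():
    print(f"{name:>22}: ${value / 1000:.1f}K")
```

Run per quarter, those few outputs are essentially the rows of the simplified table that follows.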

Here’s our chart, simplified and transformed [7]:

Here you can see a few important things that are not even present in the original chart:

  • Demandgen cost per deal has increased from $6.8K to $10.1K
  • Demandgen cost per stage-2 oppty has stayed remarkably constant at $2.2K
  • The stage-2-to-close rate has dropped by a third, from 33% to 22%
  • The new ARR ASP (average sales price) has dropped from $33K to $26K, about 22%

Thus, while we are generating stage 2 oppties at the same cost, they are closing both at a much lower rate and for less value.  We can finally see what’s going on. We have a mid-to-low funnel problem in converting oppties to deals and in closing those deals at our historical value. Note that this analysis doesn’t tell us precisely what the problem is, but it does tell us where to go look. For that reason, I refer to this kind of chart as a smoke detector [8].

As part of the next-level investigation we might actually go back to the original chart. When I built the exercise, I tried to confine the problem to a single row, demo to shortlist conversion, which drops nearly monotonically across the year.

To understand why demo-to-shortlist is falling, I’d start asking sales questions, listening to demo calls, and speaking with prospects (both those who kept us and those who excluded us after the demo) to try to understand why we decreasingly reach the short list. Generically, I’d look to possible explanations such as:

  • A new demo script, perhaps less compelling than the old one
  • A new demo methodology, perhaps we’ve moved to a less customized boiler room approach to save money
  • A change in demo staffing, perhaps putting more junior SCs on demos or having sales take over basic demos
  • A new competitor in the market, who perhaps neutralizes some of our once-differentiating features
  • A loss of market leadership, such that we are decreasingly seen as a must-evaluate product

The great irony of this example is that while I was trying (using mental math) to type numbers that didn’t vary that much across most rows, I failed pretty badly at doing so. My intent was to have every rate stay roughly constant while demo-to-shortlist fell by around 25 percentage points across the year. However, when I look at the data after the fact:

  • Meeting-to-SQL fell by more than 20 percentage points across the year
  • This was somewhat offset by MQL-to-appointment rising 17.5 percentage points across the year

So if this were real data, I’d have to go investigate those changes, too.

The point of this post is not that the next-level analysis and detailed step-by-step conversion rates are useless. The point is that unless you summarize (e.g., by analyzing hops) and map to business metrics that executives care about (e.g., cost/deal), you will lose your audience (and maybe yourself) in the process.

And remember, we’ve addressed just one form of funnel complexity in this example: marketing-inbound funnel analysis. We haven’t looked across pipeline sources (e.g., partner, outbound, sales). We haven’t touched on attribution or marketing channel analysis. But when we approach those problems, we should do it the same way. Keep it simple. Come at it top down. Peel back the onion for the audience.

The spreadsheet I used for this post can be found on Scribd or Google Drive.

# # #

Notes

[1] Let’s put aside the question of whether it should be a chart.  Yes, there certainly is a time and place for charts, but in my experience, they are far too often a waste of space, using an entire screen to show 12 data points. (This always reminds me of the Hyderabadi taxi driver who once told me that lines on the roadway were a waste of paint.) Conversely, I’ve never met a board who can’t handle a well-prepared table full of numbers.  Let’s just stipulate here that a table is the right answer, and then make the best of that table, which is really the purpose of this post.

[2] “They’re important,” the author screams into the void.  My reputation notwithstanding, it’s not for obsessive-compulsive reasons, it’s for comprehensibility.  (Or perhaps, I’m obsessive about comprehensibility!) 

[3] For example, if your demandgen cost/opportunity is $4K and your close rate is 25%, then your demandgen cost/deal is $16K.  If, continuing the example, demandgen is 50% of your total marketing cost and sales & marketing contribute equally to your CAC, then you are spending $64K in total S&M cost per deal.  If your ARR ASP (average sales price) is $32K, then your CAC ratio will be around 2.0. If your ARR ASP is $128K, then your CAC ratio will be around 0.5. I say “around” because I presume you’re not operating at steady state and certain accounting conventions (e.g., amortizing commissions in sales expense) can cause variations with this back-of-the-envelope CAC ratio approach.
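For readers who want to check the arithmetic, here’s the same back-of-the-envelope calculation as a small Python sketch (the variable names are mine, and it ignores the steady-state and commission-accounting caveats above):

```python
# Back-of-the-envelope CAC ratio, using the numbers from note [3].
cost_per_oppty = 4_000            # demandgen cost per opportunity
close_rate = 0.25                 # oppty-to-deal close rate
demandgen_share_of_mktg = 0.5     # demandgen is 50% of total marketing cost
mktg_share_of_snm = 0.5           # sales & marketing contribute equally to CAC

cost_per_deal = cost_per_oppty / close_rate                  # $16K demandgen cost/deal
snm_cost_per_deal = cost_per_deal / demandgen_share_of_mktg / mktg_share_of_snm   # $64K total S&M cost/deal

for asp in (32_000, 128_000):
    # CAC ratio ~= total S&M cost per deal / new ARR per deal
    print(f"ASP ${asp / 1000:.0f}K -> CAC ratio ~{snm_cost_per_deal / asp:.1f}")
```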

[4] Unless they magically happen to offset each other, as coincidentally largely happened when I created my synthetic data set (which you’ll see if you read to the end of the post). Thus, this is not to say that no one should ever look at step-by-step conversion rates. It is to say that they have no business in a C-level summary.

[5] I think every marketer should track and share unique visitors. It’s a good leading indicator, if only loosely coupled to the demandgen funnel. It can be benchmarked against the competition (if somewhat imprecisely) and should be. The first time you do so is often sobering.

[6] You could argue this is cheating and that I could easily improve the wall of numbers chart by adding highlighting. While highlighting could quickly take you to the problem row, it’s not always the case that one row is so clearly responsible. (I contained the problem to one row here to make my life easier in making the slide, not because I think it’s common in reality, where stage definitions are rarely so clear and used so consistently.)

[7] In addition to many other changes, I’m switching to my preferred nomenclature of stage-1 and stage-2 opportunity as opposed to SAL, SQL/SAO and such. Also, please note that at the risk of complexifying the chart, I’m separating stage-1 and stage-2 oppties (instead of, say, just looking at stage 2s) because that is often the handoff point between SDRs and sales, which makes it worth monitoring closely.

[8] Much as an employee engagement survey tells you, “there’s a management problem in product management,” but doesn’t tell you precisely what it is. But you know where to go to start asking questions.

How Much Should You Bet on Educating the Market?

Using the Marketing Fundamental Tension Quadrant to Map Your Demandgen and Communications Strategy

Years ago I wrote a post on what I call the fundamental tension in marketing:  the gap between what we want to say and what our audience wants to hear.

For example, let’s say we’re a supply chain software company.  Our founders are super excited about our AI/ML-based algorithms for demand prediction.  Our audience, on the other hand, barely understands AI/ML [1] and wants to hear about reducing the cost of carrying inventory and matching marketing programs to inventory levels [2].

How then should we market our supply chain software?  Let’s use the following quadrant to help.

Let’s map AI/ML as a marketing message onto this framework.  Do we care about it?  Yes, a lot.  Does our audience?  No.  We’re in Box 4:  we care and they don’t, so we conclude that we must therefore educate the unwashed (as we might dangerously consider them) in order to make them care about AI/ML.  We can write a white paper entitled, The Importance of AI/ML in Supply Chain Systems.  We can run a webinar with the same title.  By the way, should we expect a lot of people to attend that webinar?  No.  Why?  Because no one cares.

Market education is hard.  That’s not to say you shouldn’t do it, but realize that you are trying, in a world of competing priorities, to add one to the list and move it up to the top.  It can be done:  digital transformation is widely viewed as a business priority today.  But that took an enormous amount of work from almost the entire software industry.  Your one startup isn’t going to change the VP of Supply Chain’s priorities overnight.

Every good demandgen leader knows it’s far easier to start with things the audience already cares about and then bridge to things your company wants to talk about.  Using the movie theatre metaphor of the prior post, you put “Reduce Inventory Costs” on the marquee and you feature “AI/ML” in a lead role in the movie.

How do you determine those priorities?  I’ll scream it:  MARKET RESEARCH.  You find existing market studies and/or run proprietary ones targeting your business buyers, asking about their priorities.  Then you create marketing campaigns that bridge from buyer priorities to your messages.  If you’re lucky, you’re in Box 2 and everything aligns without the bridge.  But most software marketers should spend the majority of their time in Box 1, bridging between what’s important to the audience and what’s important to the company.

If you fail to build the bridge in Box 1 you’ll have a webinar full of people who won’t buy anything.  If you put all your investment into Box 4 you’ll run a lot of empty webinars.

The number one mistake startup marketers make is that they try to educate the market on too many things.  You need to care about AI/ML.  And reporting.  And, oh by the way, analytics.  And CuteName.  And features 5, 6, and 7.  And, no, no, we’re not doing feature-driven marketing because we remember to mention benefits somewhere.  We are evangelists.  We are storytellers!

But you’re telling stories that people don’t want to hear.

My rule is simple:  every startup should have one — and only one — Box 4 message and supporting campaigns.  Sticking with our example:

  • We should have a superb white paper on the importance of AI/ML in supply chain systems.
  • We should make claims in our PR boilerplate and About Us page related to our pioneering AI/ML in supply chain systems.
  • We should run a strong analyst relations (AR) program to get thought leaders on board with the importance of AI/ML in supply chain.
  • We should commit to this message for, by marketing standards, an extraordinarily long time; it’s literally a decade-long commitment.  So choose it wisely.

To blast through 30 years of personal industry history:  for Oracle it was row-level locking; for BusinessObjects, the semantic layer; for Endeca, the MDEX engine; for MongoDB, NoSQL [3]; for Salesforce, SaaS (branded as No Software); for Anaplan, the hypercube; for GainSight, customer success; and for Alation, the data catalog [4].

To net out the art of enterprise software marketing, it’s:

  • Stay out of Box 3
  • If you’re lucky, you’re in Box 2 [5].  Talk about what you want to say because it’s what they want to hear.
  • Spend most of your time in Box 1, bridging from what they want to hear to what you want to say.  This keeps butts in seats at programs and primes them towards your selling agenda.
  • Make one and only one bet in Box 4, use AR to help evangelize it, and produce a small number of very high quality deliverables to tell the story.

# # #

Notes

[1] Much as I barely understand a MacPherson strut, despite having been subjected to hearing about it through years of feature-driven automotive marketing.

[2] In other words, “sell what’s on the truck.”  An old example, but likely still true:  the shirt color worn by the model in a catalog typically gets 5x the orders of any other color; so why not do color selection driven by inventory levels instead of graphic design preferences?

[3] Or, as I always preferred, MyNoSQL, simultaneously implying both cheap and easy (MySQL) and document-oriented (NoSQL).  By the way, this claim is somewhat less clear to me than the preceding two.

[4]  The more the company is the sole pioneer of a category, the more the evangelization is about the category itself.  The more the company emerges as the leader in a competitive market, the more the evangelization is about the special sauce.  For example, I can’t even name a GainSight competitor so their message was almost purely category evangelical.  Alation, by comparison, was close to but not quite a sole pioneer so I wrestled with saying “machine-learning data catalog” (which embeds the special sauce), but settled on data catalog because they were, in my estimation, the lead category pioneer.  See my FAQ for disclaimers as I have relationships past or present with many of the companies mentioned.

[5]  Any space-pioneering application is probably in Box 2.  Any technology platform is almost always in Box 3 or 4.  Any competitive emerging space probably places you in Box 1 — i.e., needing to do a lot of bridging from more generic buyer needs to your special sauce for meeting them.

How To Build a Marketing Machine, Presentation from a Balderton Capital Meetup

I hopped over to London a few weeks back to visit my friends at Balderton Capital (where I’m now working as an EIR) and during the visit we decided to host a meetup for portfolio company founders, CEOs, and CMOs to discuss the question of how to build a marketing machine.

We based the meetup on the presentation I recently delivered at SaaStock EMEA of the same title, but in a pretty compressed twenty-minute format.  This time, we took closer to 40 minutes and had some fun conversation and Q&A thereafter.

This is a hot topic today because in this era of high growth, flush funding, and rapid scaling, just about everyone I know is trying to turn their marketing into a machine so they can push the levers forward and grow, grow, grow.  This presentation talks about how to do that.

The slides from the presentation are embedded below.  You can find a video of a private recording of the presentation on the Balderton website.

The Four Sources of Pipeline and The Balance Across Them

I’ve mentioned this idea a few times of late (e.g., my previous post, my SaaStock EMEA presentation) [1] and I’ve had some follow-up questions from readers, so I thought I’d do a quick post on the subject.

Back in the day at Salesforce, we called pipeline sources “horsemen,” a flawed term both for its embedded gender pronoun and its apocalyptic connotation.  Nevertheless, for me it did serve one purpose — I always remembered there were four of them.

Today, I call them “pipeline sources” but I’ve also heard them referred to as “pipegen sources” (as in pipeline generation) and even “revenue engines,” which I think is an over-reach, albeit a well-intentioned one [2].

While you can define them in different ways, I think a pretty standard way of defining the pipeline sources is as follows:

  • Marketing, also known as “marketing/inbound.”  Opportunities generated as a result of people responding to marketing campaigns [3].
  • SDRs, also known as “SDR/outbound,” to differentiate these truly SDR-generated oppties from marketing/inbound oppties that are also processed by SDRs, but not generated by them [4].
  • Alliances [5].  Opportunities referred to the company by partners, for example, when a regional system integrator brings the company into a deal as a solution for one of its customers.
  • Sales, also known as “sales/outbound,” when a quota-carrying salesrep does their own prospecting, typically found in named-account territory models, and develops an opportunity themselves.

Product-led growth (PLG) companies should probably have a fifth source, product, but I won’t drill into PLG in this post [5A].

Attribution issues (i.e., who gets credit when an opportunity is developed through multiple touches with multiple contacts over multiple quarters [6] [7]) are undoubtedly complex.  See note [8] not for the answer to the attribution riddle, but for my advice on best dealing with the fact that it’s unanswerable.

Now, for the money question:  what’s the right allocation across sources?  I think the following are reasonable targets for a circa $50M enterprise SaaS company for the mix of oppties generated by each source (all targets are plus-or-minus 10%):

  • Marketing:  60%
  • SDR/outbound:  10%
  • Alliances:  20%
  • Sales/outbound:  10%

Now, let’s be clear.  This can vary widely.  I’ve seen companies where marketing generates 95% of the pipeline and those where it generates almost none.  SDR/outbound makes the most sense in a named-account sales model, so I personally wouldn’t recommend doing outbound for outbound’s sake [9] [10].  Alliances is often under 20%, because the CEO doesn’t give them a concrete oppty-generation goal (or because they’re focused more on managing technology alliances).  Sales/outbound only makes sense for sellers with named-account territories, despite old-school sales managers’ tendency to want everyone prospecting as a character-building exercise.

And let’s not get so focused on the mix that we forget about the point:  cost-effective opportunity generation (ultimately revealed in the CAC ratio) with broad reach into the target market.

Now, for a few pro tips:

  • Assign the goal as a number of oppties, not a percentage.  For example, if you want 60% from marketing and have an overall goal of 100 oppties, do not set marketing’s goal at 60%; tell them you want 60 oppties.  Why?  Because if the company only generates 50 oppties during the quarter and marketing generates 35 of those, then marketing is popping champagne for generating 70% of the oppties (beating the 60% goal), while they are 25 oppties short of what the company actually needed.
  • Use overallocation when spinning up new pipeline sources.  Say you’ve just created an RSI alliances team and want them generating 10% of oppties.  By default, you’ll drop marketing’s target from 70% to 60% and marketing will build a budget to generate 60% (of say 100) oppties, so 60 oppties.  If they need $3K worth of marketing to generate an oppty, then they’ll ask for $180K of demandgen budget.  But what if alliances flames out?  Far better to tell marketing to generate 70 oppties, give them $210K in budget to do so, and effectively over-assign oppty generation to an overall goal of 110 when you need 100.  This way, you’re covered while the new and presumably unpredictable pipeline generation source is coming online [11].  (The sketch after this list works through the arithmetic of both tips.)
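A small Python sketch of both tips, using the same illustrative numbers as the bullets above:

```python
# Tip 1: assign oppty counts, not percentages.
company_goal, marketing_share = 100, 0.60
marketing_goal = company_goal * marketing_share           # 60 oppties, fixed up front

actual_company, actual_marketing = 50, 35
print(f"Marketing's share of oppties: {actual_marketing / actual_company:.0%}")   # 70%, "beats" a 60% goal
print(f"Shortfall vs. the 60-oppty goal: {marketing_goal - actual_marketing:.0f} oppties")  # 25 short

# Tip 2: over-allocate while a new pipeline source spins up.
cost_per_oppty = 3_000            # demandgen cost to generate one oppty
marketing_oppty_goal = 70         # keep marketing at 70 oppties, not 60
print(f"Demandgen budget: ${marketing_oppty_goal * cost_per_oppty:,.0f}")          # $210,000
```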

# # #

Notes

[1] Video forthcoming if I can get access to it.

[2]  The good intentions are to keep everyone focused on revenue.  The over-reach is they’re not really engines, more fuel sources.  I am a big believer in the concept of “revenue engines,” but I use the term to refer to independent business units that have an incremental revenue target and succeed or fail in either an uncoupled or loosely coupled manner.  For example, I’d say that geographic units (e.g., Americas, EMEA), channels (e.g., OEM, VAR, enterprise sales, corporate sales), or even product lines (depending on the org) are revenue engines.  The point of having revenue engines is diversification, as with airplanes, they can sputter (or flame-out) independently.  (As one aviation pioneer was reputed to have said:  “why do I only fly four-engine planes across the Atlantic?  Because they don’t make five-engine planes.”)

[3]  I will resist the temptation to deep dive into the rabbit hole of attribution and say two things:  (a) you likely have an attribution mechanism in place today and (b) that system is invariably imperfect so you should make sure you understand how it works and understand its limitations to avoid making myopic decisions.  For example, if an oppty is created after several people downloaded a white paper, a few attended a webinar, an SDR had been doing outreach in the account, the salesperson met a contact on the train, and a  partner was trying to win business in the account, who gets the credit?  It’s not obvious how to do this correctly and if your system is “one oppty, one source” (as I’d usually recommend over some point allocation system), there will invariably be internal jockeying for the credit.

[4]  SDRs are often split inbound vs. outbound not only to ease the tracking but because the nature of the work is fundamentally different.  Hybrid SDR roles are difficult for this reason, particularly in inbound-heavy environments where there is always more inbound work to do.

[5]  My taxonomy is that there are two types of “partners” — “channels” who sell our software and “alliances” who do not.  In this case (where we’re talking about pipeline generation for our direct salesforce), I am speaking of alliance partners, who typically work in a co-sell relationship and bring the company into oppties as a result.  In the case of channels, the question is one of visibility:  are the channels giving us visibility into their oppties (e.g., in our CRM), as you might find with RSIs, or are they simply forecasting a number and mailing us a royalty check, as you might find with OEMs?

[5A]  Product meaning trials (or downloads in open source land), which effectively become the majority top-of-funnel lead source for PLG companies.  This raises the question:  who drives people to do those trials?  (Typically marketing and/or word of mouth.)

[6]  One simple, common example:  a person downloads a white paper they found via a search advertisement five quarters ago, ends up in our database, receives our periodic newsletter, and then is developed by an SDR through an outreach sequence.  Who gets the credit for the opportunity?  Marketing (for finding them in the first place and providing a baseline nurture program via the newsletter) or SDR/outbound (for developing them into an oppty)?  Most folks would say SDR in this case, but if your company practices “management by reductio ad absurdum” then someone might want to shut down search advertising because it’s “not producing” whereas the SDRs are.  Add some corporate politics where perhaps sales is trying to win points for showing how great they are at managing SDRs after having taken them from marketing and things can get … pretty icky.

[7] Another favorite example:  marketing sponsors a booth at the Snowflake user conference and we find a lead that develops into an opportunity.  Does marketing get the credit (because it’s a marketing program) or alliances (because Snowflake’s a partner)?  Add some politics where the alliances team has been seen as underperforming and really needs the credit, and things can again get yucky and confusing, leading you away from the semi-obvious right answer:  marketing, because they ran a tradeshow booth and got a lead.  If you don’t credit marketing here, you are disincenting them from spending money at partner conferences (all I, no RO).  The full answer here is, IMHO, to credit marketing with being the source of the oppty, to track influenced ARR by partner so we know how much of our business happens with which partners, and to not incent the technology alliances group with opportunity creation targets.  (Oppty creation, however, should be an important goal for the regional and/or global system integrator alliances teams.)

[8]  My recommended solution here is two-fold:  (a) use whatever attribution mechanism you want, ensuring you understand its limitations, and (b) perform a win-touch analysis at every QBR where a reasonably neutral party like salesops presents the full touch history for a set of representative (and/or large) deals won in the prior quarter.  This pulls everyone’s heads out of their spreadsheets and back into reality — and should ease political tensions as well.

[9]  Having an SDR convince someone to take a meeting usually results in a higher no-show rate and a lower overall conversion rate than setting up meetings with people who have engaged with our marketing or our partners already.

[10]  Put differently, you should stalk customers only when you’re quite sure they should buy from you, but they haven’t figured that out yet.

[11] And yes there’s no free lunch here.  Your CAC will increase because you’re paying to generate 110 oppties when you only need 100.  But far better to have the CAC kick up a bit when you’re starting a new program than to miss the number because the pipeline was insufficient.

The Top Two, High-Level Questions About Sales (and Associated Metrics)

“The nice thing about metrics is that there are so many to choose from.” — Adapted from Grace Hopper [1]

“Data, data everywhere.  Nor any drop to drink.” — adapted from Samuel Taylor Coleridge [2]

In a world where many executives are overwhelmed with sales and marketing metrics — from MQL generation to pipeline analysis to close-rates and everything in between — I am writing this post in the spirit of kicking it back up to the CXO-level and answering the question:  when it comes to sales, what do you really need to worry about?

I think I can boil it all down to two questions:

  • Are we giving ourselves the chance to hit the number?
  • Are we hitting the number?

That’s it.  In slightly longer form:

  • Are we generating enough pipeline so that we start every quarter with a realistic chance to make the number?
  • Are we converting enough of that pipeline so that we do, in fact, hit the number?

Translating it to metrics:

  • Do we start every quarter with sufficient pipeline coverage?
  • Do we have sufficient pipeline conversion to hit the number?

Who Owns Pipeline Coverage and How to Measure It?
Pipeline coverage is a pretty simple concept:  it’s the dollar value of the pipeline with a close date in a given period divided by the new ARR target for that period.  I have written a lot of pretty in-depth material on managing the pipeline in this blog and I won’t rehash all that here.
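The calculation itself is trivial; as a quick sketch with illustrative numbers:

```python
# Pipeline coverage: current-period pipeline dollars / new ARR target for that period.
pipeline_dollars = 9_000_000    # new ARR pipeline with a close date in the quarter
new_arr_target = 3_000_000      # the quarter's new ARR target

coverage = pipeline_dollars / new_arr_target
print(f"Pipeline coverage: {coverage:.1f}x")   # 3.0x
```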

The key points are:

  • There are typically four major pipeline generation (pipegen) sources [3] and I like setting quarterly pipegen goals for each, and doing so in terms of opportunity (oppty) count, not pipeline dollars.  Why?  Because it’s more tangible [4] and for early-stage oppties one is simply a proxy for the other — and a gameable one at that [5].
  • I loathe looking at rolling-four-quarter pipeline both because we don’t have rolling-four-quarter sales targets and because doing so often results in a pipeline that resembles a Tantalean punishment where all the deals are two quarters out.
  • Unless delegated, ownership for overall pipeline coverage boomerangs back on the CEO [6].  I think the CMO should be designated the quarterback of the pipeline and be responsible for both (a) hitting the quarterly goal for marketing-generated oppties and (b) forecasting day-one, next-quarter pipeline and taking appropriate remedial action — working across all four sources — to ensure it is adequate.
  • A reasonable pipeline coverage ratio is 3.0x, though you should likely use your historical conversion rates once you have them. [7]
  • Having sufficient aggregate pipeline can mask a feast-or-famine situation with individual sellers, so always keep an eye on the opportunity histogram as well.  Having enough total oppties won’t help you hit the sales target if all the oppties are sitting with three sellers who can’t call everyone back.
  • Finally, don’t forget the not-so-subtle difference between day-one and week-three pipeline [8].  I like coverage goals focused on day-one pipeline coverage [9], but I prefer doing analytics (e.g., pipeline conversion rates) off week-three snapshots [10].

Who Owns Pipeline Conversion and How to Measure and Improve It?
Unlike pipeline coverage, which is usually a joint production of four different teams, pipeline conversion is typically the exclusive domain of sales [11].  In other words, who owns pipeline conversion?  Sales.

My favorite way to measure pipeline conversion is to take a snapshot of the current-quarter pipeline in week 3 of each quarter and then divide the actual quarterly sales by the week 3 pipeline.  For example, if we had $10M in current-quarter new ARR pipeline at the start of week 3, and closed the quarter out with $2.7M in new ARR, then we’d have a 27% week 3 pipeline conversion rate [12].

What’s a good rate?  Generally, it’s the inverse of your desired pipeline coverage ratio.  That is, if you like a 3.0x week 3 pipeline coverage ratio, you’re saying you expect a 33% week 3 pipeline conversion rate.  If you like 4.0x, you’re saying you expect 25% [13].
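Here’s a small sketch, using the example numbers above, of the week-3 conversion calculation and of the conversion rate a given coverage ratio implies:

```python
# Week-3 pipeline conversion, per the example above.
week3_pipeline = 10_000_000   # current-quarter new ARR pipeline at the start of week 3
new_arr_closed = 2_700_000    # new ARR actually closed in the quarter

print(f"Week-3 conversion: {new_arr_closed / week3_pipeline:.0%}")   # 27%

# A desired coverage ratio implies an expected conversion rate (its inverse).
for coverage in (3.0, 4.0):
    print(f"{coverage:.1f}x coverage implies ~{1 / coverage:.0%} conversion")
```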

Should this number be the same as your stage-2-to-close (S2TC) rate?  That is, the close rate of sales-accepted (i.e., “stage 2” in my parlance) oppties.  The answer, somewhat counter-intuitively, is no.  Why?

  • The S2TC rate is count-based, not ARR-dollar-based, and can therefore differ.
  • The S2TC rate is typically cohort-based, not milestone-based — i.e., it takes a cohort of S2 oppties generated in some past quarter and tracks them until they eventually close [14].

While I think the S2TC rate is a better, more accurate measure of what percent of your S2 oppties (eventually) close, it is simply not the same thing as a week-3 pipeline conversion rate [15].  The two are not unrelated, but nor are they the same.
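To make the distinction concrete, here’s a toy sketch (invented data) contrasting a count-based, cohort-based S2TC rate with the ARR-based, milestone-based week-3 conversion rate:

```python
# Cohort-based S2TC: of the stage-2 oppties *created* in some past quarter,
# what fraction (by count) eventually closed, whenever that happened?
cohort = [
    {"closed": True,  "arr": 40_000},
    {"closed": False, "arr": 30_000},
    {"closed": True,  "arr": 25_000},
    {"closed": False, "arr": 50_000},
]
s2tc_rate = sum(o["closed"] for o in cohort) / len(cohort)
print(f"Cohort-based S2TC rate: {s2tc_rate:.0%}")   # 50%, by count

# Milestone-based week-3 conversion: of the ARR dollars in this quarter's week-3
# snapshot, what fraction closed by the end of this quarter?
week3_pipeline_arr, closed_this_quarter = 1_000_000, 270_000
print(f"Week-3 conversion rate: {closed_this_quarter / week3_pipeline_arr:.0%}")   # 27%
```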

There are a zillion different ways to improve pipeline conversion rates, but they generally fall into these buckets:

  • Generate higher-quality pipeline.  This is almost tautological because my definition of higher-quality pipeline is pipeline that converts at a higher rate.  That said, higher-quality generally means “more, realer” oppties as it’s well known that sellers drop the quality bar on oppties when pipeline is thin, and thus the oppties become less real.  Increasing the percent of pipeline within the ideal customer profile (ICP) is also a good way of improving pipeline quality [16] as is using intent data to find people who are actively out shopping.  High slip and derail percentages are often indicators of low-quality pipeline.
  • Make the product easier to sell.  Make a series of product changes, messaging/positioning changes, and/or create new sales tools that make it easier to sell the product, as measured by close rates or win rates.
  • Make seller hiring profile improvements so that you are hiring sellers who are more likely to be successful in selling your product.  It’s stunning to me how often this simple act is overlooked.  Who you’re hiring has a huge impact on how much they sell.
  • Make sales process improvements, such as adopting a sales methodology, improving your onboarding and periodic sales training, and/or separating out pipeline scrubs from forecast calls from deal reviews [17].

Interestingly, I didn’t add “change your sales model” to the list as I mentally separate model selection from model execution, but that’s admittedly an arbitrary delineation.  My gut is:  if your pipeline conversion is weak, do the above things to improve execution efficiency of your model.  If your CAC is high, re-evaluate your sales model.  I’ll think some more about that and maybe do a subsequent post [18].

In conclusion, let’s zoom it back up and say:  if you’ve got a problem with your sales performance, there are really only two questions you need to focus on.  While we (perhaps inadvertently) demonstrated that you can drill deeply into them, those two simple questions remain:

  • Are we giving ourselves the chance to hit the number?
  • Are we hitting it?

The first is about pipeline generation and coverage.  The second is about pipeline conversion.

# # #

Notes

[1]  The original quip was about standards:  “the nice thing about standards is that you have so many to choose from.”

[2]  The original line from The Rime of the Ancient Mariner was about water, of course.

[3]  I remember there are four because back in the day at Salesforce they were known, oddly, as the “four horsemen” of the pipeline:  marketing, SDR/outbound, alliances, and sales.

[4]  Think:  “get 10 oppties” instead of “get $500K in pipeline.”

[5]  Think:  ” I know our ASP is $50K and our goal was $500K in pipeline, so we needed 10 deals, but we only got 9, so can you make one of them worth $100K in the pipeline so I can hit my coverage goal?”  Moreover, if you believe that oppties should be created with $0 value until a price is socialized with the customer, the only thing you can reasonably measure is oppty count, not oppty dollars.  (Unless you create an implied pipeline by valuing zero-dollar oppties at your ASP.)

[6]  Typically the four pipeline sources converge in the org chart only at the CEO.

[7]  And yes it will vary across new vs. expansion business, so 3.0x is really more of a blended rate.  Example:  a 75%/25% split between new logo and expansion ARR with coverage ratios of 3.5x and 1.5x respectively yields a perfect, blended 3.0 coverage ratio.
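As a quick sketch, that blend is just the target-weighted average of the two coverage ratios:

```python
# Blended coverage from the example in this note.
mix = [(0.75, 3.5), (0.25, 1.5)]   # (share of new ARR target, coverage ratio)
blended = sum(share * coverage for share, coverage in mix)
print(f"Blended coverage: {blended:.1f}x")   # 3.0x
```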

[8]  Because of two, typically offsetting, factors:  sales clean-up during the first few weeks of the quarter which tends to reduce pipeline and (typically marketing-led) pipeline generation during those same few weeks.

[9]  For the simple reason that we know if we hit it immediately at the end of the quarter — and for the more subtle reason that we don’t provide perverse disincentives for cleaning up the pipeline at the start of the quarter.  (Think:  “why did your people push all that stuff out of the pipeline right before they snapshotted it to see if I made my coverage goal?”)

[10]  To the extent you have a massive drop-off between day 1 and week 3, it’s a problem and one likely caused by only scrubbing this-quarter pipeline during pipeline scrubs and thus turning next-quarter into an opportunity garbage dump.  Solve this problem by doing pipeline scrubs that scrub the all-quarter pipeline (i.e., oppties in the pipeline with a close date in any future quarter).  However, even when you’re doing that it seems that sales management still needs a week or two at the start of every quarter to really clean things up.  Hence my desire to do analytics based on week 3 snapshots.

[11] Even if you rely on channel partners to make some sales and have two different sales organizations as a result, channel sales is still sales — just sales using a different sales model, one where, in effect, channel sales reps function more like direct sales managers.

[12]  Technically, it may not be “conversion” as some closed oppties may not be present in the week 3 pipeline (e.g., if created in week 4 or if pulled forward in week 6 from next quarter).  The shorter your sales cycle, the less well this technique works, but if you are dealing with an average sales cycle of 6-12 months, then this technique works fine.  In that case, in general, if it’s not in the pipeline in week 3 it can’t close.  Moreover, if you have a long sales cycle and nevertheless lose lots of individual oppties from your week 3 pipeline that get replaced by “newly discovered” (yet somehow reasonably mature) oppties and/or oppties that inflate greatly in size, then I think your sales management has a pipeline discipline problem, either allowing or complicit in hiding information that should be clearly shown in the pipeline.

[13]  This assumes you haven’t sold anything by week 3 which, while not atypical, does not happen in more “linear” businesses and/or where sales backlogs orders.  In these cases, you should look at to-go coverage and conversion rates.

[14]  See my writings on time-based close rates and cohort- vs. milestone-based analysis.

[15] The other big problem with the S2TC rate is that it can only be calculated on a lagging basis.  With an average sales cycle of 3 quarters, you won’t be able to accurately measure the S2TC rate of oppties generated in 1Q21 until 4Q21 or 1Q22 (or even later, if your distribution has a long tail — in which case, I’d recommend capping it at some point and talking about a “six-quarter S2TC rate” or such).

[16]  Provided of course you have a data-supported ICP where oppties at companies within the ICP actually do close at a higher rate than those outside.  In my experience, this is usually not the case, as most ICPs are more aspirational than data-driven.

[17]  Many sales managers try to run a single “weekly call” that does all three of these things and thus does each poorly.  I prefer running a forecast call that’s 100% focused on producing a forecast, a pipeline scrub that reviews every oppty in a seller’s pipeline on the key fields (e.g., close date, value, stage, forecast category), and deal reviews that are 100% focused on pulling a team together to get “many eyes” and many ideas on how to help a seller win a deal.

[18] The obvious counter-argument is that improving pipeline conversion, ceteris paribus, increases new ARR which reduces CAC.  But I’m sticking by my guns for now, somewhat arbitrarily saying there’s (a) improving efficiency on an existing sales model (which does improve the CAC), and then there’s (b) fixing a CAC that is fundamentally off because the company has the wrong sales model (e.g., a high-cost field sales team doing small deals).  One is about improving the execution of a sales model; the other is about picking the appropriate sales model.