Category Archives: GTM

“All Models Are Wrong, Some Are Useful.”

“I have a map of the United States … actual size. It says, Scale: 1 mile = 1 mile. I spent last summer folding it. I also have a full-size map of the world. I hardly ever unroll it.” — Steven Wright (comedian)

Much as we build maps as models of the physical world, we build mathematical models all the time in the business world. For example:

  • Sales bookings capacity models
  • Marketing inverted funnel models
  • Marketing attribution models

These models can be incredibly useful for planning and forecasting. They are, however, of course, wrong. They’re imperfect at prediction. They ignore important real-world factors in their desire for simplification, often relying on faith in offsetting errors. Reality rarely lands precisely where the model predicted. Which brings to mind this famous quote from the British statistician George Box.

“All models are wrong. Some are useful.” — George Box

It’s one of those quotes that, if you get it, you get it. (And then you fall in love with it.) Today, I’m hoping to bring more people into the enlightened fold by discussing Box’s quote as it pertains to three everyday go-to-market (GTM) models.

First, it’s why we don’t want models to be too precise or too complex. They’re not supposed to be exact. They’re not supposed to model everything; they’re supposed to be simplified. They’re just models. They’re supposed to be more useful than exact.

For example, in finance, if we need to make a precise budget that handles full GAAP accounting treatment then we do that. We map every line to a general ledger (GL) account, do GAAP treatment of revenue and expense, model depreciation and allocations, et cetera. It’s a backbreaking exercise. And when you’re done, you can’t really play with it to learn and to understand. It’s precise, but it’s unwieldy — a bit like Stephen Wright’s full-scale map of the US. It’s useful if you need to bring a full-blown budget to the board for approval, but not so useful if you’re trying to understand the interplay between sales productivity, sales ramping, and sales turnover. You’d be far better off looking at a sales bookings capacity model.

To take a different example, it’s why business school teaches you discounted cashflow (DCF) analysis for capital budgeting. DCF basically throws out GAAP and asks, what are the cashflow impacts of this project? The assumption being that if the DCFs work out, then it’s a good investment and that will eventually show up in improved GAAP results. Notably — and I was really confused by this when I first learned capital budgeting — they don’t teach you to build a 20-year detailed GAAP budget with different capital project assumptions and then do scenario analysis. Instead, they strip everything else away and ask, what are the cashflow impacts of this project versus that one?

In the rest of this post, I’ll explore Box’s quote as it relates to the three SaaS GTM models I discussed in the introduction. We’ll see that it applies quite differently to each.

Sales Bookings Capacity Models

These models calculate sales bookings based on sales hiring and staffing (including attrition), sales productivity, and sales ramping (i.e., the productivity curve new sellers follow as they spend their first few quarters at the company). Given those variables and assuming some support resources and ratios (e.g., AE/SDR), they pop out a series of quarterly bookings numbers.

While simple, these models are usually pretty precise and thus can be used for both planning and forecasting (e.g., predicting the bookings number based on actual sales bookings capacity). Thus, these are a lot useful and usually only a little wrong. In fact, some CEOs, including some big name ones I know, walk around with an even simpler version of this model in their heads: new bookings = k * (the number of sellers) where that number might be counted at the start of the year or the end of Q1. (This is what can lead to the sometimes pathological CEO belief that hiring more sellers directly leads to bookings, but hiring anything else does not, or at least only indirectly.)
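A minimal sketch of such a capacity model, in Python, makes the mechanics concrete. The ramp schedule, steady-state productivity, and attrition rate below are illustrative assumptions, not benchmarks:

```python
# Illustrative sales bookings capacity model. All numbers are assumptions.
# A new seller ramps over four quarters to full productivity; attrition
# thins each hiring cohort every quarter.

RAMP = [0.25, 0.50, 0.75, 1.00]      # fraction of full productivity, by tenure quarter
FULL_PRODUCTIVITY = 250_000          # new bookings $/quarter for a ramped seller
QUARTERLY_ATTRITION = 0.05           # fraction of sellers lost per quarter

def quarterly_bookings_capacity(hires_by_quarter):
    """Given new-seller hires per quarter, return modeled bookings
    capacity per quarter, applying ramping and attrition."""
    capacity = []
    cohorts = []  # cohorts[i] = surviving headcount hired in quarter i
    for quarter, hires in enumerate(hires_by_quarter):
        cohorts = [c * (1 - QUARTERLY_ATTRITION) for c in cohorts]
        cohorts.append(hires)  # new hires don't attrit in their first quarter
        total = 0.0
        for hire_quarter, heads in enumerate(cohorts):
            tenure = quarter - hire_quarter
            ramp = RAMP[tenure] if tenure < len(RAMP) else 1.0
            total += heads * ramp * FULL_PRODUCTIVITY
        capacity.append(round(total))
    return capacity

# Start with 4 sellers, then hire 2 per quarter:
print(quarterly_bookings_capacity([4, 2, 2, 2]))
```

Even this toy version exposes the interplay a full GAAP budget hides: hiring in Q1 barely moves Q1 bookings, and attrition quietly erodes next year’s number.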

Marketing Inverted Funnel Models

These models calculate the quarterly demand generation (demandgen) budget given sales booking targets, a series of conversion rates (e.g., MQL to SAL, SAL to SQL, SQL to won), and assumed phase lags between conversion points. They effectively run the sales funnel backwards, saying if we need this many deals, then we need this many SQLs, this many SALs, this many MQLs, and this many leads at various preceding time intervals.
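Run backwards, the funnel is just repeated division by conversion rates, offset by the assumed phase lags. A minimal sketch, with placeholder rates and lags (assumptions, not benchmarks):

```python
# Illustrative inverted funnel. All conversion rates and lags are assumptions.
# Walk backwards from a deal target to the required SQLs, SALs, MQLs, and leads.

STAGES = [
    # (stage, conversion rate to the next stage down-funnel, lag in quarters)
    ("SQL",  0.40, 1),   # 40% of SQLs become won deals, one quarter later
    ("SAL",  0.50, 0),   # 50% of SALs become SQLs, same quarter
    ("MQL",  0.33, 0),   # 33% of MQLs become SALs, same quarter
    ("lead", 0.05, 1),   # 5% of leads become MQLs, one quarter later
]

def inverted_funnel(deals_needed, target_quarter):
    """Return {stage: (required count, quarter needed by)}."""
    needed, quarter = float(deals_needed), target_quarter
    plan = {}
    for stage, rate, lag in STAGES:
        quarter -= lag
        needed = needed / rate
        plan[stage] = (round(needed), quarter)
    return plan

# To close 20 deals in quarter 8:
for stage, (count, quarter) in inverted_funnel(20, 8).items():
    print(f"{stage:>5}: {count:>5} needed by quarter {quarter}")
```

Every division here assumes the rates are stable and the lags are single-valued, which is exactly where the popcorn-machine reality of the high funnel bites back.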

If you’re selling anything other than toothbrushes, these models are wrong. Why? Because SaaS applications, particularly in enterprise, are high-consideration purchases that involve multiple people over sometimes prolonged periods of time. (At Salesforce, we won a massive deal on my product where the overlay rep had been chasing the deal for years, including time at his prior employer.)

These models are wrong because they treat non-linear, over-time behavior as a linear funnel. I liken the reality of the high funnel more to a popcorn machine: you’re never sure which kernel is going to pop, when, but if you add this many kernels and this much heat, then some percentage of them normally pops within N quarters. These models are a lot wrong — from first principles, by not just a little bit — but they are also a lot useful.

I think they work because of offsetting errors theory, which requires the company to be on a relatively steady growth trajectory. Sure, we’re modeling that last quarter’s MQLs are this quarter’s opportunities, and that’s not right (because many are from the quarter before that), but — as long as we’re not growing too fast or, more importantly, changing growth trajectory — that will tend to come out in the wash.

Note that if you wanted to, you could always build a more sophisticated model that took into account MQL aging — or today use an AI tool that does that for you — but you’ll still always be faced with two facts: (1) the trade-offs between model complexity and usefulness and (2) that even the more sophisticated model will still break when the growth trajectory changes or reality otherwise changes out from underneath the model. Thus, I always try to build pretty simple models and then be pretty careful in interpretation of them. Think: what’s going to break this model if it changes?

Marketing Attribution Models

I try not to write much about marketing attribution because it’s quicksand, but I’ll reluctantly dip my toe today. Before proceeding, I encourage you to take a moment to buy a Marketing Attribution is Fake News mug, a practical, if passive-aggressive, vessel from which to drink your coffee during the next QBR or board meeting.

Marketing attribution is the attempt to assign credit for marketing-generated opportunities (itself another layer of attribution problem) to the marketing channels that generated them. In English, let’s assume we all agree that marketing generated an opportunity. But that opportunity was created at a company where 15 people over the prior 6 quarters had engaged in some marketing program in some way — e.g., clicking an ad, attending a webinar, downloading a white paper, talking to us at a conference, etc.

There are typically two levels of reduction: first, we identify one primary contact from the pool of 15 and second, we identify one marketing program that we decide gets the credit for the opportunity. Typically, people use last-touch attribution, assigning credit to the last program the primary contact engaged with before the opportunity was created. This will overcredit lower-funnel programs (e.g., executive dinners) and undercredit higher-funnel programs (e.g., clicking on an ad). Some people use first-touch attribution, reversing the problem to over-credit higher-funnel programs and under-credit lower-funnel ones. Knowing that neither of those approaches is great, some send complexity to the rescue, using points-based attribution where each touch by each person scores one or more points, and you add up those points and then allocate credit across channels or programs on a pro rata basis. This is notionally more accurate, but the relative point assignments can be arbitrary and the veil of calculation confusion generally erodes trust in the system.
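To see how differently the three schemes carve up the same history, here is a toy comparison on a single opportunity. The touches and point weights are invented for illustration:

```python
# Toy comparison of last-touch, first-touch, and points-based attribution
# on one opportunity's touch history. Touches and point weights are invented.

touches = [  # (program, channel), in chronological order
    ("display ad click",    "ads"),
    ("webinar attended",    "webinars"),
    ("whitepaper download", "content"),
    ("executive dinner",    "events"),
]
POINTS = {"ads": 1, "webinars": 2, "content": 2, "events": 3}  # arbitrary weights

def last_touch(touches):
    return {touches[-1][1]: 1.0}   # all credit to the final (lower-funnel) touch

def first_touch(touches):
    return {touches[0][1]: 1.0}    # all credit to the initial (upper-funnel) touch

def points_based(touches):
    scores = {}
    for _, channel in touches:
        scores[channel] = scores.get(channel, 0) + POINTS[channel]
    total = sum(scores.values())
    return {channel: pts / total for channel, pts in scores.items()}

print("last-touch:  ", last_touch(touches))
print("first-touch: ", first_touch(touches))
print("points-based:", points_based(touches))
```

Same deal, three different answers. And the points-based one only looks more scientific: change the arbitrary weights and the allocation changes with them.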

The correct way, in my humble opinion, to do attribution analysis is to approach it with humility, view it as a triangulation problem, and to make sure people absolutely understand what you’re showing them before you show it (e.g., “we’ll be looking at marketing channel performance using last-touch based attribution on the next slide and before I show it, I want to ensure that everyone understands the limits of interpretation of this approach.”) Then follow any attribution-based performance analysis with some reverse-touch analysis where you show all the touches over the prior two years, deal by deal, for a small set of deals chosen by the CRO in order to demonstrate the messy, ground-level reality of prospect interactions over time. Simply put, it’s the CMO’s job to decide how to allocate resources in this very squishy world, to make those decisions (e.g., do we do tradeshow X and do we spend $Y) in active discussion with the CRO as their partner and with a full understanding of the available data and the limitations on its interpretability. The board or the e-staff simply can’t effectively back-seat drive this process by looking at one table and saying, “OMG, tradeshow oppties cost $25K each, let’s not do any more tradeshows!” If only the optimization problem were that simple.

But, back to the Box quote. How does it apply to attribution? These models are a lot wrong, at best a little useful, and even potentially dangerous. Hence my recommendations about disclaiming the data before showing it, using triangulation to take different bearings on reality, and doing reverse-touch analysis to immediately re-ground anyone floating in a cloud of last-touch-based over-simplification.

Note that the existence of next-generation, full-funnel attribution tools, such as Revsure, doesn’t radically change my viewpoint here because we are talking about the fundamental principles of models. They’re always wrong, especially when trying to model something as complex as the interactions of over 20 people at a customer with 5 people and 15 marketing programs at a company, all while those people are talking to their friends and reading blogs and seeing billboards from a vendor. I believe tools like Revsure can take the models from a lot wrong to a little wrong, and ergo improve them from potentially dangerous to useful. But you should still show the reverse-touch analysis to keep people grounded.

And Box’s quote still applies: “All models are wrong. Some are useful.” And what a lovely quote it is.

Slides from a Balderton Launched Go-To-Market Workshop

Just a quick post to share the slides I used today in a nearly 90-minute discussion with founders participating in the most recent cohort of the Balderton Launched incubator program.

Because this is focused on incubation-stage startups, we talked mostly about $0 to $1M (or $2M) issues, both how to win deals as a founder (largely covered through Q&A) and how, once you’re winning them, to transition from founder-led sales to sales-led sales. I’d say there are four key parts of that process, overall.

  • Throw spaghetti at the wall and see what works
  • Identify key clusters on the wall — particularly those with smart, well-informed prospects who bought from you (anyway). We can’t scale luck or chance.
  • Hire your head of sales 6-12 months before you want to build a sales team. And then lash yourselves together so you experience all GTM learning together. Then free them to go build a sales team and teach it what they have learned.
  • When it comes to hiring, we don’t want people learning how to do the job (e.g., managing a small sales team, per the prior bullet). We do want them learning how to do the job here. Beware the difference.

The slides are embedded below and downloadable as a PDF here.

Go-To-Market Troubleshooting:  Let’s Take It From The Top

So, you’re missing plan and revenue growth is down.  Well, welcome to the club.  You’re certainly not alone in these times. 

In this post, I’ll discuss what you can do about it – specifically, how you can apply some of the ideas I’ve discussed in Kellblog to troubleshoot go-to-market (GTM) performance.  I’ll focus on troubleshooting new business (“newbiz”) ARR plan attainment, the area where most companies seem to be having the most trouble [1].

Don’t Knee-Jerk Blame the Plan

The immediate temptation when missing plan is to blame the plan.  “It’s not realistic.”  “It was driven by the fundraise, not the bottom-up.”  But blaming plan is a poor place to start for two reasons.                        

First, you signed up for the plan when you submitted it to the board for approval.  Next time, if you don’t believe in a proposed plan, don’t be so quick to fold in the face of internal pressure.  Remember the old Fram oil filter commercial and think, “you can fire me now or fire me later.”  If you’re asking me to sign up for a plan that I don’t think I can achieve, you might as well fire me now [2].  The need to make such difficult judgments is the price of admission to the sales leadership role.  Cop out at your own peril, because they will indeed fire you later.

Second, when you follow the approach in this post, if the plan is unachievable it will emerge from the data.  So, bite your tongue, avoid any initial temptation to blame plan, and instead go look at the funnel.

The Two Questions and Two Metrics

Recall that in this post, I argued you should ask two questions, every quarter, when you’re missing plan:

  1. Are we giving sales the chance to hit the number?
  2. Is sales converting enough of the pipeline to hit the number?

That’s it.  Everything comes down to these two questions.  No matter the root problem, it will be revealed in answering them.  Remember, the way to make plan for twelve consecutive quarters is one at a time.  So why not focus on next quarter?  And if you’re chronically missing plan, why not make a steady-state assumption to simplify things further? [3]

Starting with the above two questions makes things simple by breaking the entire funnel in two.  Simplifying the problem is important because you can quickly and irrecoverably descend into analytical quicksand.  When I first meet them, many companies are neck-deep in such quicksand, comparing dashboard clips, reports, and spreadsheets derived from different systems, lost in an endless sea of non-footing detail, having completely lost the business forest for the salesops trees.

Note that neither of the two above questions assigns blame.  As a consultant, I have the distinct advantage of not caring where the trouble is, making me a disinterested party, de facto impartial.  I encourage CXOs to adopt a similar approach, simply stating facts, avoiding blame (e.g., inferred causes), and acting as dispassionate analysts when analyzing GTM problems.  While you will eventually need to ask why you have certain problems, it’s always best to start with simple statements of fact, get agreement on them, and build from there.  For example:

  • “We consistently start quarters with insufficient pipeline coverage” is a blameless statement of fact.  It does not say whose job it is to generate pipeline (if that’s even been detailed out across sources) or why they are failing to do so.
  •  “We are converting a below-normal percentage of our week 3 pipeline,” less obviously, is also a blameless statement of fact.  While it’s clearly the job of sales to convert pipeline, the statement makes no assertion as to why we are seeing abnormally low conversion rates (e.g., pipeline quality, change in competitive market, sales execution).

When it comes to metrics, the first of the above questions is measured by pipeline coverage, more precisely week-3 pipeline coverage [4] [5]. The second is measured by a conversion rate, specifically week-3 pipeline conversion rate.  Notably, this is not a win rate, and please read this post to ensure that you understand why.

Are We Giving Sales a Chance to Hit the Number?

Make a chart like this one to answer this question.

Here you see, for newbiz ARR for the trailing nine quarters, week 3 pipeline dollars, week 3 pipeline coverage (pipeline/plan), ARR booked, week-3 pipeline conversion, and the pipeline coverage target implied by the week-3 conversion rate (i.e., its inverse).  Pipeline conversion rates are more interesting when viewed in conjunction with plan attainment, so I’ve added ARR plan and plan attainment as well. 

Analyzing this chart, we can see a few things:

  • From 1Q22 through 1Q23 we converted about 33% of the pipeline
  • We were also consistently hitting plan in that timeframe
  • In 2Q23, we started with only 2.3x coverage, converted a healthy 40% of it, but still came up short, at 91% of plan.
  • That rough pattern continued in 3Q23 and 4Q23
  • 1Q24 started with the weakest coverage in the past nine quarters (1.9x)
  • While sales is forecasting record conversion of that pipeline (45%), we are nevertheless forecasting to land at only 86% of plan
  • I’m not sure I believe the forecast because 45% conversion is borderline unrealistic and could simply be the CRO trying their best to hold the line
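All of the chart’s derived columns are simple ratios, so they’re easy to recompute. A sketch, using invented numbers that resemble the 2Q23 quarter above:

```python
# The derived metrics behind a coverage/conversion chart, on invented numbers.

def quarter_metrics(week3_pipeline, plan, booked):
    conversion = booked / week3_pipeline
    return {
        "coverage": round(week3_pipeline / plan, 2),          # pipeline / plan
        "conversion": round(conversion, 3),                   # booked / pipeline
        "attainment": round(booked / plan, 2),                # booked / plan
        "implied_coverage_target": round(1 / conversion, 2),  # inverse of conversion
    }

# A quarter starting with $9.2M of week-3 pipeline against a $4.0M plan,
# ultimately booking $3.64M:
print(quarter_metrics(9_200_000, 4_000_000, 3_640_000))
```

Healthy ~40% conversion, but only 2.3x coverage and 91% attainment. The implied coverage target (about 2.5x here) is what coverage you would have needed, at that conversion rate, to make plan.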

I conclude that this company is starting with insufficient pipeline.  That is, they’re not giving sales a chance to hit the number.  How do I conclude that?

  • By comparison to pipeline coverage benchmarks.  3.0x is the typical pipeline coverage goal and you’ll note that in the good times (1Q22 through 1Q23) we consistently started with 3.0x+ and we consistently made plan. 
  • By comparison to pipeline conversion benchmarks.  33% is a standard conversion rate.  Here we are running at 40%+, which is best-in-class conversion.  Pipeline conversion is not the problem.
  • More importantly, by comparison to ourselves.  In our recent history, we consistently made plan when we started with 3.0x+ coverage and missed it when we started with 2.3 to 2.4x.  This quarter (1Q24) we’re starting with 1.9x, forecasting record conversion, and still landing at only 86% of plan.

The solution to the insufficient pipeline problem is, unsurprisingly, to make a plan to generate more pipeline.

Here are some of the high-level steps in making that plan:

  • Define pipeline generation targets across the four major pipeline sources.  It’s surprising how many companies don’t start with this basic step.  For bonus points, over-allocate the goals to target 110% of what you need. [6]
  • I prefer to set these targets by opportunity count, not pipeline dollars, because I think it’s more visceral and less easily gamed [7].
  • Do a cost/oppty analysis across your pipeline sources to get an idea of how much money any given pipeline source (e.g., alliances, demandgen) would need to create, for example, 20 more oppties next quarter.  Remember to focus on variable, not average, cost [8].
  • Be sure to check with the leader of each pipeline source on their ability to absorb extra money to generate more pipeline.  If you have 12 SDRs reporting to one manager, they may need to bring in another manager before hiring 3 more SDRs.  Alternatively, sellers may have extra time on their hands and the ability to put more time into outbound.  Alliances may have a hot candidate they want to hire, but no open headcount, and could execute quickly if one were opened.  It’s not just about money; it’s about the ability to productively spend it.
  • Accept that you may be overallocated to sales versus pipeline generation.  In this case, the best solution might well be to terminate the bottom N sellers and convert the newly liberated budget to pipeline generation — so that everyone else has a chance at success.  This is painful, but sometimes necessary, and after you’ve had to do it once, you’ll be more careful to plan holistically in the future.
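On the variable-versus-average-cost point in the steps above, the distinction is worth making explicit with numbers (invented here for illustration):

```python
# Variable (marginal) vs. average cost per oppty, with invented numbers.
# Fixed costs (the CMO, the PR agency, the product marketer) don't scale
# with oppty count, so they inflate the average but not the marginal cost.

fixed_cost = 300_000             # $/quarter, independent of oppty volume
variable_cost_per_oppty = 4_000  # incremental demandgen spend per extra oppty
current_oppties = 100

average_cost = (fixed_cost + variable_cost_per_oppty * current_oppties) / current_oppties
cost_of_20_more = 20 * variable_cost_per_oppty

print(f"average cost/oppty: ${average_cost:,.0f}")     # misleading for this decision
print(f"cost of 20 more:    ${cost_of_20_more:,.0f}")  # the number that matters
```

Budgeting 20 incremental oppties at the $7,000 average would nearly double the real $80,000 requirement, which is why the marginal calculation is the one to use.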

It goes without saying that no pipeline analytics will work if you lack basic pipeline discipline – i.e., if you don’t have clear definitions for stages, close dates, oppty values, and forecast categories, and if you don’t regularly enforce them via periodic pipeline scrubs.

The Floating Bar Problem

Before diving into pipeline conversion, let’s address a special case of insufficient pipeline:  one where the pipeline initially looks sufficient but burns off at an above-average rate across the quarter.  You can see this by looking weekly at to-go pipeline coverage.

What’s usually happening in these cases is that some material percentage of your week-3 pipeline is effectively fake.  This happens because, when pipeline is scarce and sellers are under pressure to each carry 3x coverage [9], they will take lower-quality opportunities into their pipeline.  For example, long-shot oppties that appear rigged for the competition, immature oppties where sellers hope to create a buying timeline, or self-nurture leads that may only become real oppties in the future.

I call the tendency to work on lower quality oppties in tough times, the “floating bar problem” because sales silently lowers (or in good times, raises) the bar for admission into the pipeline.  This is insidious because the result is fake pipeline that creates an illusion of coverage which disappears as the quarter progresses.

The solution to this problem is simple in theory, but hard in practice.

  • Sales management needs to hold the line on what gets into the pipeline, applying the same standards in tough times as good ones.
  • If sales management wants to allow sellers to work on low probability “oppties,” that’s fine but, well, get them out of the opportunity management system.  Use tasks to track work.  But only promote a lead to an oppty when it meets the standard for being an oppty.

If, for example, SDRs are passing low-quality stage-1 oppties to sales, that should not show up in the numbers as a reduced pipeline conversion rate.  Instead, it should show up in a higher stage-2 rejection rate.  This point is completely lost on most sales managers, so please make sure you understand it.  If you maintain pipeline discipline, lower-quality oppties should show up not as a reduced stage-2-to-close rate, but as an increased stage-2 rejection rate.  And pipeline discipline starts at stage 2 – where sales decides to accept or reject oppties.  It’s wrong to accept sub-standard oppties, pollute the oppty management system with fake pipeline, convert little of it, miss plan, and wreck the company’s pipeline analytics in the process.

I’m not trying to prevent sales from working on whatever sales management wants them to work on.  But I am saying one thing:  whatever they are, don’t call them oppties in “my” oppty management system if they don’t meet the defined standards for oppties [10].

Is Sales Converting Enough of the Pipeline?

While it’s the job of sales to convert pipeline into ARR, that doesn’t mean sales execution is the only factor that drives conversion rates.

Here you see conversion rates plummeting, dropping by 11 percentage points between 1Q23 and 2Q23 and then by another 5 percentage points by 4Q23.  By the 1Q24 forecast, the pipeline conversion rate has been effectively cut in half from ~32% to ~16%.  Note that during the recent dark times (from 2Q23 to 1Q24) we have been starting with ~3.0x pipeline coverage, but converting so little that we’re landing in the dismal range of 47% to 65% of plan.

Let’s assume we have the operational basics covered, so this is real pipeline, validated and scrubbed by sales management, and held to consistent standards over time.  But we’re converting a lot less of it than we used to.  Thus, I conclude that the company’s problem is pipeline conversion, not pipeline coverage.

What possible factors could be driving reduced pipeline conversion rates?  Well, there are a lot of them, so we’ll talk about each.

  • Changes in averages (i.e., ceteris non paribus).  Most productivity models assume a constant average sales price (ASP) and average sales cycle length (ASC).  If ASPs go down, you will hit your count-based targets, but miss your dollar-based ones.  If ASCs increase, you may preserve your eventual close rates, but stretch them out over time, reducing quarterly conversion rates and plan attainment.
  • ASP decreases.  Typically, due to budgetary pressure and increased price competition, but also can be due to an overreliance on discounting.  Some of this is inevitable in a downturn.  You can mitigate it through pricing and packaging changes (e.g., new add-ons to preserve price and/or offset churn at renewal).
  • Slip rate increases.  When ASCs lengthen, more deals slip to the following quarter(s).  Pipeline scrubs can provide early detection and deal reviews can offer re-acceleration strategies.  The biggest risk is that these deals never close at all and simply hit no-decision or derail.
  • Win rate decreases.  Win rates usually decrease when a new competitor enters the market or when an existing competitor leapfrogs your product or your market position (e.g., passes you in market share).  Competitive research, sales training, and selling the roadmap are the usual responses.
  • An absence of big deals.  Some CROs run their business as a mix of baseline deals to hit say 60-80% of plan, topped up by big deals that provide the rest.  During a downturn those big deals may evaporate leaving only the run-rate business.  The usual response is a strategic accounts program to focus on generating big deals and a focus on pipeline generation in the run-rate business to cover the gap.
  • Pipeline substitution.  This is a subtle problem due to a change in pipeline mix, with low-converting pipeline substituting for high-converting pipeline.  This is dangerous because you “look covered” at the start of the quarter but end up below plan at the end.  Let’s drill in a bit here.

Pipeline Substitution

Not all pipeline is created equal.  Pipeline for certain products often converts at a higher rate than others.  Pipeline conversion rates typically vary by source, e.g., with outbound SDRs typically converting at a low rate and alliances converting at a high rate.  Pipeline conversion might also vary by geography, with established geographies delivering higher conversion rates than emerging ones.

See this chart for an example:

In this example, we start every quarter with $10M in pipeline.  In 1Q23 through 3Q23 we convert 25% of it, but in 4Q23 we convert only 20%.  What happened?  The pipeline mix changed.  Starting in 4Q23, we substituted $2M in high-converting pipeline (from sales/outbound and alliances) with $2M in low-converting pipeline (from SDR/outbound).  Blended pipeline conversion thus dropped from 25% to 20%, effectively substituting nutrient-poor pipeline for nutrient-rich pipeline while keeping the overall amount the same.
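Blended conversion is just a mix-weighted average, which makes the substitution effect easy to verify. The per-source rates and mix below are invented to reproduce the 25%-to-20% drop:

```python
# Blended pipeline conversion as a mix-weighted average. Per-source rates
# and mix are invented to reproduce the 25% -> 20% drop in the example.

def blended_conversion(mix):
    """mix: {source: (pipeline dollars, conversion rate)} -> blended rate."""
    total = sum(dollars for dollars, _ in mix.values())
    return sum(dollars * rate for dollars, rate in mix.values()) / total

before = {  # $10M total
    "sales/outbound": (3_000_000, 0.30),
    "alliances":      (3_000_000, 0.40),
    "SDR/outbound":   (4_000_000, 0.10),
}
after = {   # still $10M total, but $2M shifted from high- to low-converting sources
    "sales/outbound": (2_000_000, 0.30),
    "alliances":      (2_000_000, 0.40),
    "SDR/outbound":   (6_000_000, 0.10),
}

print(f"before: {blended_conversion(before):.0%}")
print(f"after:  {blended_conversion(after):.0%}")
```

Note that total coverage never moves in this example, which is why the problem hides: only segmenting by source reveals the swap.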

Identifying these problems is a lot of work because you’ll need to segment pipeline by multiple variables — such as pipeline source, product, geography, and business segment (e.g., enterprise vs. corporate accounts) — to get historical average conversion rates and percent mix, and then see if changes in pipeline composition are driving reductions in conversion rates.  If so, the usual solution is to re-aim your pipeline generation back to the high-converting segments.

In this post, we have shown how you can troubleshoot go-to-market problems by splitting the funnel in two and focusing on two questions:

  • Are we giving sales the chance to hit the number each quarter, as measured by pipeline coverage?
  • Is sales converting enough of the pipeline to hit the number, as measured by pipeline conversion?

I’ve also provided numerous notes and links that you can use to deepen your knowledge of how to solve these problems.

# # #

Notes

[1] The same analysis approach can easily apply to expansion ARR, which should be analyzed independently via its own funnel because it typically has different conversion rates and shorter sales cycles. 

[2] Deadheads will understand that I had to resist writing, “nothing else shaking, so you might just as well.”

[3] Think:  given that we’re off the rails, forget the plan for a minute and analyze what we need to do to add $4M in newbiz ARR every quarter.  This liberates you from needless, complexifying math that makes it harder to see the answer and is a great way to start in the crawl-walk-run exercise of getting back on track.

[4] More precisely, day-1, week-3, current-quarter pipeline coverage.  Snapshotting Sunday night before the start of week 3 gives you a consistent point to compare across quarters.  Waiting until the start of week 3 gives sales (more than) enough time to clean up the pipeline after the end of the prior quarter but is still early enough to be considered “starting pipeline.”  Note that you may need to apply corrections for any deals that close in the first two weeks of the quarter.  A high-class problem, at least.

[5] Or, in a monthly cadence, day-3 pipeline coverage.  See my post on the mental mapping from quarterly to monthly cadence for more on this concept.

[6] There is a cost to this type of insurance; it’s not great for your CAC ratio if you don’t end up over-performing plan (which ceteris paribus, starting with 110% of your pipeline target, you should).  But it does reduce the risk of missing plan.  To me, the correct sequence is to focus on making plan first, before focusing on efficiency — but you need to have the cash to underwrite that philosophy.

[7] For example, one big deal that masks what’s otherwise a pipeline starvation situation.  If you’re going to set targets on dollars (which typically involves using some placeholder value) then you should create the oppties with a close date far in the future (e.g., one year) that sales can pull forward once they further qualify the account.  The alternative is usually generating lots of fake pipeline that is auto-dumped into next quarter that gets pushed out in the first weeks of the quarter.  Also, see this for more on ensuring pipeline coverage by seller, and not just in aggregate.

[8] You’re not going to hire an extra CMO, an extra PR agency, and an extra product marketer to generate 20 more oppties.  Those costs are effectively fixed.

[9] And putting them under such pressure can run in diametric opposition to pipeline discipline and enforcing pipeline standards by encouraging reps to enter dubious deals as pipeline to get their manager off their backs.

[10] I say “my” oppty management system to remind people that carrying sub-standard oppties has impacts well beyond themselves and that the oppty management system is the company’s property, not theirs.  For old movie fans, when speaking of the oppties in “my” oppty management system, I’m always reminded of Cool Hand Luke: “What’s your dirt doing in Boss Kean’s ditch?”

The Impact of AI on Go-To-Market: Slides from my Balderton Event

Last week I hosted an event at the Balderton Capital London headquarters discussing the impact of AI on go-to-market (GTM) functions. The event was inspired by two things:

  • My aborted attempt to write an AI GTM guide, after I realized just how huge the space was and how fast it was changing. I quickly understood it’d take too long to write and it would be out of date the second it was published. But the exercise nevertheless got me started researching AI and GTM.
  • The following slide from Battery Ventures that I discussed in my 2024 predictions post. This slide argues that, thanks to AI-driven productivity improvement, you should be able to drive the same quota with a 75-person organization that previously required a 110-person organization. This got me thinking: boards are going to start asking about that 30% productivity improvement in 2H24 and what are we going to say?
What are we going to say when the board asks for that 30%?

When the market is in a state of confusion and things are moving fast, it’s better to have a conversation than to write a guide. So I found two of the smartest people I know and asked them to join me on a panel:

  • Alice de Courcy, CMO of Cognism, an amazing company that’s doing some of the best solutions-oriented and thought-leadership (aka “demand generation”) marketing in Europe. Alice is also the author of Diary of a First-Time CMO.
  • Firaas Rasheed, founder and CEO of Hook, a company that’s re-inventing customer success software. Firaas argues that CS software lost the plot and ended up more focused on process (e.g., QBRs and NPS surveys) than on results (e.g., churn prediction and prevention). The company’s origin story is quite compelling and told here.

After I gave a brief introduction to set the stage, we focused on four high-level questions that GTM leaders are pondering:

  • What should I make of all the AI tools flooding the market?
  • What should my strategy be?
  • What are my higher-ups expecting?
  • Where should I start?

Thanks again to Alice and Firaas for joining me, and thanks to everyone who attended. The slides are available in PDF here and are embedded below. Balderton is writing up a summary of the event that, once available, I’ll link to here.

Note: both Cognism and Hook are Balderton portfolio companies.

Video of my SaaStock Presentation: Strategies and Tactics to Drive GTM Efficiency

Just a quick post to share the video of the SaaStock 2022 presentation that I delivered in Dublin last October.  The title of the session is Driving Go-To-Market Efficiency in the Coming 24 Months.

In a relatively short, 20-minute format, I cover:

  • How and why the go-to-market (GTM) functions invariably bear most of the brunt of cost pressure.
  • The dangers of Excel-induced hallucinations, where you take a funnel model and make a series of small tweaks that compound you into fantasy-land.
  • The downsides of a budget free-for-all.  Sales usually wins the budget battle, but loses the performance war.
  • How the best solution is a Three Musketeers approach (“all for one and one for all”) across sales, marketing, success, and services.  (Yes, there were actually four of them, providing an apt metaphor for the inclusion of services.)
  • And then, in a thought-consideration punch list format, scores of ideas that you can consider to help improve the efficiency of your GTM organization.

Here is the video.