Category Archives: Enterprise Software

Product Power Breakfast: Thomas Interviews Dave

(TLDR — Link to Episode #2, Thursday, April 1st at 8 am Pacific.)

Well, we just finished the first episode (“Dave Interviews Thomas”) of our Enterprise SaaS Product Power Breakfast series and wow, was it crazy.  In addition to our regularly scheduled interview on product management with Thomas, we had:

  • A guest appearance from the ever-brilliant Jason Lemkin, EchoSign founder, VC, and creator of SaaStr — thanks for coming!
  • A surprise cameo from Dharmesh Shah, cofounder and CTO of HubSpot (who I think Jason pulled up [1]) — thanks for coming!

While it was definitely a romp in terms of structure (or lack thereof), it was high energy, full of great content, and fun.

So, we’re going to try it again next week with Episode #2:  Thomas Interviews Dave. Topics on the agenda include:  product management, product strategy, product positioning, and product roadmaps.

Maybe he can control the room better than I did.  See you there!

# # #

Notes

[1] I was following the “it can’t be that Dharmesh” and the “let celebrities be audience members in peace” principles.

 

Congratulations to Nuxeo on its Acquisition by Hyland

It feels like just the other day that I met a passionate French entrepreneur in the bar on the 15th floor of the Hilton Times Square to discuss Nuxeo.  I remember being interested in the space, which I then viewed as next-generation content management (which, by the way, seemed extraordinarily in need of a next generation) and which today we’d call a content services platform (CSP) — in Nuxeo’s case, with a strong digital asset management angle.

I remember being impressed with the guy, Eric Barroca, as well.  If I could check my notebook from that evening, I’m sure I’d see written:  “smart, goes fast, no BS.”  Eric remains one of the few people who — when he interrupts me saying “got it” — I’m quite sure actually does.

To me, Nuxeo is a tale of technology leadership combined with market focus, teamwork, and strong leadership, all combining to produce a great result.

Congrats to Eric, the entire team, and the key folks I worked with most closely during my tenure on the board:  CMO/CPO Chris McGlaughlin, CFO James Colquhoun, and CTO Thierry Delprat.

Thanks to the board for having me, including Christian Resch and Nishi Somaiya from Goldman Sachs, Michael Elias from Kennet, and Steve King.  It’s been a true pleasure working with you.

An Epitaph for Intrapreneurship

About twenty years ago, before I ran two startups as CEO and served as product-line general manager, I went through an intrapreneurship phase, where I was convinced that big companies should try to act like startups.  It was a fairly popular concept at the time.

Heck, we even decided to try the idea at Business Objects, launching a new analytical applications division called Ithena, with a mission to build CRM analytical applications on top of our platform.  We made a lot of mistakes with Ithena, which was the beginning of the end of my infatuation with the concept:

  • We staffed it with the wrong people.  Instead of hiring experts in CRM, we staffed it largely with experts in BI platforms.  Applications businesses are first and foremost about domain expertise.
  • They built the wrong thing.  Lacking CRM knowledge, they invested in building platform extensions that would be useful if one day you wanted to build a CRM analytical app.  From a procrastination viewpoint, it felt like a middle school dance.  Later, in Ithena’s wreckage, I found one of the prouder moments of my marketing career:  when I simply repositioned the product to what it was (versus what we wanted it to be), sales took off.
  • We blew the model.  They were both too close and too far.  They were in the same building, staffed largely with former parent-company employees, and they kept stock options in both the parent and the spin-out.  It didn’t end up a new, different company.  It ended up a cool-kids area within the existing one.
  • We created channel conflict with ourselves.  Exacerbated by the thinness of the app, customers had trouble telling the app from the platform.  We’d have platform salesreps saying “just build the app yourself” and apps salesreps saying that you couldn’t.
  • They didn’t act like entrepreneurs.  They ran the place like big-company, process-oriented people, not scrappy entrepreneurs fighting for food to get through the week.  Favorite example:  they had hired a full-time director of salesops before they had any customers.  Great from an MBO achievement perspective (“check”).  But a full-time employee without any orders to book or sales to analyze?  Say what you will, but that would never happen at a startup.

As somebody who started out pretty enthralled with intrapreneurship, I ended up pretty jaded on it.

I was talking to a vendor about these topics the other day, and all these memories came back.  So I did a quick bit of Googling to find out what happened to that intrapreneurship wave.  The answer:  not much.

Entrepreneurship crushes intrapreneurship in Google Trends.  Just for fun, I added SPACs to see their relative popularity.

Here’s my brief epitaph for intrapreneurship.  It didn’t work because:

  • Intrapreneurs are basically entrepreneurs without commitment.  And commitment, that burn-the-ships attitude, is a key part of willing a startup into success.
  • The entry barriers to entrepreneurship, particularly in technology, are low.  It’s not that hard (provided you can dodge Silicon Valley’s sexism, ageism, and other undesirable -isms) for someone in love with an idea to quit their job, raise capital, and start a company.
  • The intrapreneurial venture is unable to prioritize its needs over those of the parent.  “As long as you’re living in my house, you’ll do things my way,” might work for parenting (and it doesn’t) but it definitely does not work for startup businesses.
  • With entrepreneurship, one “yes” enables an idea; with intrapreneurship, one “no” can kill it.  What’s more, the sheer inertia in moving a decision through the hierarchy can kill an idea or cause a missed opportunity.
  • In terms of the ability to attract talent and raise capital, entrepreneurship beats intrapreneurship hands down.  Particularly today, when the IPO class of 2020 raised a mean of $350M prior to going public.

As one friend put it, it’s easy with intrapreneurship to end up with all the downsides of both models.  Better to be “all in” and make the new initiative part of your corporate self-image, or “all out” and spin it out as an independent entity.

I’m all for general managers (GMs) acting as mini-CEOs, running products as a portfolio of businesses.  But that job, and it’s a hard one, is simply not the same as what entrepreneurs do in creating new ventures.  It’s not even close.

The intrapreneur is dead, long live the GM.

The Holy Grail of Enterprise Sales: Is a Repeatable Sales Process Enough?

(This is the third in a three-part restructuring and build-out of a previous post.  See note [1] for details.)

In the first two posts in this series, we first defined a repeatable sales process and then discussed how to prove that your sales process is repeatable.

All that was just the warm-up for the big idea in this series:  is repeatability enough?

The other day I was re-reading my favorite book on data governance (and yes I have one), Non-Invasive Data Governance by Bob Seiner.  Reading it reminded me of the Capability Maturity Model, from Carnegie Mellon’s Software Engineering Institute.

Here’s the picture that triggered my thinking:

Did you see it?  Look again.

Repeatable is level two in a five-level model.  Here we are in sales and marketing striving to achieve what our engineering counterparts would call 40% of the way there.  Doesn’t that explain a lot?

To think about what we should strive for, I’m going to switch models, to CMMI, which later replaced CMM.   While it lacks a level called “repeatable” – which is what got me thinking about the whole topic in the first place – I think it’s nevertheless a better model for thinking about sales [2].

Here’s a picture of CMMI:

I’d say that most of what I defined as a repeatable sales process fits into the CMMI model as level 3, defined.  What’s above that?

  • Level 4, quantitatively managed. While most salesforces are great about quantitative measurement of the result – tracking and potentially segmenting metrics like quota performance, average sales price, expansion rates, win rates – fewer actually track and measure the sales process [3].  For example, time spent at each stage, activity monitoring by stage, conversion by stage, and leakage reason by stage.  Better yet, why just track these variables when you can act on them?  For example, put rules in place to take squatted opportunities from reps and give them to someone else [4], or create excess stage-aging reports that will be reviewed in management meetings (see the sketch after this list).
  • Level 5, optimizing. The idea here is that once the process is defined and managed (not just tracked) quantitatively, then we should be in a mode where we are constantly improving the process.  To me, this means both analytics on the existing process as well as qualitative feedback and debate about how to make it better.  That is, we are not only in continual improvement mode when it comes to sales execution, but also when it comes to sales process.  We want to constantly strive to execute the process as best we can and also strive to improve the process.  This, in my estimation, is both a matter of culture and focus.  You need a culture that is process- and process-improvement-oriented.  You need to take the time – as it’s often very hard to do in sales – to focus not just on results, but on the process and how to constantly improve it.
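
If you want to picture what level 4 looks like in practice, here’s a minimal sketch, assuming per-opportunity stage-history records with hypothetical field names and aging thresholds.  It computes exit rates and time-in-stage by stage and produces the kind of excess stage-aging report mentioned above.

```python
# A minimal sketch (assumed field names, assumed aging thresholds) of stage-level
# tracking: exits from each stage, average time in stage, and an excess stage-aging
# report. A real implementation would also split exits into advancement vs. leakage.
from collections import defaultdict
from datetime import date

# Each record: (opportunity id, stage, date entered, date exited or None if still open)
history = [
    ("opp-1", "Discovery", date(2021, 1, 4),  date(2021, 1, 20)),
    ("opp-1", "Demo",      date(2021, 1, 20), date(2021, 2, 15)),
    ("opp-1", "Proposal",  date(2021, 2, 15), None),
    ("opp-2", "Discovery", date(2021, 1, 6),  date(2021, 3, 1)),
    ("opp-2", "Demo",      date(2021, 3, 1),  None),
    ("opp-3", "Discovery", date(2021, 1, 11), None),   # a candidate squatted opportunity
]

TODAY = date(2021, 4, 1)
MAX_AGE_DAYS = {"Discovery": 30, "Demo": 45, "Proposal": 30}   # assumed policy

entered = defaultdict(int)
exited = defaultdict(int)
days_in_stage = defaultdict(list)
aging_report = []

for opp_id, stage, entered_on, exited_on in history:
    entered[stage] += 1
    if exited_on is not None:
        exited[stage] += 1
        days_in_stage[stage].append((exited_on - entered_on).days)
    else:
        age = (TODAY - entered_on).days
        days_in_stage[stage].append(age)
        if age > MAX_AGE_DAYS.get(stage, 60):
            aging_report.append((opp_id, stage, age))   # review (or reassign) these

for stage in entered:
    exit_rate = exited[stage] / entered[stage]
    avg_days = sum(days_in_stage[stage]) / len(days_in_stage[stage])
    print(f"{stage:10s} exit rate {exit_rate:.0%}  avg days in stage {avg_days:.0f}")
print("Excess stage-aging report:", aging_report)
```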

To answer my own question:  is repeatability enough?  No, it’s not.  It’s a great first step in the industrialization of your sales process, but it then quickly becomes the platform on which you start quantitative management and optimization.

So the new question should be not “is your sales process repeatable?” but “is it optimizing?”  And never “optimized,” because you’re never done.

# # #

Notes

[1] I have a bad habit, which I’ve been slowly overcoming, of accidentally putting real meat on one topic into an aside of a post on a different one.  After reading the original post, I realized that I’d buried the definition of a repeatable sales model and the tests for having one into a post that was really about applying CMMI to the sales model.  Ergo, as my penance, as a service to future readers, and to help my SEO, I am decomposing that post into three parts and elaborating on it during the restructuring process.

[2] The nuance is that in CMM you could have a process that was repeatable without being (formally) defined.  CMMI gets rid of this notion which, for whatever it’s worth, I think is pretty real in sales.  That is, without any formal definition, certain motions get repeated informally and through word of mouth.

[3] With the notable exception of average sales cycle length, which just about everyone tracks – but this just looks at the whole process, end to end.  (And some folks start it late, e.g., from-demo as opposed to from-acceptance.)

[4] Where squatting means accepting an opportunity but not working on it, either at all or sufficiently to keep it moving.

The Holy Grail of Enterprise Sales: Proving a Repeatable Sales Process

(This is the second in a three-part restructuring and build-out of a previous post.  See note [1] for details.)

In the prior post we introduced repeatable sales process as the Holy Grail of enterprise software sales and, unlike some who toss the term around rather casually, we defined a repeatable sales process as meaning you have six things:

  1. Standard hiring profile
  2. Standard onboarding program
  3. Standard support ratios
  4. Standard patch
  5. Standard kit
  6. Standard sales methodology

The point of this, of course, is to demonstrate that given these six standard elements you can consistently deliver a desirable, standard result.

The surprisingly elusive question, then:  how do you measure that?

  • Making plan?  This should be a necessary but not sufficient condition for proving repeatability.  As we’ll see below, you can make plan in healthy as well as unhealthy ways (e.g., off a small number of reps, off disproportionate expansion and weak new logo sales).
  • Realizing some percentage of your sales capacity?  I love this — and it’s quite useful if you’ve just lost or cut a big chunk of your salesforce and are ergo in the midst of a ramp reset — but it doesn’t prove repeatability because you can achieve it in both good and bad ways [2].
  • Having 80% of your salesreps at 100%+ of quota?  While I think percent of reps hitting quota is the right way to look at things, I think 80% at 100% is the wrong bar.

Why is defaulting to 80% of reps at 100%+ of quota the wrong bar?

  • The attainment percentage should vary as function of business model: with a velocity model, monthly quotas, and a $25K ARR average sales price (ASP), it’s a lot more applicable than with an enterprise model, annual quotas, and a $300K ASP.
  • 80% at 100%+ means you beat plan even if no one overperforms [3] – and that hopefully rarely happens.
  • There is a difference between annual and quarterly performance, so while 80% at 100% might be reasonable in some cases on an annual basis, on a quarterly basis it might be more like 50% — see the spreadsheet below for an example.
  • The reality of enterprise software is that performance is way more volatile than you might like it to be when you’re sitting in the boardroom.
  • When we’re looking at overall productivity we might look at the entire salesforce, but when we’re looking at repeatability we should look at recently hired cohorts. Does 80% of your third-year reps at quota tell you as much about repeatability – and the presumed performance of new hires – as 80% of your first-year reps cohort?

Long story short, in enterprise software, I’d say 80% of salesreps at 80% of quota is healthy, provided the company is making plan.  I’d look at the most recent one-year and two-year cohorts more than the overall salesforce.  Most importantly, to limit survivor bias, I’d look at the attrition rate on each cohort and hope for nothing more than 20%/year.  What good is 80% at 80% of quota if 50% of the salesreps flamed out in the first year?  Tools like my salesrep ramp chart help with this analysis.

Just to make the point visceral, I’ll finish by showing a spreadsheet with a concrete example of what it looks like to make plan in a healthy vs. unhealthy way, and demonstrate that setting the bar at 80% of reps at 100% of quota is generally not realistic (particularly in a world of over-assignment).

If you look at the analysis near the bottom, you see the healthy company lands at 105% of plan, with 80% of reps at 80%+ of quota, and with only 40% of reps at 100%+ of quota.  The unhealthy company produces the same sales — landing the company at 105% of plan — but due to a more skewed distribution of performance gets there with only 47% of reps at 80%+ and a mere 20% at 100%+.
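
If you don’t want to open the spreadsheet, here’s a tiny sketch with made-up numbers (mine, not the spreadsheet’s, though they roughly reproduce its shape) showing how the same 105% of plan can come from a healthy or an unhealthy distribution.

```python
# A made-up illustration (my numbers, not the post's actual spreadsheet) of the
# same shape: ten reps, 20% aggregate over-assignment, identical total bookings,
# very different distributions across reps.
def summarize(label, quota, bookings, plan):
    attainment = [b / quota for b in bookings]
    at_80 = sum(a >= 0.8 for a in attainment) / len(attainment)
    at_100 = sum(a >= 1.0 for a in attainment) / len(attainment)
    print(f"{label}: {sum(bookings) / plan:.0%} of plan, "
          f"{at_80:.0%} of reps at 80%+, {at_100:.0%} at 100%+ of quota")

plan = 10_000          # operating plan, in $K
quota = 1_200          # per-rep quota, in $K (10 reps x 1,200 = 20% over-assignment)

healthy   = [1_500, 1_350, 1_250, 1_200, 1_150, 1_100, 1_050, 1_000, 600, 300]
unhealthy = [3_000, 2_400, 1_100, 1_050, 1_000, 700, 500, 400, 250, 100]

summarize("Healthy  ", quota, healthy, plan)    # 105% of plan, 80% at 80%+, 40% at 100%+
summarize("Unhealthy", quota, unhealthy, plan)  # 105% of plan, 50% at 80%+, 20% at 100%+
```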

In our final post in this series, we’ll ask the question:  is repeatability enough?

# # #

Notes

[1] I have a bad habit, which I’ve been slowly overcoming, of accidentally putting real meat on one topic into an aside of a post on a different one.  After reading the original post, I realized that I’d buried the definition of a repeatable sales model and the tests for having one into a post that was really about applying CMMI to the sales model.  Ergo, as my penance, as a service to future readers, and to help my SEO, I am decomposing that post into three parts and elaborating on it during the restructuring process.

[2] Unless you’ve had either late hiring or unexpected attrition, 80% of your notional sales capacity should roughly equal your operating plan targets.  So this point is normally subtly equivalent to the prior one.

[3] Per the prior point, the typical over-assignment cushion is around 20%.

The Holy Grail of Enterprise Sales: Defining the Repeatable Sales Process

(This is the first in a three-part restructuring and build-out of the prior post.  See note [1] for details.)

The number one go-to-market question in any enterprise software startup is:  “do you have a repeatable sales process?” or, in more contemporary Silicon Valley patois, “do you have a repeatable sales motion?”

It’s one of the key milestones in startup evolution, which proceeds roughly like:

  • Do you have a concept?
  • Do you have a working product?
  • Do you have any customer traction (e.g., $1M in ARR)?
  • Have you established product-market fit?
  • Do you have a repeatable sales process?

Now, when pressed to define “repeatable sales process,” I suspect many of those asking might reply along the same lines as the US Supreme Court in defining pornography:

“I shall not today attempt further to define the kinds of material I understand to be embraced… but I know it when I see it …”

That is, in my estimation, a lot of people throw the term around without defining it, so in the Kelloggian spirit of rigor, I thought I’d offer my definition:

A repeatable sales process means you have six things:

  1. Standard hiring profile
  2. Standard onboarding program
  3. Standard support ratios
  4. Standard patch
  5. Standard kit
  6. Standard sales methodology

All of which contribute to delivering a desirable, standard result.  Let’s take a deeper look at each:

  1. You hire salesreps with a standard hiring profile, including items such as years of experience, prior target employers or spaces, requisite skills, and personality assessments (e.g., DiSC, Hogan, CCAT).
  2. You give them a standard onboarding program, typically built by a dedicated director of sales productivity, using industry best practices, one to three weeks in length, and accompanied by ongoing clinics.
  3. You have standard support ratios (e.g., each rep gets 1/2 of a sales consultant, 1/3 of an SDR, and 1/6 of a sales manager; see the staffing sketch after this list).  As you grow, your sales model should also use ratios to staff more indirect forms of support such as alliances, salesops, and sales productivity.
  4. You have a standard patch (territory), and a method for creating one, where the rep can be successful.  This is typically a quantitative exercise done by salesops and ideally is accompanied by a patch-warming program [2] such that new reps don’t inherit cold patches.
  5. You have a standard kit, including tools such as collateral, presentations, demos, and templates.  I strongly prefer fewer, better deliverables that reps actually know how to use to the more common deep piles of tools that make marketing feel productive, but that are misunderstood by sales and ineffective.
  6. You have a standard sales methodology that includes how you define and execute the sales process.  These include programs ranging from the boutique (e.g., Selling through Curiosity) to the mainstream (e.g., Force Management) to the classic (e.g., Customer-Centric Selling) and many more.  The purpose of these programs is two-fold:  to standardize language and process across the organization and to remind sales — in a technology feature-driven world — that customers buy products as solutions to problems, i.e., they buy 1/4″ holes, not 1/4″ bits.
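
As a small illustration of how the ratio-driven staffing in point 3 plays out in planning, here’s a minimal sketch using the example ratios above; rounding up to whole heads is my assumption, not a rule.

```python
# A minimal staffing sketch using the illustrative ratios above
# (1 sales consultant per 2 reps, 1 SDR per 3 reps, 1 manager per 6 reps).
# Rounding up to whole heads is an assumption.
import math

SUPPORT_RATIOS = {"sales consultants": 2, "SDRs": 3, "sales managers": 6}

def support_headcount(num_reps: int) -> dict:
    return {role: math.ceil(num_reps / reps_per_head)
            for role, reps_per_head in SUPPORT_RATIOS.items()}

print(support_headcount(16))
# {'sales consultants': 8, 'SDRs': 6, 'sales managers': 3}
```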

And, most important, you can demonstrate that all of the above is delivering some desirable standard result, which will be the topic of the next post.

# # #

Notes

[1] I have a bad habit, which I’ve been slowly overcoming, of accidentally putting real meat on one topic into an aside of a post on a different one.  My favorite example:  it took me ~15 years to create a post on my marketing credo (marketing exists to make sales easier) despite mentioning it in passing in numerous posts.  After reading the prior post, I realized that I’d buried the definition of a repeatable sales model and the tests for having one into a post that was really about applying CMMI to the sales model.  Ergo, as my penance, as a service to future readers, and to help my SEO, I am decomposing that post into three parts and elaborating on it during the restructuring process.

[2] I think of patch-warming as field marketing for fallow patches.  Much as field marketing works to help existing reps in colder patches, why can’t we apply the same concepts to patches that will soon be occupied?  This is an important, yet often completely overlooked, aspect of reducing rep ramping time.

The Evolution of Software Marketing: Hey Marketing, Go Get [This]!

As loyal readers know, I’m a reductionist, always trying to find the shortest, simplest way of saying things even if some degree of precision gets lost in the process and even if things end up more subtle than they initially appear.

For example, my marketing mission statement of “makes sales easier” is sometimes misinterpreted as relegating marketing to a purely tactical role, when it actually encompasses far more than that.  Yes, marketing can make sales easier through tactical means like lead generation and sales support, but marketing can also make sales easier through more leveraged means such as competitive analysis and sales enablement, or even more leveraged means such as influencer relations and solutions development, or the most leveraged means of all:  picking which markets the company competes in and (with product management) designing products to be easily salable within them.

“Make sales easier” does not just mean lead generation and tactical sales support.

So, in this reductionist spirit, I thought I’d do a historical review of the evolution of enterprise software marketing by looking at its top objective during the thirty-odd years (or should I say thirty odd years) of my career, cast through a fill-in-the-blank lens of, “Hey Marketing, go get [this].”

Hey Marketing, Go Get Leads

In the old days, leads were the focus.  They were tracked on paper and the goal was as big a pile as possible.  These were the days of tradeshow models and free beer:  do anything to get people to come by the booth – regardless of whether they had any interest in or ability to buy the software.  Students, consultants, who cares?  Run their card and throw them in the pile.  We’ll celebrate the depth of the pile at the end of the show.

Hey Marketing, Go Get Qualified Leads

Then somebody figured out that all those students and consultants and self-employed people worked at companies way outside the company’s target customer size range and couldn’t actually buy our software.  So the focus changed to getting qualified leads.  At first, qualified basically meant not unqualified:

  • The lead couldn’t be garbage, illegible, or a duplicate
  • The contact couldn’t be self-employed, a student, or a consultant
  • The contact couldn’t be someone who clearly can’t buy the software (e.g., in the wrong country, at too small a company, in a non-applicable industry)

Then people realized that not all not-unqualified leads were the same. 

Enter lead scoring.  The first systems were manual and arbitrarily defined:  e.g., let’s give 10 points for target companies, 10 points for a VP title, and 15 points if they checked buying-within-6-months on the lead form.  Later systems got considerably more sophisticated, adding both firmographic and behavioral criteria (e.g., downloaded the Evaluation Guide).  They’d even have decay functions where downloading a white paper got you 10 points, but you’d lose a point for every week since then in which you had no further activity.
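
To make the mechanics concrete, here’s a minimal sketch of that kind of hand-weighted scoring, decay included.  The point weights come from the example above; the grade thresholds and field names are hypothetical.

```python
# A minimal sketch of old-school, hand-weighted lead scoring with decay.
# Point weights follow the example in the text; grades and field names are hypothetical.
from datetime import date

def score_lead(lead: dict, today: date) -> int:
    score = 0
    if lead.get("target_account"):
        score += 10
    if lead.get("title", "").startswith("VP"):
        score += 10
    if lead.get("buying_within_6_months"):
        score += 15
    if lead.get("downloaded_whitepaper_on"):
        score += 10
        # decay: lose a point per week of inactivity since the last activity
        idle_weeks = (today - lead["last_activity_on"]).days // 7
        score -= min(idle_weeks, 10)
    return max(score, 0)

def grade(score: int) -> str:
    return "A" if score >= 35 else "B" if score >= 25 else "C" if score >= 15 else "D"

lead = {
    "target_account": True,
    "title": "VP Analytics",
    "buying_within_6_months": False,
    "downloaded_whitepaper_on": date(2021, 1, 15),
    "last_activity_on": date(2021, 1, 15),
}
s = score_lead(lead, today=date(2021, 3, 1))
print(s, grade(s))   # 24 C
```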

The problem was, of course, that no one ever did any regressions to see if A leads actually were more likely to close than B leads and so on.  At one company I ran, our single largest customer was initially scored a D lead because the contact downloaded a white paper using his Yahoo email address.  Given such stories and a general lack of faith in the scoring system, operationally nobody ever treated an A lead differently from a D lead – they’d all get “6×6’ed” (6 emails and 6 calls) anyway by the sales development reps (SDRs).  If the score didn’t differentiate the likelihood of closing and the SDR process was score-invariant, what good was scoring? The answer: not much.

Hey Marketing, Go Get Pipeline

Since it was seemingly too hard to figure out what a qualified lead was, the emphasis shifted.  Instead of “go get leads” it became, “go get pipeline.”  After all, regardless of score, the only leads we care about are those that turn into pipeline.  So, go get that.

Marketing shifted emphasis from leads to pipeline as salesforce automation (SFA) systems were increasingly in place that made pipeline easier to track.  The problem was that nobody put really good gates on what it took to get into the pipeline.  Worse yet, incentives backfired as SDRs, who were at the time almost always mapped directly to quota-carrying reps (QCRs), were paid incentives when leads were accepted as opportunities.  “Heck,” thinks the QCR, “I’ll scratch my SDR’s back in order to make sure he/she keeps scratching mine:  I’ll accept a bunch of unqualified opportunities, my SDR will get paid a $200 bonus on each, and in a few months I’ll just mark them no decision.  No harm, no foul.”  Except the pipeline ends up full of junk and the self-fulfilling 3x pipeline coverage prophecy is born.  Unless you have 3x coverage, your sales manager will beat you up, so go get 3x coverage regardless of whether it’s real or not.  So QCRs stuff bad opportunities into the pipeline, which in turn converts at a lower rate, which in turn increases the coverage goal – i.e., “heck, we’re only converting pipeline at 25%, so now we need 4x coverage!”  And so on.

At one point in my career I actually met a company with 100x pipeline coverage and 1% conversion rates. 
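
The coverage spiral is just arithmetic:  if the coverage target is roughly the reciprocal of the pipeline conversion rate, then junk in the pipeline mechanically raises the bar.  A back-of-the-envelope sketch:

```python
# Back-of-the-envelope: the coverage target is roughly the reciprocal of the
# pipeline-to-close conversion rate, so stuffing the pipeline with junk lowers
# conversion and raises the "required" coverage.
for conversion in (0.33, 0.25, 0.10, 0.01):
    required_coverage = 1 / conversion
    print(f"{conversion:.0%} conversion -> ~{required_coverage:.0f}x coverage needed")
# 33% -> ~3x, 25% -> ~4x, 10% -> ~10x, 1% -> ~100x (the company mentioned above)
```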

Hey Marketing, Go Get Qualified Opportunities (SQLs)

Enter the sales qualified lead (SQL).  Companies realize they need to put real emphasis on someone, somewhere in the process, defining what’s real and what’s not.  That someone ends up being the QCR, and it’s now their job to qualify opportunities as they are passed over and to accept only those that both look real and meet documented criteria.  Management is now focused on SQLs.  SQL-based metrics, such as cost-per-SQL or SQL-to-close rate, are created and benchmarked.  QCRs can no longer just accept everything and no-decision it later and, in fact, there’s less incentive to anyway, as SDRs are no longer basically working for the QCRs but instead for “the process,” and they’re increasingly reporting into marketing to boot.  Yes, SDRs will be paid on SQLs accepted by sales, but sales is going to be held highly accountable for what happens to the SQLs they accept.

Hey Marketing, Go Get Qualified Opportunities Efficiently

At this point we’ve got marketing focused on SQL generation and we’ve built a metrics-driven inbound SDR team to process all leads. We’ve eliminated the cracks between sales and marketing and, if we’re good, we’ve got metrics and reporting in place such that we can easily see if leads or opportunities are getting stuck in the pipeline. Operationally, we’re tight.

But are we efficient?  This is also the era of SaaS metrics, and companies are increasingly focused not just on growth, but on growth efficiency.  Customer acquisition cost (CAC) becomes a key industry metric, which puts pressure on both sales and marketing to improve efficiency.  Sales responds by staffing up sales enablement and sales productivity functions.  Marketing responds with attribution as a way to try to measure the relative effectiveness of different campaigns.

Until now, campaign efficiency tended to be measured on a last-touch attribution basis.  So when marketers tried to calculate the effectiveness of various marketing campaigns, they’d get a list of closed deals and allocate the resultant sales to campaigns by looking at the last thing someone did before buying.  The predictable result:  down-funnel campaigns and tools got all of the credit and up-funnel campaigns (e.g., advertising) got none.

People pretty quickly realized this was a flawed way to look at things so, happily, marketers didn’t shoot the propellers off their marketing planes by immediately stopping all top-of-funnel activity. Instead, they kept trying to find better means of attribution.

Attribution systems, like Bizible, came along that tried to capture the full richness of enterprise sales.  That meant modeling many different contacts, over a long period of time, interacting with the company via various mechanisms and campaigns.  In some ways attribution became like search:  it wasn’t whether you got the one right answer, it was whether search engine A helped you find relevant documents better than search engine B.  “Right” was kind of out of the question.  I feel the same way about attribution.  Some folks feel it doesn’t work at all.  My instinct is that there is no “right” answer, but with a good attribution system you can do better at assessing relative campaign efficiency than you can with the alternatives (e.g., first- or last-touch attribution).
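
To see why the model changes the answer, here’s a toy comparison of last-touch versus an even (linear) multi-touch split on a single closed deal.  This is illustrative only, not how Bizible or any particular product actually models attribution.

```python
# Toy comparison of last-touch vs. even (linear) multi-touch attribution for one
# closed-won deal. Illustrative only; real attribution products model far more.
from collections import defaultdict

deal_amount = 100_000
touches = [                   # campaigns in chronological order
    "paid search ad",
    "webinar",
    "whitepaper download",
    "free trial",             # the last touch before the deal closed
]

last_touch = defaultdict(float)
last_touch[touches[-1]] += deal_amount              # all credit to the final touch

linear = defaultdict(float)
for campaign in touches:
    linear[campaign] += deal_amount / len(touches)  # equal credit per touch

print("last-touch:", dict(last_touch))
print("linear    :", dict(linear))
# Last-touch gives the down-funnel "free trial" 100% of the credit; linear gives
# the up-funnel ad 25%. Neither is "right" -- they just rank campaigns differently.
```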

After all, it’s called the marketing mix for a reason.

Hey Marketing, Go Get Qualified Opportunities That Close

After the quixotic dalliance with campaign efficiency, sales got marketing focused back on what mattered most to them.  Sales knew that while the bar for becoming a SQL was now standardized, not all SQLs that cleared it were created equal.  Some SQLs closed bigger, faster, and at higher rates than others.  So, hey marketing, figure out which ones those are and go get more like them.

Thus was born the ideal customer profile (ICP). In seed-stage startups the ICP is something the founders imagine — based on the product and target market they have in mind, here’s who we should sell to. In growth-stage startups, say $10M in ARR and up, it’s no longer about vision, it’s about math.

Companies in this size range should have enough data to be able to say “who are our most successful customers” and “what do they have in common.”  This involves doing a regression between various attributes of customers (e.g., vertical industry, size, number of employees, related systems, contract size, …) and some success criteria.  I’d note that choosing the success criteria to regress against is harder than meets the eye:  when we say we want to find prospects most like our successful customers, how are we defining success?

  • Where we closed a big deal? (But what if it came at really high cost?)
  • Where we closed a deal quickly? (But what if they never implemented?)
  • Where they implemented successfully? (But what if they didn’t renew?)
  • Where they renewed once? (But what if they didn’t renew again because of an uncontrollable factor such as being acquired?)
  • Where they gave us a high NPS score? (But what if, despite that, they didn’t renew?)

The Devil really is in the detail here. I’ll dig deeper into this and other ICP-related issues one day in a subsequent post. Meantime, TOPO has some great posts that you can read.
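
For illustration, here’s a minimal sketch of the growth-stage version of this exercise, using made-up data and made-up fields, with “renewed” as the (debatable, per the list above) success criterion.

```python
# A minimal sketch (made-up data, made-up fields) of regressing customer attributes
# against one chosen success flag -- here, "renewed." The hard part, per the list
# above, is choosing that flag in the first place.
import pandas as pd
from sklearn.linear_model import LogisticRegression

customers = pd.DataFrame({
    "industry":            ["fin", "fin", "retail", "health", "retail", "fin", "health", "retail"],
    "employees_thousands": [5, 12, 0.8, 3, 0.4, 20, 9, 0.15],
    "has_related_system":  [1, 1, 0, 1, 0, 1, 1, 0],
    "renewed":             [1, 1, 0, 1, 0, 1, 1, 0],   # the success criterion we chose
})

X = pd.get_dummies(customers.drop(columns="renewed"), columns=["industry"])
y = customers["renewed"]

model = LogisticRegression(max_iter=1000).fit(X, y)
for feature, coef in sorted(zip(X.columns, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{feature:25s} {coef:+.2f}")
# Positive coefficients suggest attributes to weight up in the ICP; with real data
# you'd also sanity-check sample size, correlation, and causality.
```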

Once you determine what an ideal customer looks like, you can then build a target list of them and enter into the world of account-based marketing (ABM).

Hey Marketing, Go Get Opportunities that Turn into Customers Who Renew

While sales may be focused simply on opportunities that close bigger and faster than the rest, what the company actually wants is happy customers (to spread positive word of mouth) who renew. Sales is typically compensated on new orders, but the company builds value by building its ARR base. A $100M ARR company with a CAC ratio of 1.5 and churn rate of 20% needs to spend $30M on sales and marketing just to refill the $20M lost to churn. (I love to multiply dollar-churn by the CAC ratio to figure out the real cost of churn.)
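
Here’s that arithmetic spelled out, using the same numbers:

```python
# The "real cost of churn" arithmetic above: dollar churn times the CAC ratio is
# the sales and marketing spend required just to get back to even.
arr = 100_000_000      # $100M ARR base
churn_rate = 0.20      # 20% gross dollar churn
cac_ratio = 1.5        # $ of S&M spend per $1 of new ARR

churned_arr = arr * churn_rate            # $20M of ARR lost
refill_cost = churned_arr * cac_ratio     # $30M of S&M just to refill it
print(f"${churned_arr / 1e6:.0f}M churned -> ${refill_cost / 1e6:.0f}M S&M to refill")
```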

What the company wants is customers who don’t churn, i.e., those that have a high lifetime value (LTV).  So marketing should orient its ICP around (i.e., define success in terms of) not just likelihood to {close, close big, close fast} but likelihood to renew, and potentially more than once.  Defining different success criteria may well produce a different ICP.

Hey Marketing, Go Get Opportunities that Turn into Customers Who Expand

In the end, the company doesn’t just want customers who renew, even if for a long time.  To really build the value of the ARR base, the company wants customers who (1) are won relatively easily (win rate) and sold relatively quickly (average sales cycle), (2) renew not just once but multiple times, and (3) expand their contracts over time.

Enter net dollar expansion rate (NDER), the metric that is quickly replacing churn and LTV, particularly with public SaaS companies.  In my upcoming SaaStr 2020 talk, Churn is Dead, Long Live Net Dollar Expansion Rate, I’ll go into why this is happening and why companies should increasingly focus on this metric when it comes to thinking about the long-term value of their ARR base.
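
For readers who haven’t met the metric yet, here’s a minimal sketch of how NDER is typically computed on a cohort basis, with made-up numbers; exact definitions vary a bit from company to company.

```python
# Net dollar expansion rate (NDER) for a customer cohort, with made-up numbers.
# Common cohort form: ARR from last year's customers today (expansion, contraction,
# and churn included) divided by those same customers' ARR a year ago.
cohort = {
    #            ARR a year ago, ARR today (0 means churned)
    "acme":      (100_000, 140_000),   # expanded
    "globex":    ( 80_000,  80_000),   # flat renewal
    "initech":   ( 60_000,       0),   # churned
    "umbrella":  (120_000, 150_000),   # expanded
}

start_arr = sum(start for start, _ in cohort.values())
end_arr = sum(end for _, end in cohort.values())
nder = end_arr / start_arr
print(f"NDER = {end_arr:,} / {start_arr:,} = {nder:.0%}")   # ~103%
```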

In reality, the ultimate ICP is built around customers who meet the three above criteria: we can sell them fairly easily, they renew, and they expand. That’s what marketing needs to go get!