Kellblog’s 2017 Predictions  

New Year’s means three things in my world:  (1) time to thank our customers and team at Host Analytics for another great year, (2) time to finish up all the 2017 planning items and approvals that we need to get done before the sales kickoff (including the one most important thing to do before kickoff), and (3) time to make some predictions for the coming year.

Before looking at 2017, let’s see how I did with my 2016 predictions.

2016 Predictions Review

  1. The great reckoning begins. Correct/nailed.  As predicted, since most of the bubble was tied up in private companies owned by private funds, the unwind would happen in slow motion.  But it’s happening.
  2. Silicon Valley cools off a bit. Partial.  While IPOs were down, you couldn’t see the cooling in anecdotal data, like my favorite metric, traffic on Highway 101.
  3. Porter’s five forces analysis makes a comeback. Partial.  So-called “momentum investing” did cool off, implying more rational situation analysis, but you didn’t hear people talking about Porter per se.
  4. Cyber-cash makes a rise. Correct.  Bitcoin more than doubled on the year (and Ethereum was up 8x), which perversely reinforced my view that these crypto-currencies are too volatile — people want the anonymity of cash without a highly variable exchange rate.  The underlying technology for Bitcoin, blockchain, took off big time.
  5. Internet of Things goes into trough of disillusionment. Partial.  I think I may have been a little early on this one.  Seems like it’s still hovering at the peak of inflated expectations.
  6. Data science rises as profession. Correct/easy.  This continues inexorably.
  7. SAP realizes they are a complex enterprise application company. Incorrect.  They’re still “running simple” and talking too much about enabling technology.  The stock was up 9% on the year in line with revenues up around 8% thus far.
  8. Oracle’s cloud strategy gets revealed – “we’ll sell you any deployment model you want as long as your annual bill goes up.”  Partial.  I should have said “we’ll sell you any deployment model you want as long as we can call it cloud to Wall St.”
  9. Accounting irregularities discovered at one or more unicorns. Correct/nailed.  During these bubbles the pattern always repeats itself – some people always start breaking the rules in order to stand out, get famous, or get rich.  Fortune just ran an amazing story that talks about the “fake it till you make it” culture of some diseased startups.
  10. Startup workers get disappointed on exits. Partial.  I’m not aware of any lawsuits here but workers at many high flyers have been disappointed and there is a new awareness that the “unicorn party” may be a good thing for founders and VCs, but maybe not such a good thing for rank-and-file employees (and executive management).
  11. The first cloud EPM S-1 gets filed. Incorrect.  Not yet, at least.  While it’s always possible someone did the private filing process with the SEC, I’m guessing that didn’t happen either.
  12. 2016 will be a great year for Host Analytics. Correct.  We had a strong finish to the year and emerged stronger than we started with over 600 great customers, great partners, and a great team.

Now, let’s move on to my predictions for 2017 which – as a sign of the times – will include more macro and political content than usual.

  1. The United States will see a level of divisiveness and social discord not seen since the 1960s. Social media echo chambers will reinforce divisions.  To combat this, I encourage everyone to sign up for two publications/blogs they agree with and two they don’t, lest they never again hear both sides of an issue.  (See map below, courtesy of Ninja Economics, for help in choosing.)  On an optimistic note, per UCSD professor Lane Kenworthy, people aren’t getting more polarized; political parties are.

[Image: map of news sources across the political spectrum, courtesy of Ninja Economics]

  2. Social media companies finally step up and do something about fake news. While, per a former Facebook designer, “it turns out that bullshit is highly engaging,” these sites will need to do something to filter, rate, or classify fake news (let alone stop recommending it).  Otherwise they will lose both credibility and readership – and fail to act in a responsible way commensurate with their information dissemination power.
  3. Gut feel makes a comeback. After a decade of Google-inspired, heavily data-driven and A/B-tested management, the new US administration will be less data-driven and more gut-feel-driven in making decisions.  Running against both common sense and the big data / analytics / data science trends, people will grow increasingly skeptical of purely data-driven decisions, and anti-data people will publicize data-driven failures to popularize their arguments.  This “war on data” will build during the year, fueled by Trump, and some of it will spill over into business.  Morale in the Intelligence Community will plummet.
  4. Under a volatile leader, who seems to exhibit all nine of the symptoms of narcissistic personality disorder, we can expect sharp reactions and knee-jerk decisions that rattle markets, drive a high rate of staff turnover in the Executive branch, and fuel an ongoing war with the media. Whether you like his policies or not, Trump will bring a high level of volatility to the country, to business, and to the markets.
  5. With the new administration’s promises of $1T in infrastructure spending, you can expect interest rates to rise and inflation to accelerate. Providing such a stimulus to an already strong economy might well overheat it.  One smart move could be buying a house to lock in historically low interest rates for the next 30 years.  (See my FAQ for disclaimers, including that I am not a financial advisor.)
  6. Huge emphasis on security and privacy. Election-related hacking, including the spearphishing attack on John Podesta’s email, will serve as a major wake-up call to both government and the private sector to get their security act together.  Leaks will fuel major concerns about privacy.  Two-factor authentication using verification codes (e.g., Google Authenticator) will continue to take off, as will encrypted communications.  Fear of leaks will also change how people use email and other written electronic communications; more people will follow the sage advice in this quip:

Dance like no one’s watching; E-mail like it will be read in a deposition

  7. In 2015, if you were flirting on Ashley Madison, you were more likely talking to a fembot than a person.  In 2016, the same could be said of troll bots.  Bots are now capable of passing the Turing Test.  In 2017, we will see more bots for both good uses (e.g., customer service) and bad (e.g., trolling social media).  Left unchecked by the social media powerhouses, bots could damage social media usage.
  8. Artificial intelligence hits the peak of inflated expectations. If you view Salesforce as the bellwether for hyped enterprise technology (e.g., cloud, social), then the next few years are going to be dominated by artificial intelligence.  I’ve always believed that advanced analytics is not a standalone category, but instead fodder that vendors will build into smart applications.  The key is typically not the technology, but the problem to which to apply it.  As Infer founder Vik Singh said of Jim Gray, “he was really good at finding great problems”; the key is figuring out the best problems to solve with a given technology or modeling engine.  Application by application, we will see people searching for the best problems to solve using AI technology.
  9. The IPO market comes back. After a year in which we saw only 13 VC-backed technology IPOs, I believe the window will open and 2017 will be a strong year for technology IPOs.  The usual big-name suspects include firms like Snap, Uber, AirBnB, and Spotify; CB Insights has identified 369 companies as strong 2017 IPO prospects.
  10. Megavendors mix up EPM and ERP or BI. Workday, which has had a confused history when it comes to planning, acquired struggling big data analytics vendor Platfora in July 2016, and seems to have combined analytics and EPM/planning into a single unit.  This is a mistake for several reasons:  (1) EPM and BI are sold to different buyers with different value propositions, (2) EPM is an applications sale while BI is a platform sale, and (3) Platfora’s technology stack, while appropriate for big data applications, is not ideal for EPM/planning (ask Tidemark).  Combining the two puts planning at risk.  Oracle combined their EPM and ERP go-to-market organizations and lost focus on EPM as a result.  While they will argue that they now have more EPM feet on the street, those feet know much less about EPM, leaving them exposed to specialist vendors who maintain a focus on EPM.  ERP is sold to the backward-looking part of finance; EPM is sold to the forward-looking part.  EPM is about 1/10th the market size of ERP.  In combining them, expect EPM to lose out.

And, as usual, I must add the bonus prediction that 2017 proves to be a strong year for Host Analytics.  We are entering the year with positive momentum, the category is strong, cloud adoption in finance continues to increase, and the megavendors generally lack sufficient focus on the category.  We continue to be the most customer-focused vendor in EPM, our new Modeling product gained strong momentum in 2016, and our strategy has worked very well for both our company and the customers who have chosen to put their faith in us.

I thank our customers, our partners, and our team and wish everyone a great 2017.

# # #

 

A Fresh Look at How to Measure SaaS Churn Rates

It’s been nearly three years since my original post on calculating SaaS renewal rates and I’ve learned a lot and seen a lot of new situations since then.  In this post, I’ll provide a from-scratch overhaul of how to calculate churn in an enterprise SaaS company [1].

While we are going to need to “get dirty” in the detail here, I continue to believe that too many people are too macro and too sloppy in calculating these metrics.  The details matter because these rates compound over time, so the difference between a 10% and 20% churn rate turns into a 100% difference in cohort value after 7 years [2].  Don’t be too busy to figure out how to calculate them properly.
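To see the compounding at work, here’s a minimal sketch (in Python, with a hypothetical 100-unit cohort) that reproduces the numbers in note [2]:

```python
# Cohort value decay under a constant annual churn rate.
# Six renewal cycles take a cohort from year one into year seven.
for churn in (0.10, 0.20):
    value = 100.0  # hypothetical starting cohort value
    for _ in range(6):
        value *= (1 - churn)
    print(f"{churn:.0%} churn: {value:.0f} units remaining")  # ~53 vs. ~26
```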

The Leaky Bucket Full of ARR

I conceptualize SaaS companies as leaky buckets full of annual recurring revenue (ARR).  Every time period, the sales organization pours more ARR into the bucket and the customer success (CS) organization tries to prevent water from leaking out [3].

This drives the leaky bucket equation, which I believe should always be the first four lines of any SaaS company’s financial statements:

Starting ARR + new ARR – churn ARR = ending ARR

Here’s an example, where I start with those four lines and add two extra (one to show a year-over-year growth rate and another to show “net new ARR,” which offsets new vs. churn ARR):

[Image: leaky bucket example showing starting ARR, new ARR, churn ARR, ending ARR, YoY growth, and net new ARR]

For more on how to present summary SaaS startup financials, go here.
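As a minimal sketch (numbers hypothetical), the leaky bucket math is simple enough to check in a few lines of Python:

```python
def leaky_bucket(starting_arr, new_arr, churn_arr):
    """Leaky bucket equation: starting ARR + new ARR - churn ARR = ending ARR."""
    ending_arr = starting_arr + new_arr - churn_arr
    net_new_arr = new_arr - churn_arr  # the extra line offsetting new vs. churn
    return ending_arr, net_new_arr

# Hypothetical quarter: $10M starting ARR, $3M new, $1M churned.
ending, net_new = leaky_bucket(10_000_000, 3_000_000, 1_000_000)
print(f"ending ARR = {ending:,}; net new ARR = {net_new:,}")  # 12,000,000 and 2,000,000
```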

Half-Full or Half-Empty:  Renewals or Churn?

Since the renewal rate is simply one minus the churn rate, the question is which one to calculate.  In the past, I favored splitting the difference [4], whereas I now believe it’s simpler just to talk about churn.  While this may be the half-empty perspective, it’s more consistent with what most people talk about and is more directly applicable, because a common use of a churn rate is as a discount rate in a net present value (NPV) formula.

Thus, I now define the world in terms of churn and churn rates, as opposed to renewals and renewal rates.
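Since that NPV use comes up so often, here’s a minimal sketch (assumptions mine) of a churn rate doing duty as a discount rate on the ARR annuity:

```python
def arr_annuity_value(arr, churn_rate, years=10):
    """Sketch: treat churn like a discount rate, so each future year of
    the annuity is worth (1 - churn) of the prior year's value."""
    return sum(arr * (1 - churn_rate) ** t for t in range(years))

# Hypothetical 1,000-unit ARR base at 10% vs. 20% annual churn.
print(round(arr_annuity_value(1000, 0.10)))  # ~6513
print(round(arr_annuity_value(1000, 0.20)))  # ~4463
# In the infinite-horizon limit, this is just the perpetuity arr / churn_rate.
```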

Terminology: Shrinkage and Expansion

For simplicity, I define the following two terms:

  • Shrinkage = anything that makes ARR decrease. For example, if the customer dropped seats or was given a discount in return for signing a multi-year renewal [5].
  • Expansion = anything that makes ARR increase, such as price increases, seat additions, upselling from a bronze to a gold edition, or cross-selling new products.

Key Questions to Consider

The good news is that any churn rate calculation is going to be some numerator over some denominator.  We can then start thinking about each in more detail.

Here are the key questions to consider for the numerator:

  • What should we count? Number of accounts, annual recurring revenue (ARR), or something else like renewal bookings?
  • If we’re counting ARR should we think at the product-level or account-level?
  • To what extent should we offset shrinkage with expansion in calculating churn ARR? [6]
  • When should we count what? What about early and late renewals?  What about along-the-way expansion?  What about churn notices or non-payment?

Here are the key questions to consider for the denominator:

  • Should we use the entire ARR pool, that portion of the ARR pool that is available to renew (ATR) in any given time period, or something else?
  • If using the ATR pool, for any given renewing contract, should we use its original value or its current value (e.g., if there has been upsell along the way)?

What Should We Count?  Logos and ARR

I believe the two metrics we should count in churn rates are

  • Logos (i.e., number of customers). This provides a gross indication of customer satisfaction [7] unweighted by ARR, so you can answer the question:  what percent of our customer base is turning over?
  • ARR (annual recurring revenue). This provides a very important indication of the value of our SaaS annuity.  What is happening to our ARR pool?

I would stay completely away from any SaaS metrics based on bookings (e.g., a bookings CAC, TCV, or bookings-based renewals rate).  These run counter to the point of SaaS unit economics.

Gross, Net, and Account-Level Churn

Let’s look at a quick example to demonstrate how I now define gross, net, and account-level churn [8].

[Image: example of gross, net, and account-level churn across several accounts]

Gross churn is the sum of all the shrinkage. In the example, 80 units.

Net churn is the sum of the shrinkage minus the sum of the expansion. In the example, 80-70 = 10 units.

To calculate account-level churn, we proceed account by account and look at the change in contract value, separating upsell from churn.  The idea is that while it’s OK to offset shrinkage with expansion within an account, we should not do so across accounts when working at the account level [9].  This has the effect of splitting expansion into offset (used to offset shrinkage within an account) [10] and upsell (leftover expansion after all account-level shrinkage has been offset).  In the example, account-level churn is 30 units.

Note the important point here that how we calculate churn – and specifically how we use expansion ARR to offset shrinkage – affects not only our churn rates but our reported upsell rates as well.  Should we proudly claim 70 units of upsell (and less proudly 80 units of churn), 30 units of churn and 20 of upsell, or simply 10 units of churn?  I vote for the second.
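Here’s a minimal sketch of the three calculations; the account names and per-account figures are hypothetical, chosen to reproduce the 80/10/30 (and 20 upsell) numbers above:

```python
# Per-account quarterly ARR changes, split into shrinkage and expansion.
accounts = {
    "alpha": {"shrinkage": 80, "expansion": 50},  # nets to -30 within the account
    "bravo": {"shrinkage": 0, "expansion": 20},   # pure upsell
}

gross_churn = sum(a["shrinkage"] for a in accounts.values())               # 80
net_churn = gross_churn - sum(a["expansion"] for a in accounts.values())   # 10

# Account-level: offset expansion against shrinkage only within each account;
# leftover shrinkage is churn, leftover expansion is upsell.
account_churn = sum(max(a["shrinkage"] - a["expansion"], 0) for a in accounts.values())  # 30
upsell = sum(max(a["expansion"] - a["shrinkage"], 0) for a in accounts.values())         # 20

print(gross_churn, net_churn, account_churn, upsell)  # 80 10 30 20
```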

While working at the account level may seem odd, it is how most SaaS companies work operationally.  First, because they charter customer success managers (CSMs) to think at the account level, working account by account and doing everything they can to preserve and/or increase the value of each account.  Second, because most systems work, and most finance people think, at the account level – e.g., “we had a customer worth 100 units last year and they are worth 110 units this year, so that means upsell of 10 units.  I don’t care how much is price increase vs. swapping some of product A for product B.” [11]

So, when a SaaS company reports “churn ARR” in its leaky bucket analysis, I believe it should report neither gross churn nor net churn, but account-level churn ARR.

Timing Issues and the Available to Renew (ATR) Concept

Churn calculations bring some interesting challenges such as early/late renewals, churn notices, non-payment, and along-the-way expansion.

A renewals booking should always be taken in the period in which it is received.  If a contract expires on 6/30 and the renewal is received on 6/15, it should show up in 2Q; if received on 7/15, it should show up in 3Q.

For churn rate calculations, however, the customer success team needs to forecast what is going to happen with a late renewal.  For example, if we have a board meeting on 7/12 and a $150K ARR renewal due 6/30 has not yet happened, we need to proceed based on what the customer has said.  If the customer is actively using the software and the CFO has promised a renewal but is tied up on a European vacation, I would mark the numbers “preliminary” and count the contract as renewed.  If, however, the customer has not used the software in months and will not return our phone calls, I would count the contract as churned.

Suppose we receive a churn notice on 5/1 for a contract that renews on 6/30.  When should we count the churn?  A Bessemer SaaS fanatic would point to their definition of committed monthly recurring revenue (CMRR) [12] and say we should remove the contract from the MRR base on 5/1.  While I agree with Bessemer’s views in general — and specifically on things like preferring ARR/MRR to ACV and TCV — I get off the bus on the whole notion of “committed” ARR/MRR and the ensuing need to remove the contract on 5/1.  Why?

  • In point of fact the customer has licensed and paid for the service through 6/30.
  • The company will recognize revenue through 6/30 and it’s much easier to do so correctly when the ARR is still in the ARR base.
  • Operationally, it’s defeatist. I don’t want our company to give up and say “it’s over, take them out of the ARR base.” I want our reaction to be, “so they think they don’t want to renew – we’ve got 60 days to change their mind and keep them in.” [13]

We should use the churn notice (and, for that matter, every other communication with the customer) as a way of improving our quarterly churn forecast, but we should not count churn until the contract period has ended, the customer has not renewed, and the customer has maintained their intent not to renew in the weeks that follow.

Non-payment, while hopefully infrequent, is another tricky issue.  What do we do if a customer gives us a renewal order on 6/30, payable in 30 days, but hasn’t paid after 120?  While the idealist in me wants to match the churn ARR to the period in which the contract was available to renew, I would probably just show it as churn in the period in which we gave up hope on the receivable.

Expansion Along the Way (ATW)

Non-payment starts to introduce the idea of timing mismatches between ARR-changing events and renewals cohorts.  Let’s consider a hopefully more frequent case:  ARR expansion along the way (ATW).  Consider this example.

[Image: example of along-the-way expansion within a renewals cohort]

To decide how to handle this, let’s think operationally, both about how our finance team works and, more importantly, about how we want our customer success managers (CSMs) to think.  Remember that we want CSMs to each own a set of customers, and we want them not only to protect the ARR of each customer but to expand it over time.  If we credit along-the-way upsell in our rate calculations at renewal time, we are shooting ourselves in the foot.  Look at customer Charlie.  He started out with 100 units and bought 20 more in 4Q15, so as we approach renewal time, Charlie actually has 120 units available to renew (ATR), not 100 [14].  We want our CSMs basing their success on the 120, not the 100.  So the simple rule is to base everything not on the original cohort but on the available to renew (ATR) entering the period.

This raises two questions:

  • When do we count the along-the-way upsell bookings?
  • How can we reflect those 40 units in some sort of rate?

The answer to the first question is, as your finance team will invariably conclude, to count them as they happen (e.g., in 4Q15 in the above example).

The answer to the second question is to use a retention rate, not a churn rate.  Retention rates are cohort-based, so to calculate the net retention rate for the 2Q15 cohort, we divide its present value of 535 by its original value of 500 and get 107%.

Never, ever calculate a retention rate in reverse – i.e., starting with a group of current customers and looking backwards at their ARR one year ago.  You will produce a survivor-biased answer which, stunningly, I have seen some public companies publish.  Always run cohort analyses forwards to eliminate survivor bias.
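Here’s a minimal sketch of the forward calculation (the customer names are hypothetical, but the cohort totals match the 2Q15 example above):

```python
# Forward-looking net retention: fix the cohort at its start, then measure
# its current ARR. Never select on today's survivors and look backwards.
cohort_2q15_start = {"alice": 200, "bob": 200, "charlie": 100}  # original ARR
cohort_2q15_now = {"alice": 215, "bob": 200, "charlie": 120}    # current ARR; churned accounts would count as 0

net_retention = sum(cohort_2q15_now.values()) / sum(cohort_2q15_start.values())
print(f"{net_retention:.0%}")  # 107% (535 / 500)
```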

Off-Cycle Activity

Finally, we need to consider how to address off-cycle (or extra-cohort) activity in calculating churn and related rates.  Let’s do this by using a big picture example that includes everything we’ve discussed thus far, plus off-cycle activity from two customers who are not in the 2Q16 ATR cohort:  (1) Foxtrot, who purchased in 3Q14, renewed in 3Q15, and who has not paid, and (2) George, who purchased in 3Q15, who is not yet up for renewal, but who purchased 50 units of upsell in 2Q16.

[Image: big picture example combining on-cycle renewals with off-cycle activity from Foxtrot and George]

Foxtrot should count as churn in 2Q16, the period in which we either lost hope of collection or our collections policy dictated that we de-book the deal. [15]

George should count as expansion in 2Q16, the period in which the expansion booking was taken.

The trick is that neither Foxtrot nor George is on a 2Q renewal cycle, so neither is included in the 2Q16 ATR cohort.  I believe the correct way to handle this is:

  • Both should be factored into gross, net, account-level churn, and upsell.
  • For rates where we include them in the numerator, for consistency’s sake we must also include them in the denominator. That means putting the shrinkage in the numerator and adding the ATR of a shrinking (or lost) account to the denominator of the rate calculation.  I’ll call this the “+” concept, and define ATR+ as inclusive of such additional logos or ARR resulting from off-cycle accounts [16].

Rate Calculations

We are now in a position to define and calculate the churn rates that I use and track (with a code sketch after the list):

  • Simple churn = net churn / starting period ARR * 4.  Or, in English, the net change in ARR from existing customers divided by starting period ARR (multiplied by 4 to annualize the rate which is measured against the entire ARR base). As the name implies, this is the simplest churn rate to calculate. This rate will be negative whenever expansion is greater than shrinkage. Starting period ARR includes both ATR and non-ATR contracts (including potentially multi-year contracts) so this rate takes into account the positive effects of the non-cancellability of multi-year deals.  Because it takes literally everything into account, I think this is the best rate for valuing the annuity of your ARR base.
  • Logo churn = number of discontinuing logos / number of ATR+ logos. This rate tells us the percent of customers who, given the chance, chose to discontinue doing business with us.  As such, it provides an ARR-unweighted churn rate, providing the best sense of “how happy” our customers are, knowing that there is a somewhat loose correlation between happiness and renewal [17].  Remember that ATR+ means to include any discontinuing off-cycle logos, so the calculation is 1/16 = 6.3% in our example.
  • Retention = current ARR [time cohort] / time-ago ARR [time cohort]. In English, the current ARR from some time-based cohort (e.g., 2Q15) divided by the year-ago ARR from that same cohort.  Typically we do this for the one-year-ago or two-years-ago cohorts, but many companies track each quarter’s new customers as a cohort which they measure over time.  Like simple churn, this is a great macro metric that values the ARR annuity, all in.
  • Net churn = account-level churn / ATR+. This churn rate foots to the reported churn ARR in our leaky bucket analysis (which is account-level churn), which partially offsets shrinkage with expansion at an account level, and is how most SaaS companies actually calculate churn.  While perhaps counter-intuitive, it reflects a philosophy of examining, on an account basis, what happens to the value of each of our customers when we allow shrinkage to be offset by expansion (which is what we want our CSM reps doing), leaving any excess as upsell.  This should be our primary churn metric.
  • Gross churn = shrinkage / ATR+. This churn rate is important because it reveals the difference between companies that have high shrinkage offset by high expansion and companies which simply have low shrinkage.  While net churn is powerful because it’s “all in,” any metric that enables offset can allow one thing to mask another.  Gross churn is a great metric because it simply shows the glass-half-empty view:  at what rate is ARR leaking out of your bucket before you offset it with refills in the form of expansion ARR.
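Pulling these together, here’s a hedged sketch of the rate calculations (the inputs are hypothetical, reusing the 80/70/30 example; retention is cohort-based, so see the earlier cohort sketch for that one):

```python
def churn_rates(starting_arr, shrinkage, expansion, account_churn,
                atr_plus_arr, lost_logos, atr_plus_logos):
    """Sketch of the quarterly rates defined above; simple churn is
    annualized by multiplying the quarterly rate by 4."""
    return {
        # net change from existing customers over the whole ARR base, annualized
        "simple": (shrinkage - expansion) / starting_arr * 4,
        # ARR-unweighted: share of customers who, given the chance, chose to leave
        "logo": lost_logos / atr_plus_logos,
        # account-level churn over the available-to-renew pool, plus off-cycle
        "net": account_churn / atr_plus_arr,
        # all shrinkage, before any expansion offset
        "gross": shrinkage / atr_plus_arr,
    }

print(churn_rates(starting_arr=10_000, shrinkage=80, expansion=70,
                  account_churn=30, atr_plus_arr=1_600,
                  lost_logos=1, atr_plus_logos=16))
# {'simple': 0.004, 'logo': 0.0625, 'net': 0.01875, 'gross': 0.05}
```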

# # #

Notes

[1] Replacing these posts in the process.

[2] The 10% churn group decays from 100 units to 53 in value after 7 years, while the 20% group decays to 26.

[3] We’ll sidestep the question of who is responsible for installed-based expansion in this post because companies answer it differently (e.g., sales, customer success, account management) and the good news is we don’t need to know who gets credited for expansion to calculate churn rates.

[4] Discussing churn in dollars and renewals in rates.

[5] For example, if a customer signed a one-year contract for 100 units and then was offered a 5% discount to sign a three-year renewal, you would generate 5 units of ARR churn.

[6] Or, as I said in a prior post, should I net first or sum first?

[7] And yes, sometimes unhappy customers do renew (e.g., if they’ve been too busy to replace you) and happy customers don’t (e.g., if they get a new key executive with different preferences) but counting logos still gives you a nice overall indication.

[8] Note that I have capitulated to the norm of saying “gross” churn means before offset and thus “net” churn means after netting out shrinkage and expansion.  (Beware confusion as this is the opposite of my prior position where I defined “net” to mean “net of expansion,” i.e., what I’d now call “gross.”)

[9] Otherwise, you just end up with a different way of calculating net churn.  The idea of account-level churn is to restrict the ability to offset shrinkage with expansion across accounts, in effect, telling your customer success reps that their job is to, contract by contract, minimize shrinkage and ensure expansion.

[10] “Offset” meaning ARR used to offset shrinkage that ends up neither churn nor upsell.

[11] While this approach works fine for most (inherently single-product) SaaS startups, it does not work as well for large multi-product SaaS vendors, where the failure of product A might be totally or partially masked by the success of product B.  (In our example, I deliberately had all the shrinkage coming from downsell of product A to make that point.  The product or general manager for product A should own the churn number for that product and be trying to find out why it churned 80 units.)

[12] MRR = monthly recurring revenue = 1/12th of ARR.  Because enterprise SaaS companies typically run on an annual business rhythm, I prefer ARR to MRR.

[13] Worse yet, if I churn them out on 5/1 and do succeed in changing their mind, I might need to recognize it as “new ARR” on 6/30, which would also be wrong.

[14] The more popular way of handling this would have been to try and extend the original contract and co-terminate with the upsell in 4Q16, but that doesn’t affect the underlying logic, so let’s just pretend we tried that and it didn’t work for the customer.

[15] Whether you call it a de-booking or bad receivable, Foxtrot was in the ARR base and needs to come out.  Unlike the case where the customer has paid for the period but is not using the software (where we should churn it at the end of the contract), in this case the 3Q15 renewal was effectively invalid and we need to remove Foxtrot from the ARR base at some defined number of days past due (e.g., 90) or when we lose hope of collection (e.g., bankruptcy).

[16] I think the smaller you are the more important this correction is to ensure the quality of your numbers.  As a company gets bigger, I’d just drop the “+” concept whenever it’s only changing things by a rounding error.

[17] Use NPS surveys for another, more precise, way of measuring happiness.  See [7] as well.

SaaS Startup One-Slide Financials Dashboard

In the course of my board and advisory work, I get to look at a lot of software-as-a-service (SaaS) startup financials and I’m often surprised how people choose to present their companies.

Because people — e.g., venture capital (VC) investors — judge you by the metrics you track, the order in which you track them, and how clearly you present them, I think it’s very important to put real thought into how you want to present your company’s financials and key operating metrics on one slide.

As both an author and analytics enthusiast, I also believe in minimalism and reader empathy.  We should neither bury the reader in facts nor force them to perform basic calculations that answer easily anticipated questions.

I always try to remember this Blaise Pascal quote (which is often misattributed to Mark Twain):

I would have written you a shorter letter, but I did not have time to do so.

So, in this spirit, let me offer my one-slide SaaS startup financials and key operating metrics dashboard, which captures all the key high-level questions I’d have about any enterprise SaaS company.

[Image: SaaS startup one-slide financial and key operating metrics dashboard]

While this is certainly not a complete set of SaaS metrics, it provides a great summary of the state of your annual recurring revenue (ARR), your trajectory, your forecast, and your performance against plan.  Most important, perhaps, it shows that you are focused on the right thing by starting with 5 lines dedicated not to TCV, bookings, or GAAP revenue, but the key value driver for any SaaS business:  ARR.

If you like it, you can download the spreadsheet here.

The Opportunity Cost of Debating Facts

I read this New York Times editorial this morning, How the Truth Got Hacked, and it reminded me of a situation at work, back when I first joined Host Analytics some four years ago.  This line, in particular, caught my attention:

Imagine the conversation we’d be having if we weren’t debating facts.

Back when I joined Host Analytics, we had an unfortunate but not terribly unusual dysfunction between product management (PM) and Engineering (ENG).  By the time the conflict got to my office, it went something like this:

PM:  “ENG said they’d deliver X, Y, and Z in the next release and now they’re only delivering X and half of Y.  I can’t believe this, and what am I going to tell the customers and analysts who I told that we were delivering …”

ENG:  “PM is always asking us to deliver too much and we never actually committed to deliver all of Y and we certainly didn’t commit to deliver Z.”

(For extra fun, compound this somewhat normal level of dysfunction with American vs. Indian communication style differences – including a quite subtle way of saying “no” – and you’ll see the real picture.)

I quickly found myself in a series of “he said, she said” meetings that were completely unproductive.  “We don’t write down commitments because we’re agile,” was one refrain.  In fact, while I agree that the words “commitment” and “agile” generally don’t belong in the same sentence, we were anything but agile at the time, so I viewed the statement more as a convenient excuse than an expression of true ideological conflict.

But the thing that bugged me the most was that we had endless meetings where we couldn’t even agree on basic facts.  After all, we either had a planning problem, a delivery problem, or both, and unless we could establish what we’d actually agreed to deliver, we couldn’t determine where to focus our efforts.  The meetings were a waste of time.  I had no way of knowing who said what to whom, we didn’t have great tracking systems, and I had no interest in email forensics to try and figure it out.  Worse yet, it seemed that two people could leave the same meeting not even agreeing on what was decided.

Imagine the conversation we’d be having if we weren’t debating facts.

In the end, it was clear that we needed to overhaul the whole process, but that would take time.  The question was, in the short term, could we do something that would end the unproductive meetings, get basic facts into evidence, and let us have a productive debate at the next level?  You know, to try and make some progress on solving our problems?

I created a document called the Release Scorecard and Commitments document that contained two tables, each structured like this.

[Image: release scorecard and commitments table]

At the start of each release, we’d list the major stories that we were trying to include and we’d have Engineering score their confidence in delivering each one of them.  Then, at the end of every release, PM would score how the delivery went, and the team could provide a comment.  Thus, at every post-release roadmap review, we could review how we did on the prior release and agree on priorities for the next one.  Most importantly, when it came to reviewing the prior release, we had a baseline off which we could have productive discussions about what did or did not happen during the cycle.

Suddenly, by taking the basic facts out of question, the meetings changed overnight.  First, they became productive.  Then, after we fully transitioned to agile, they became unnecessary.  In fact, I’ve since repeatedly said that I don’t need the document anymore because it was a band-aid artifact of our pre-agile world.  Nevertheless, the team still likes producing it for the simple clarity it provides in assessing how we do at laying out priorities and then delivering against them.

So, if you find yourself in a series of unproductive, “he said, she said” meetings, learn this lesson:  do something to get basic facts into evidence so you can have a meaningful conversation at the next level.

Because there is a massive opportunity cost when all you do is debate what should be facts.

EPM: Now More Than Ever

The theme of my presentation at this past spring’s Host Analytics World was that EPM is needed in fair, foul, or uncertain weather.  While EPM is used differently in fair and foul weather scenarios, it is a critical navigational instrument to help pilot the business.

For example, in tougher times:

  • You’re constantly re-forecasting
  • You’re doing expense reduction modeling
  • You might do a zero-based budget (particularly popular among recently PE-acquired firms)
  • You’re likely to try and reduce capex (unless you see a quick rebound)
  • You’re probably making P&L, budget, and spend authority more centralized in order to keep tighter reins on the company.

In better times:

  • You model and compare new growth opportunities
  • You often build trended budgets more than bottom-up budgets
  • You adopt rolling forecasts
  • You increase capital investment and build for the future
  • You do more strategic initiatives planning
  • You decentralize P&L responsibility

These (and others) are all capabilities of a complete EPM suite.  The point is that you use that suite differently depending on the state of the business and the economy.

Well, now with the surprise election of our 45th President, Donald Trump, we can be certain of one thing:  uncertain times.

  • Will massive investments in infrastructure (including, but not limited to, The Wall) happen, and what effect will that have on economic growth and interest rates?
  • Will Trump deliver the 4% GDP growth he’s promised, or will the economy grow more slowly?
  • Will promised deregulation happen and if so will it accelerate economic growth?  What effects will deregulation have on key industries like financial services, energy, and raw materials?
  • What, as a result of this and foreign policies, will be the price of a barrel of oil in one year?  What effect will that have on key industries such as transportation?
  • Will Trump spark a trade war, increasing the price of imports and reducing the purchasing power of low- and middle-income consumers?  What effect might a trade war have on GDP growth?
  • What impact will all this have on financial markets and the cost and availability of capital?

I don’t pretend to know the answers to these questions.  I do know, however, that there is uncertainty about all of them – and dozens of others – that will directly impact businesses in their performance and planning.

If you cannot predict the future, you should at least be able to respond to it in an agile way.

If your company takes 6 months to make a budget that gets changed once a year, you will be very exposed to surprise changes.  If you run on rolling forecasts, you will be far more agile.  If you have good EPM tools, you will be able to automate tasks like reporting, consolidation, and forecasting in order to free up time for the now much more important tasks of scenario planning and modeling.

Again, if you can’t know whether oil will be $40, $50, or $70, you can at least model out all three scenarios in advance so you can react quickly when the price moves.
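As a deliberately crude, hypothetical sketch of what “modeled out in advance” might look like (every name and number below is invented for illustration):

```python
# Pre-built scenarios so the plan can pivot the day the oil price moves.
scenarios = {
    "oil_40": {"fuel_cost": 4.0e6, "revenue_growth": 0.06},
    "oil_50": {"fuel_cost": 5.0e6, "revenue_growth": 0.05},
    "oil_70": {"fuel_cost": 7.0e6, "revenue_growth": 0.03},
}

def operating_income(base_revenue, base_opex, s):
    """Crude one-line model: grow revenue, add scenario fuel cost to opex."""
    return base_revenue * (1 + s["revenue_growth"]) - (base_opex + s["fuel_cost"])

for name, s in scenarios.items():
    print(name, f"{operating_income(100e6, 80e6, s):,.0f}")
```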

I’ve always been a big believer in planning and EPM.  And, in this uncertain environment, companies need EPM now more than ever.

A Key Lesson Marketers Can Learn from Donald Trump

While we won’t go into my views on the election here, I will say that all marketers and solution sellers can learn one “yuge” lesson from Donald Trump:  understanding your audience and talking to them in their terms will take you a long, long way.

I’ve always said that solution selling entails getting the customer to conclude three things:

  1. They understand my problem.
  2. They can solve my problem.
  3. I want to work with them.

I put this in reverse form (i.e., calling the company “they”) as a reminder that these are not assertions — they are conclusions.  These are three conclusions that we want the customer to draw.  Asserting them is probably one of the worst ways to get customers to conclude them.  So how might we get a customer to conclude these things?

They Understand My Problem

How might we lead someone to conclude that our organization understands their problem?

  • Hire people who have had the customer’s job and walked in their footsteps.
  • Speak to the customer in their own language about the problem.
  • Actively listen to the customer, playing back what they are telling you about the problem.
  • Complete their sentences, saying “and I bet you saw this problem next.”

The ultimate goal is to get the customer to think “Holy Cow, these people might understand my problem even better than I do.”

They Can Solve My Problem

There are several ways to get someone to conclude you can solve their problem:

  • Talking about similar reference customers — where similar is defined in the mind of the buyer — whose problems you have solved.
  • Bringing in staff who have worked on solving those very problems.  Telling Pearson, “oh, when we were over at McGraw-Hill we worked on the XYZ system.”
  • Filling in requirements documents, though beware that these are often, dare I say, “rigged” by the vendors who got in first as they attempt to put their differentiators on the agenda.
  • Performing a prototype or proof of concept (POC) that shows how key requirements are met using your solution.

I Want To Work With Them

How do you get someone to conclude they want to work with you?

  • Execute the basics:  show up on time, be prepared, do your homework, communicate status.  (I’m stunned how many people screw up these things and still expect to win.)
  • Be reliable.  Say what you’ll do and do what you say.  Customers want to know they can count on you.  Don’t surprise them.
  • Be personal, build relationships, get to know people, and make them understand you want their business and care about their success.

Back To Trump

Now I have always believed that the first of these tests was the most important:  getting someone to believe you understand their problem.  But Trump has taken my belief in this to a whole new level.

By driving hard on two fronts:

  • A huge dose of “I understand your problem” — with his speeches aimed at a segment of the public who feels unacknowledged and misunderstood, he energizes crowds largely by simply active-listening their problems back to them.
  • With a small dose of “I want to work with him” — the whole political outsider, straight-talking guy image.

He has been able to “get the order” from a large number of Americans without providing much detail at all about the second — and one would think rationally very important — point:  the “I can solve your problems” point.  Put differently, I’d say he put nearly 100% of his eggs in the “I understand your problem” basket and virtually none in the “I can solve it” basket (i.e., a huge amount of what and a stunning lack of how when it comes to policy).

This is all more proof that by simply demonstrating that you understand the customer’s problem and by being someone the customer wants to work with, you can get the order without actually convincing them that you can solve the problem.

In most corporate sales cycles people incorrectly assume all the importance is on the second point — can they solve the problem?  In reality, salespeople and marketers should put emphasis on all three points and on leading the customer to conclude, in this order, that:

  • They understand my problem
  • I want to work with them
  • They can solve my problem

[Reposted and slightly revised post election.]

How to Manage Your First Sales VP at a Startup

One of the hardest hires — and one of the hardest jobs — is to be the first VP of sales at a startup.  Why?

  • There is no history / experience
  • Nobody knows what works and what doesn’t work
  • The company may not have a well defined strategy so it’s hard to make a go-to-market strategy that maps to it
  • Any strategy you choose is somewhat complex because it needs to leave room for experimentation
  • If things don’t work the strong default tendency is to blame the VP of sales and sales execution, and not strategy or product.  (Your second VP of sales gets to blame product or strategy — but never your first.)

It’s a tough job, no doubt.  But it’s also tough for a founder or new CEO to manage the first sales VP.

  • The people who sign up for this high-risk duty are often cocksure and difficult to manage
  • They tend to dismiss questions with experience-based answers (i.e., “well, we did thing X at company Y and it worked”) that make everything sound easy.
  • They tend to smokescreen issues with such dismissals in order to give themselves maximum flexibility.
  • Most founders know little about sales; they’ve typically never worked in sales and it’s not taught in (business) school.

I think the best thing a founder can do to manage this is to conceptually separate two things:

  • How well the sales VP implements the sales model agreed to with the CEO and the board.
  • Whether that model works.

For example, if your team agrees that it wants to focus on Defense as its beachhead market, but still opportunistically experiment horizontally, then you might agree with the sales VP to build a model that creates a focused team on the Department of Defense (DoD) and covers the rest of the country horizontally with an enterprise/corporate split.  More specifically, you might decide to:

  • Create a team of 3 quota-carrying reps (QCRs) selling to the DoD, each with 10+ years of experience selling to the DoD and ideally holding top secret clearances, supported by 2 sales consultants (SCs) and 2 business development reps (BDRs), with the entire team located in a Regus office in McLean, VA and everyone living within a one-hour commute of that office.
  • Hire 2 enterprise QCRs, one for the East and one for the West, the former in McLean and the latter in SF, each calling only on $1B+ revenue companies, each supported by 1 local SC, and 2 BDRs, where the BDRs are located at corporate (in SF).  Each enterprise QCR must have 10+ years experience selling software in the company’s category.
  • Hire 2 corporate reps in SF, each sharing 1 SC, and supported by 2 BDRs calling on sub $1B revenue companies.  Each corporate rep must have 5+ years experience selling software in the category.

In addition, you would create specific hiring profiles for each role ideally expressed with perhaps 5-10 must-have and 3-5 nice-to-have criteria.

Two key questions:

  • Do we know if this is going to work?  No, of course not.  It’s a startup.  We have no customers, data, or history.  We’ve taken our best guess based on understanding the market and the customers.  But we can’t possibly know if this is going to work.
  • Can we tell if the sales VP is executing it?  Yes.  And you can hold him/her accountable for so doing.  That’s the point.

At far too many startups, the problem is not decomposed in this manner, the specifics are not spelled out, and here’s what happens instead.  The sales VP says:

The plan?  Yes, let me tell you the plan.  I’m going to put boots down in several NFL cities, real sales athletes mind you, the best.  People I’ve worked with who made $500K, $750K, or even $1M in commissions back at Siebel or Salesforce or Oracle.  The best.  We’re going to support those athletes with the best SCs we can find, and we’re going to create an inside sales and SDR team that is bar none, world-class.  We’re going to set standard quotas and ramps and knock this sonofabitch out of the park.  I’ve done this before, I’m matching the patterns, trust me, this is going to be great.

Translation:  we’re going to hire somewhere between 4 and 8 salespeople who I have worked with in the past and who were successful at other companies, regardless of whether they have expertise in our space, have the skills required in our space, or are located where our strategy indicates they should be.  Oh, and since I know a great pharma rep, we’re going to make pharma a territory, and even though he moved to Denver after living in New Jersey, we’ll just fly him out when we need to.  Oh, and the SDRs, I know a great one in Boise and one in Austin.  Yes, and the inside reps, Joe, Joey, Joey-The-Hacksaw was a killer back in the day, and even though he’s always on his bass boat and living in Michigan now, we’re going to hire him even though technically speaking our inside reps are supposed to be in SF.

This, as they say in England, is a “dog’s breakfast” of a sales model.  And when it doesn’t work — and the question is when, not if — what has the company learned?  Precisely and absolutely zero.

If you’re a true optimist, you might say we’ve learned that a bunch of random decisions to hire old cronies scattered across the country with no regard for strategy, models, or hiring profiles, doesn’t work.  But wait a minute — you knew that already; you didn’t need to spend $10M in VC to find out.  (See my post, If We Can’t Have Repeatable Success Can We At Least Have Repeatable Failure?)

By making the model clear — and quite specific as in my example above — you can not only flush out any disagreements in advance, but you can also hold the sales VP accountable for building the model they say they are going to build.  With a squishy model, as my other example shows, you can never actually know because it’s so vague you can’t tell.

This approach actually benefits both sides:

  • The CEO benefits because he/she doesn’t get pushed around into agreeing to a vague model that he/she doesn’t understand.  By focusing on specifics the CEO gets to think through the proposed model and decide whether he/she likes it.
  • The Sales VP benefits as well.  While he/she loses some flexibility because hiring can’t be totally opportunistic, on the flip side, if the Sales VP implements the agreed-to model and it doesn’t work, he/she is not totally alone and to blame.  It’s “we failed,” not “you failed.”  Which might lead to a second chance for the sales VP to implement a new model.