
Handling Conflict with the “Disagree and Commit” and “New Information” Principles

In every executive team there are going to be times when people don’t agree on certain important strategic or operational decisions.  Some examples:

  • Should we split SDRs inbound vs outbound?
  • Should we map SCs to reps or pool them?
  • How should we split upsell vs new business focus in mid-market reps?
  • Should CSMs get paid on upsell or only renewals?
  • Should we put the new buzzword (e.g., AI, ML, social) into the release plan?
  • Should we change the company logo?

The purpose of this post is to provide a framework to get decisions made and executed, without certain decisions becoming a form of weekly nagging at the e-staff meeting, a topic of discussion at every board meeting, or worst of all, a standing joke among the team.

The Disagree-and-Commit Principle

The first time I heard disagree-and-commit I thought it was corporate doublespeak garbage.  What the heck did it mean?  I’m supposed to go to a meeting, say that I believe we should go left, get overrun by the group that eventually decides to go right, and then I’m supposed to say “sure, everybody, just kidding, let’s go right.”  How disingenuous — everybody knows I wanted to go left.  How controlling of the establishment.  How manipulative.  This is thought control!

“You may disagree, but you must conform … (wait, was that our outside voice) …  you must commit.”

(Recall that my first professional job was at a company we referred to as The People’s Republic of Ingres.)

Let’s just say I missed the point.  My older, wiser self now thinks it’s a great, but often misunderstood, rule.  (And that’s not just because now I am the establishment.)

Here’s a nice definition of disagree-and-commit from The Amazon Way via this blog post.

Leaders are obligated to respectfully challenge decisions when they disagree, even when doing so is uncomfortable or exhausting. Leaders have conviction and are tenacious. They do not compromise for the sake of social cohesion. Once a decision is determined, they commit wholly.

I always missed two things:

  • I took commit to mean change your mind (or “get your mind right” in the Cool Hand Luke sense). It actually means committing to execute the decision wholly, i.e., as if it were the one you had voted for.  You can’t undermine or sabotage the decision just to prove yourself right.  This is a great rule.  People aren’t always going to agree, but if you want to work at the company, you must execute our decisions wholeheartedly once they are made.  There is no other option.
  • The obligation to disagree.  I love this part because some people lack the courage to speak up in the meeting, and then want to passive-aggressively work against the decision and/or attempt a pocket veto by going to the person who was in charge of the meeting and saying, “well, I didn’t feel comfortable saying this in the meeting, but ….”  Such behavior creates a potential paradox for the executive in charge — particularly if she agrees with the pocket-veto argument.  Does she overrule the group decision based on the new argument (and reward dysfunctional behavior), or does she stick with a decision she no longer prefers in order to avoid incenting pocket vetoes?  In my opinion, in 95% of the cases you want to say, “Sorry Joe, I wish you’d said something in the meeting because that’s an interesting point, but the decision stands.”  Worst case, call another meeting.  Never, ever just overrule the decision.

Explicitly embracing the disagree-and-commit principle is one great way to end endless, nagging disagreements:  we met to discuss the issue, we came to a conclusion, I know you didn’t agree with it, but you need to commit to execute it wholeheartedly.  (Else we’re going to have a conversation about insubordination.)  We want a rational culture.  We debate ideas.  But we need to make and execute decisions, and you’re not going to agree with every one.

The New Information Principle

But what if the issue keeps coming up anyway?  Perhaps via periodic serious requests to reconsider the decision.  Perhaps through a series of objections coming from someone not responsible for executing the decision (so “commit” is less relevant) — but who just can’t stand the idea.  Or maybe someone has a personal ax to grind (e.g., “I know we’ve talked about this before, but can we please relocate the office?”) and just won’t take no for an answer.

The problem is that if you always shut down these requests, you risk hurting corporate agility.  On one hand you want to shut down the constant nagging about adding data mining capabilities from the data mining zealot.  On the other hand, you don’t want to make the subject taboo, because maybe your top competitor launched a new data-mining addition last month and it’s hurting you in sales.

So, the principle is simple:  if you want to re-open discussion on something we’ve already decided, do you have any new information that wasn’t available at the time we made the decision?

If the answer is no, we’re not re-opening it here; we can do so at either next quarter’s ops review or next year’s strategy offsite (pending prioritization against other topics).

If the answer is yes, find out what the new information is, and then decide if it warrants an immediate or deferred re-examination of the decision.
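For the process-minded, the rule is simple enough to write down as pseudocode.  Below is a minimal sketch in Python; the function name and return strings are my own illustration, not part of the principle itself.

```python
def handle_reopen_request(has_new_information: bool, warrants_immediate_look: bool) -> str:
    """Apply the new-information principle to a request to re-open a settled decision."""
    if not has_new_information:
        # No new facts since the decision was made: the decision stands for now,
        # and the topic can be queued for next quarter's ops review or the strategy offsite.
        return "decision stands; queue for ops review or strategy offsite"
    if warrants_immediate_look:
        # New information that materially changes the picture: re-examine now.
        return "re-examine the decision immediately"
    # New information, but nothing urgent: defer the re-examination.
    return "re-examine the decision at the next scheduled review"
```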

With this principle you can keep a firm hand against those who won’t give up on an issue while still being open to new information that might warrant a valid re-examination of it.

In-Memory Analytics: The Other Kind – A Key Success Factor for Your Career

I’m not going to talk about columnar databases, compression, horizontal partitioning, SAP Hana, or real-time vs. pre-aggregated summarization in this post on in-memory analytics.  I’m going to talk about the other kind of in-memory analytics.  The kind that can make or break your career.

What do you mean, the other kind of in-memory analytics?  Quite simply, the kind you keep in your head (i.e., in human memory).  Or, better put, the kind you should be expected to keep in your head and be able to recite on demand in any business meeting.

I remember when I worked at Salesforce, I covered for my boss a few times at the executive staff meeting when he was traveling or such.  He told me:  “Marc expects everyone to know the numbers, so before you go in there, make sure you know them.”  And I did.  On the few times I attended in his place, I made a cheat sheet and studied it for an hour to ensure that I knew every possible number that could reasonably be asked.  I’d sit in the meeting, saying little, and listening to discussion not directly related to our area.  Then, boom, out of left field, Marc asked:  “what is the Service Cloud pipeline coverage ratio for this quarter in Europe?”

“3.4,” I replied succinctly.  If I hadn’t known the number, I’m sure it would have been an exercise in plucking the wings off a butterfly.  But I did, so the conversation quickly shifted to another topic, and I lived to fight another day.
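As an aside for anyone who doesn’t carry this metric around: pipeline coverage is conventionally computed as open pipeline divided by the remaining target for the period.  The post doesn’t spell out Salesforce’s exact formula, so treat the sketch below, numbers included, as a generic illustration.

```python
def pipeline_coverage(open_pipeline: float, target: float, closed_to_date: float = 0.0) -> float:
    """Conventional definition: open pipeline divided by what is left to book in the period."""
    return open_pipeline / (target - closed_to_date)

# Hypothetical numbers: $34M of open pipeline against a $10M remaining quarterly target.
print(round(pipeline_coverage(open_pipeline=34e6, target=10e6), 1))  # -> 3.4
```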

Frankly, I was happy to work in an organization where executives were expected to know — in their heads, in an instant — the values of the key metrics that drive their business.  In weak organizations you constantly hear “can I get back to you on that” or “I’m going to need to look that one up.”

If you want to run a business, or a piece of one,  and you want to be a credible leader — especially in a metrics-driven organization — you need to have “in-memory” the key metrics that your higher-ups and peers would expect you to know.

This is as true of a CEO pitching a venture capitalist and being asked about CAC ratios and churn rates as it is of a marketing VP being asked about keywords, costs, and conversions in an online advertising program.  Or a sales manager being asked about their forecast.

In fact, as I’ve told my sales directors a time or two:  “I should be able to wake you up at 3:00 AM and ask your forecast, upside, and pipeline and you should be able to answer, right then, instantly.”

That’s an in-memory metric.  No “let me check on that.”  No “I’ll get back to you.”  No “I don’t know, let me ask my ops guy,” which always makes me think:  who runs the department, you or the ops guy — and if you need to ask the ops guy for all the numbers, maybe he/she should be running the department and not you?

I have bolded the word “expect” four times above because this issue is indeed about expectations and expectations are not a precise science.  So, how can you figure out the expectations for which analytics you should hold in-memory?

  • Look at your department’s strategic goals and determine which metrics best measure progress on them.
  • Ask peers inside the company what key metrics they keep in-memory and design your set by analogy.
  • Ask peers who perform the same job at different companies what key metrics they track.
  • When in doubt, ask the boss or the higher-ups what metrics they expect you to know.

Finally, I should note that I’m not a big believer in the whole “cheat sheet” approach I described above.  Because that was a special situation (covering for the boss), I think the cheat sheet was smart, but the real way to burn these metrics into your memory is to track them every week at your staff meeting, watching how they change week by week and constantly comparing them to prior periods and to a plan/model if you have one.

The point here is not “fake it until you make it” by running your business in a non-metrics-focused way and memorizing figures before a big meeting, but instead to burn the metrics review into your own weekly team meeting and then, naturally, over time you will know these metrics so instinctively that someone can wake you up at 3:00 AM and you can recite them.

That’s the other kind of in-memory analytics.  And, much as I love technology, the more important kind for your career.

Kellblog’s 2017 Predictions  

New Year’s means three things in my world:  (1) time to thank our customers and team at Host Analytics for another great year, (2) time to finish up all the 2017 planning items and approvals that we need to get done before the sales kickoff (including the one most important thing to do before kickoff), and (3) time to make some predictions for the coming year.

Before looking at 2017, let’s see how I did with my 2016 predictions.

2016 Predictions Review

  1. The great reckoning begins. Correct/nailed.  As predicted, since most of the bubble was tied up in private companies owned by private funds, the unwind would happen in slow motion.  But it’s happening.
  2. Silicon Valley cools off a bit. Partial.  While IPOs were down, you couldn’t see the cooling in anecdotal data, like my favorite metric, traffic on Highway 101.
  3. Porter’s five forces analysis makes a comeback. Partial.  So-called “momentum investing” did cool off, implying more rational situation analysis, but you didn’t hear people talking about Porter per se.
  4. Cyber-cash makes a rise. Correct.  Bitcoin more than doubled on the year (and Ethereum was up 8x), which perversely reinforced my view that these crypto-currencies are too volatile — people want the anonymity of cash without a highly variable exchange rate.  The underlying technology for Bitcoin, blockchain, took off big time.
  5. Internet of Things goes into trough of disillusionment. Partial.  I think I may have been a little early on this one.  Seems like it’s still hovering at the peak of inflated expectations.
  6. Data science rises as profession. Correct/easy.  This continues inexorably.
  7. SAP realizes they are a complex enterprise application company. Incorrect.  They’re still “running simple” and talking too much about enabling technology.  The stock was up 9% on the year in line with revenues up around 8% thus far.
  8. Oracle’s cloud strategy gets revealed – “we’ll sell you any deployment model you want as long as your annual bill goes up.”  Partial.  I should have said “we’ll sell you any deployment model you want as long as we can call it cloud to Wall St.”
  9. Accounting irregularities discovered at one or more unicorns. Correct/nailed.  During these bubbles the pattern always repeats itself – some people always start breaking the rules in order to stand out, get famous, or get rich.  Fortune just ran an amazing story that talks about the “fake it till you make it” culture of some diseased startups.
  10. Startup workers get disappointed on exits. Partial.  I’m not aware of any lawsuits here but workers at many high flyers have been disappointed and there is a new awareness that the “unicorn party” may be a good thing for founders and VCs, but maybe not such a good thing for rank-and-file employees (and executive management).
  11. The first cloud EPM S-1 gets filed. Incorrect.  Not yet, at least.  While it’s always possible someone did the private filing process with the SEC, I’m guessing that didn’t happen either.
  12. 2016 will be a great year for Host Analytics. Correct.  We had a strong finish to the year and emerged stronger than we started with over 600 great customers, great partners, and a great team.

Now, let’s move on to my predictions for 2017 which – as a sign of the times – will include more macro and political content than usual.

  1. The United States will see a level of divisiveness and social discord not seen since the 1960s. Social media echo chambers will reinforce divisions.  To combat this, I encourage everyone to sign up for two publications/blogs they agree with and two they don’t, lest they never again hear both sides of an issue. (See map below, courtesy of Ninja Economics, for help in choosing.)  On an optimistic note, per UCSD professor Lane Kenworthy, people aren’t getting more polarized, political parties are.

[Map of news sources, courtesy of Ninja Economics]

  2. Social media companies finally step up and do something about fake news. While, per a former Facebook designer, “it turns out that bullshit is highly engaging,” these sites will need to do something to filter, rate, or classify fake news (let alone stop recommending it).  Otherwise they will lose both credibility and readership – as well as fail to act in a responsible way commensurate with their information dissemination power.
  3. Gut feel makes a comeback. After a decade of Google-inspired heavily data-driven and A/B-tested management, the new US administration will increasingly be less data-driven and more gut-feel-driven in making decisions.  Riding against both common sense and the big data / analytics / data science trends, people will be increasingly skeptical of purely data-driven decisions and anti-data people will publicize data-driven failures to popularize their arguments.  This “war on data” will build during the year, fueled by Trump, and some of it will spill over into business.  Morale in the Intelligence Community will plummet.
  4. Under a volatile leader, who seems to exhibit all nine of the symptoms of narcissistic personality disorder, we can expect sharp reactions and knee-jerk decisions that rattle markets, drive a high rate of staff turnover in the Executive branch, and fuel an ongoing war with the media.  Whether you like his policies or not, Trump will bring a high level of volatility to the country, to business, and to the markets.
  5. With the new administration’s promises of $1T in infrastructure spending, you can expect interest rates to rise and inflation to accelerate. Providing such a stimulus to an already strong economy might well overheat it.  One smart move could be buying a house to lock in historically low interest rates for the next 30 years.  (See my FAQ for disclaimers, including that I am not a financial advisor.)
  6. Huge emphasis on security and privacy. Election-related hacking, including the spear-phishing attack on John Podesta’s email, will serve as a major wake-up call to both government and the private sector to get their security act together.  Leaks will fuel major concerns about privacy.  Two-factor authentication using verification codes (e.g., Google Authenticator) will continue to take off, as will encrypted communications.  Fear of leaks will also change how people use email and other written electronic communications; more people will follow the sage advice in this quip:

Dance like no one’s watching; E-mail like it will be read in a deposition

  7. In 2015, if you were flirting on Ashley Madison you were more likely talking to a fembot than a person.  In 2016, the same could be said of troll bots.  Bots are now capable of passing the Turing Test.  In 2017, we will see more bots for both good uses (e.g., customer service) and bad (e.g., trolling social media).  Left unchecked by the social media powerhouses, bots could damage social media usage.
  8. Artificial intelligence hits the peak of inflated expectations. If you view Salesforce as the bellwether for hyped enterprise technology (e.g., cloud, social), then the next few years are going to be dominated by artificial intelligence.  I’ve always believed that advanced analytics is not a standalone category, but instead fodder that vendors will build into smart applications.  The key is typically not the technology, but the problem to which to apply it.  As Infer founder Vik Singh said of Jim Gray, “he was really good at finding great problems” — the key is figuring out the best problems to solve with a given technology or modeling engine.  Application by application, we will see people searching for the best problems to solve using AI technology.
  9. The IPO market comes back. After a year in which we saw only 13 VC-backed technology IPOs, I believe the window will open and 2017 will be a strong year for technology IPOs.  The usual big-name suspects include firms like Snap, Uber, AirBnB, and Spotify.  CB Insights has identified 369 companies as strong 2017 IPO prospects.
  10. Megavendors mix up EPM and ERP or BI. Workday, which has had a confused history when it comes to planning, acquired struggling big data analytics vendor Platfora in July 2016, and seems to have combined analytics and EPM/planning into a single unit.  This is a mistake for several reasons:  (1) EPM and BI are sold to different buyers with different value propositions, (2) EPM is an applications sale while BI is a platform sale, and (3) Platfora’s technology stack, while appropriate for big data applications, is not ideal for EPM/planning (ask Tidemark).  Combining the two puts planning at risk.  Oracle combined their EPM and ERP go-to-market organizations and lost focus on EPM as a result.  While they will argue that they now have more EPM feet on the street, those feet know much less about EPM, leaving them exposed to specialist vendors who maintain a focus on EPM.  ERP is sold to the backward-looking part of finance; EPM is sold to the forward-looking part.  EPM is about 1/10th the market size of ERP.  ERP and EPM have different buyers and use different technologies.  In combining them, expect EPM to lose out.

And, as usual, I must add the bonus prediction that 2017 proves to be a strong year for Host Analytics.  We are entering the year with positive momentum, the category is strong, cloud adoption in finance continues to increase, and the megavendors generally lack sufficient focus on the category.  We continue to be the most customer-focused vendor in EPM, our new Modeling product gained strong momentum in 2016, and our strategy has worked very well for both our company and the customers who have chosen to put their faith in us.

I thank our customers, our partners, and our team and wish everyone a great 2017.

# # #

 

Managing Change: The Sailboat Tack Principle

Change is hard in business.  A few things routinely get messed up:

  • Pulling the trigger.  Think:  “wait, are we still discussing this change or did we just decide to do it?”  I can’t tell you the number of times I’ve heard that quote in meetings.  I think continuous partial attention is part of the problem.  Sometimes, it’s just straight-up confusion as the enthusiasm for a new idea ebbs and flows in a group conversation.  It can be hard to tell if we’ve decided to change or if everyone’s just excited about the idea.
  • Next-level engagement.  Think:  “wait, I know we all like this idea on the exec staff, but this decision affects a lot of people at the next level.  I need some time to bounce this off my leadership team and get their input before we go ready/fire/aim on this.”
  • Communications.  Think:  “wait, this change is a big deal and I know we just spent every minute of the three-hour meeting deciding to do it, but we need to find another hour to discuss key messaging (5W+2H) for both the internal and external audience.”
  • Anticipatory execution.  Think:  “While we had not yet finally approved the proposal for the new logo, it was doing very well in feedback and I just loathed the idea of making 5000 bags with the old logo on them, so I used the new one even though it wasn’t approved yet.”

When you screw up change, a lot of bad things happen.

  • Employees get confused about the company’s strategy.  “First they said, we were doing X, and then the execs did an about-face.  I don’t understand.”
  • The external market, including your customers, gets confused about what you are doing.  This is even worse.
  • You can end up with 5,000 bags that have neither your old logo nor your new logo on them.
  • You can make your management team look like the Keystone Cops in one of many ways through screwing up sequencing:  like dropping off boxes before the big move is announced, or employees finding out they’ve been laid off because their keycards stop working.

In order to avoid confusion about change and the mistakes that come with it, I’ve adopted a principle I call the “sailboat tack principle,” which I use whenever we are contemplating major change.  (We can define major as any change that, if poorly executed, will make the management team look like clowns to employees, customers, or other stakeholders.)

If you’ve ever gone sailing you may have noticed there is a strict protocol involved in a tack.  When the skipper wants to execute a tack, he or she runs the following protocol.

Skipper:  “Ready about”

Each crew member:  “Ready”

Last crew member:  “Ready”

Skipper:  “Helm’s alee.”

That is, the skipper does not actually begin the maneuver until every involved crew member has indicated they are ready.  This prevents partial execution, people getting hit in the head with booms, and people getting knocked off the boat.  It also implicitly distinguishes discussing a possible course change (e.g., “I think we should set course in that direction”) from actually executing one (e.g., “Ready about”).

For those with CS degrees, the sailboat tack principle is a two-phase commit protocol, used commonly in distributed transaction processing systems.
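For those same CS-degree holders, here is a minimal sketch of the analogy in Python: the coordinator (skipper) issues the execute command (“helm’s alee”) only after every participant (crew member) has voted ready in the prepare phase.  The class and function names are illustrative, not a real transaction-manager API.

```python
class CrewMember:
    """A participant in the two-phase commit: votes in phase one, executes in phase two."""

    def __init__(self, name: str):
        self.name = name

    def prepare(self, maneuver: str) -> bool:
        # Phase one ("Ready about"): check whether we can execute, then vote.
        print(f"{self.name}: Ready")
        return True

    def commit(self, maneuver: str) -> None:
        # Phase two ("Helm's alee"): actually execute the change.
        print(f"{self.name}: executing {maneuver}")


def tack(crew: list[CrewMember], maneuver: str = "tack") -> bool:
    """Two-phase commit: execute the maneuver only if every participant voted ready."""
    print("Skipper: Ready about")
    if not all(member.prepare(maneuver) for member in crew):
        # Any 'no' vote aborts the maneuver; nobody gets hit by the boom.
        print("Skipper: holding course")
        return False
    print("Skipper: Helm's alee")
    for member in crew:
        member.commit(maneuver)
    return True


tack([CrewMember("Bow"), CrewMember("Mainsheet"), CrewMember("Jib trimmer")])
```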

I like the sailboat tack protocol because the extra discipline causes a few things to happen automatically.

  • People know implicitly when we’re just talking about course changes.  (Because no one is saying “OK, so do we want to tack here?”)
  • People know explicitly when we are actually making the decision whether to execute change.
  • The result of that extra warning — “hey, we are about to do this” — triggers numerous very healthy “wait a minute” reactions.  Wait a minute:  I need to ask my team, I need to make a communications plan, I need to examine the compensation impact, I need to think about what order we roll this out in, etc.

Why, as CEO, I Love Driver-Based Planning

While driver-based planning is a bit of an old buzzword (the first two Google hits date to 2009 and 2011 respectively), I am nevertheless a huge fan of driver-based planning not because the concept was sexy back in the day, but because it’s incredibly useful.  In this post, I’ll explain why.

When I talk to finance people, I tend to see two different definitions of driver-based planning:

  • Heavy in detail, one where you build a pretty complete bottom-up budget for an organization and play around with certain drivers, typically with a strong bias towards what they have historically been.  I would call this driver-based budgeting.
  • Light in detail, one where you struggle to find the minimum set of key drivers around which you can pretty accurately model the business and where the drivers tend to be figures you can benchmark in the industry.  I call this driver-based modeling.

While driver-based budgeting can be an important step in building an operating plan, I am actually a bigger fan of driver-based modeling.  Budgets are very important, no doubt.  We need them to plan our business, align our team, hold ourselves accountable for spending, drive compensation, and make our targets for the year.  Yes, a good CEO cares about that as a sine qua non.

But a great CEO is really all about two things:

  • Financial outcomes (and how they create shareholder value)
  • The future (and not just next year, but the next few)

The ultimate purpose of driver-based models is to be able to answer questions like:  what happens to key financial outcomes like revenue growth, operating margins, and cashflow given a set of driver values?

I believe some CEOs are disappointed with driver-based planning because their finance team have been showing them driver-based budgets when they should have been showing them driver-based models.

The fun part of driver-based modeling is trying to figure out the minimum set of drivers you need to successfully build a complete P&L for a business.  As a concrete example, I can build a complete, useful model of a SaaS software company off the following minimum set of drivers:

  • Number and type of salesreps
  • Quota/productivity for each type
  • Hiring plans for each type
  • Deal bookings mix for each (e.g., duration, prepayments, services)
  • Intra-quarter bookings linearity
  • Services margins
  • Subscription margins
  • Sales employee types and ratios (e.g., 1 SE per 2 salesreps)
  • Marketing as % of sales or via a set of funnel conversion assumptions (e.g., responses, MQLs, oppties, win rate, ASP)
  • R&D as % of sales
  • G&A as % of sales
  • Renewal rate
  • AR and AP terms

With just those drivers, I believe I can model almost any SaaS company.  In fact, without the more detailed assumptions (rep types, marketing funnel), I can pretty accurately model most.
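To make that concrete, below is a deliberately minimal sketch of such a model in Python, using a handful of the drivers above at quarterly granularity (and the “all bookings land at quarter-end” simplification described later in this post).  Every driver value is a made-up illustration, not a benchmark.

```python
# Minimal driver-based model of a SaaS P&L.  Quarterly granularity; all
# driver values are illustrative assumptions, not benchmarks.
drivers = {
    "starting_arr":        10_000_000,  # ARR entering the year
    "reps":                10,          # ramped salesreps at start of year
    "rep_hires_per_qtr":   2,           # hiring plan
    "annual_quota":        700_000,     # new ARR per ramped rep per year
    "renewal_rate":        0.90,        # gross ARR renewal rate
    "subscription_margin": 0.80,
    "marketing_pct_rev":   0.15,        # marketing as % of revenue
    "rnd_pct_rev":         0.20,        # R&D as % of revenue
    "gna_pct_rev":         0.10,        # G&A as % of revenue
    "loaded_rep_cost":     250_000,     # fully loaded annual cost per rep
}

def model_year(d: dict, quarters: int = 4) -> dict:
    """Roll the drivers forward one year and return the outcomes a CEO cares about."""
    arr, reps = d["starting_arr"], d["reps"]
    revenue = cost_of_rev = opex = 0.0
    for _ in range(quarters):
        q_rev = arr / 4                             # recognize the ARR we entered the quarter with
        revenue += q_rev
        cost_of_rev += q_rev * (1 - d["subscription_margin"])
        opex += (reps * d["loaded_rep_cost"] / 4
                 + q_rev * (d["marketing_pct_rev"] + d["rnd_pct_rev"] + d["gna_pct_rev"]))
        new_arr = reps * d["annual_quota"] / 4      # bookings from rep count x quota
        churn = arr * (1 - d["renewal_rate"]) / 4   # renewal-rate driver
        arr += new_arr - churn                      # bookings land at quarter-end
        reps += d["rep_hires_per_qtr"]              # hiring-plan driver
    op_income = revenue - cost_of_rev - opex
    return {"ending_arr": arr, "revenue": revenue, "operating_margin": op_income / revenue}

print(model_year(drivers))
```

The payoff of a model like this is that you can change renewal_rate or annual_quota and immediately see what happens to ending ARR and operating margin, which is exactly the “what happens to the outcomes given a set of driver values” question above.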

Finance types sometimes forget that the point of driver-based modeling is not to build a budget, so it doesn’t have to be perfect.  In fact, the more perfect you make it, the heavier and more complex it gets.  For example, intra-quarter bookings linearity (i.e., % of quarterly bookings by month) makes a model more accurate in terms of cash collections and monthly cash balances, but it also makes it heavier and more complex.

Like each link in Marley’s chains, each driver adds to the weight of the model, making it less suited to its ultimate purpose.  Thus, with the addition of each driver, you need to ask yourself — for the purposes of this model, does it add value?  If not, throw it out.

One of the most useful models I ever built assumed that all orders came in on the last day of quarter.  That made building the model much simpler and any sales before the last day of the quarter — of which we hope there are many — become upside to the conservative model.

Often you don’t know in advance how much impact a given driver will make.  For example, sticking with intra-quarter bookings linearity, it doesn’t actually change much when you’re looking at quarter granularity a few years out.  However, if your company has a low cash balance and you need to model months, then you should probably keep it in.  If not, throw it out.

This process makes model-building highly iterative.  Because the quest is not to build the most accurate model but the simplest, you should start out with a broad set of drivers, build the model, and then play with it.  If the financial outcomes with which you’re concerned (and it’s always a good idea to check with the CEO on which these are — you can be surprised) are relatively insensitive to a given driver, throw it out.
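One way to make that insensitivity test mechanical is to bump one driver at a time and watch how far the outcome moves.  The sketch below extends the hypothetical model_year example above, so it is illustrative only.

```python
def sensitivity(d: dict, driver: str, outcome: str = "operating_margin", bump: float = 0.10) -> float:
    """How far does the outcome move when a single driver moves +/- 10%?"""
    base = model_year(d)[outcome]
    up   = model_year({**d, driver: d[driver] * (1 + bump)})[outcome]
    down = model_year({**d, driver: d[driver] * (1 - bump)})[outcome]
    return max(abs(up - base), abs(down - base))

# Drivers whose +/-10% swing barely moves the outcome are candidates to throw out.
for name in drivers:
    print(f"{name:22s} {sensitivity(drivers, name):.4f}")
```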

Finance people often hate this both because they tend to have “precision DNA” which runs counter to simplicity, and because they have to first write and then discard pieces of their model, which feels wasteful.  But if you remember the point — to find the minimum set of drivers that matter and to build the simplest possible model to show how those key drivers affect financial outcomes — then you should discard pieces of the model with joy, not regret.

The best driver-based models end up with drivers that are easily benchmarked in the industry.  Thus, the exercise becomes:  if we can converge to a value of X on industry benchmark Y over the next 3 years, what will it do to growth and margins?  And then you need to think about how realistic converging to X is — what about your specific business means you should converge to a value above or below the benchmark?

At Host Analytics we do a lot of driver-based modeling and planning internally.  I can say it helps me enormously as CEO to think about industry benchmarks, future scenarios, and how we create value for the shareholders.  In fact, my models don’t stop at the P&L; they go on to implied valuation given growth/profit and ultimately calculate a range of share prices on the bottom line.

The other reason I love driver-based planning is more subtle.  Much as number theory helps you understand the guts of numbers in mathematics, so does driver-based modeling help you understand the guts of your business — which levers really matter, and how much.

And that knowledge is invaluable.