I’ve seen startups try numerous ways to calculate their sales capacity. Most are too back-of-the-envelope and too top-down for my taste. Such models are, in my humble opinion, dangerous because the combination of relatively small errors in ramping, sales productivity, and sales turnover (with associated ramp resets) can result in a relatively big mistake in setting an operating plan. Building off quota, instead of productivity, is another mistake for many reasons.
Sales productivity, measured in ARR/rep at steady state (i.e., after a rep is fully ramped). This is not quota (what you ask them to sell); this is productivity (what you actually expect them to sell), and it should be based on historical reality, with perhaps incremental, well-justified annual improvement.
Rep hiring plans, measured by new hires per quarter, which should be realistic in terms of your ability to recruit and close new reps.
Rep ramping, typically a vector that gives the percentage of steady-state productivity in the rep’s first, second, third, and fourth quarters. This should be based on historical data as well.
Rep turnover, the annual rate at which sales reps leave the company for either voluntary or involuntary reasons.
Judgment, the model should have the built-in ability to let the CEO and/or sales VP manually adjust the output, and it should provide analytical support for doing so.
Quota over-assignment, the extent to which you assign more quota at the “street” level (i.e., the sum of the reps) than the operating plan targets.
For extra credit and to help maintain organizational alignment: while you’re making a bookings model, a little extra math lets you set pipeline goals for the company’s core pipeline generation sources, so I recommend doing so.
If your company is large or complex you will probably need to create an overall bookings model that aggregates models for the various pieces of your business. For example, inside sales reps tend to have lower quotas and faster ramps than their external counterparts, so you’d want to make one model for inside sales, another for field sales, and then sum them together for the company model.
In this post, I’ll do two things: I’ll walk you through what I view as a simple-yet-comprehensive productivity model and then I’ll show you two important and arguably clever ways in which to use it.
Walking Through the Model
Let’s take a quick walk through the model. Cells in Excel “input” format (orange and blue) are either data or drivers that need to be entered; uncolored cells are either working calculations or outputs of the model.
You seed the model for 1Q20 (let’s pretend we’re making the model in December 2019) by entering what we expect to start the year with in terms of sales reps by tenure (column D). The “first/hired quarter” row represents our hiring plans for the year. The rest of this block is a waterfall that ages the reps downward as we move across quarters. Next to that block sits the ramp assumption, which expresses, as a percentage of steady-state productivity, how much we expect a rep to sell as their tenure with the company increases. I’ve modeled a pretty slow ramp that takes five quarters to get to 100% productivity.
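For readers who prefer code to cells, the waterfall logic can be sketched in a few lines of Python. This is an illustrative sketch, not the spreadsheet itself: the ramp vector, hiring, and attrition numbers below are invented, and attrition is taken entirely from steady-state reps, matching the model’s conservative assumption.

```python
# Hypothetical sketch of the rep-aging waterfall: each quarter, reps move one
# tenure bucket to the right, new hires enter the first bucket, and turnover
# removes fully ramped reps. All numbers here are illustrative.

RAMP = [0.0, 0.25, 0.50, 0.75, 1.0]  # % of steady-state productivity by tenure quarter

def age_reps(buckets, new_hires, quarterly_attrition):
    """Advance the tenure waterfall by one quarter.

    buckets[i] = reps in their (i+1)-th quarter; the last bucket is steady state.
    Attrition is (conservatively) taken entirely from steady-state reps.
    """
    # Reps in the last two buckets both land at steady state next quarter.
    steady = buckets[-1] + buckets[-2] - quarterly_attrition
    return [new_hires] + buckets[:-2] + [max(steady, 0)]

# Start of year: 5 new, 3 second-quarter, 2 third-quarter, 2 fourth-quarter, 8 steady-state
buckets = [5, 3, 2, 2, 8]
buckets = age_reps(buckets, new_hires=4, quarterly_attrition=1)
print(buckets)  # [4, 5, 3, 2, 9]
```

The point of writing it out is to see how small per-quarter errors in hiring or attrition compound across the year, which is exactly why back-of-the-envelope versions are dangerous.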
To the right of that we have more assumptions:
Annual turnover, the annual rate at which sales reps leave the company for any reason. This drives attriting reps in row 12, which silently assumes that every departing rep was at steady state, a tacit and fairly conservative assumption in the model.
Steady-state productivity, how much we expect a rep to actually sell per year once they are fully ramped.
Quota over-assignment. I believe it’s best to start with a productivity model and uplift it to generate quotas.
The next block down calculates ramped rep equivalents (RREs), a very handy concept that far too few organizations use to convert the ramp-state to a single number equivalent to the number of fully ramped reps. The steady-state row shows the number of fully ramped reps, a row that board members and investors will frequently ask about, particularly if you’re not proactively showing them RREs.
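The RRE calculation itself is just a weighted sum of the tenure buckets against the ramp vector. A minimal sketch, with illustrative numbers:

```python
# Ramped rep equivalents (RREs): a dot product of the tenure buckets with the
# ramp vector. The ramp and headcounts below are made up for illustration.

RAMP = [0.0, 0.25, 0.50, 0.75, 1.0]  # % of steady-state productivity by tenure quarter

def ramped_rep_equivalents(buckets, ramp=RAMP):
    """Convert a tenure waterfall into the equivalent number of fully ramped reps."""
    return sum(n * pct for n, pct in zip(buckets, ramp))

buckets = [4, 5, 3, 2, 9]  # reps by tenure quarter; last bucket is steady state
print(ramped_rep_equivalents(buckets))  # 13.25
```

So a 23-head organization here carries the selling power of only 13.25 fully ramped reps, which is why boards ask about steady-state heads when you don’t proactively show RREs.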
After that we calculate “productivity capacity,” which is a mouthful, but I want to disambiguate it from quota capacity, so it’s worth the extra syllables. Below that, I add a critical row called judgment, which allows the Sales VP or CEO to play with the model so that they’re not signing up for targets that are straight model output, but targets also informed by their knowledge of the state of the deals and the pipeline. Judgment can be negative (reducing targets), positive (increasing targets), or zero-sum, where you keep the same annual target but allocate it differently across quarters.
The section in italics, linearity and growth analysis, is there to help the Sales VP analyze the results of using the judgment row. After changing targets, he/she can quickly see how the target is spread out across quarters and halves, and how any modifications affect both sequential and quarterly growth rates. I have spent many hours tweaking an operating plan using this part of the sheet, before presenting it to the board.
The next row shows quota capacity, which uplifts productivity capacity by the over-assignment percentage assumption higher up in the model. This represents the minimum quota the Sales VP should assign at street level to have the assumed level of over-assignment. Ideally this figure dovetails into a quota-assignment model.
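The uplift is simple multiplication, but the direction matters: over-assignment is applied on top of productivity capacity, not lopped off quota. A hedged sketch, with made-up productivity numbers:

```python
# Sketch of the productivity-to-quota uplift. Note that "20% over-assignment"
# means quota = productivity * 1.20 (not productivity = quota * 0.80, which
# would actually be 25% over-assignment). Figures are illustrative.

def quota_capacity(productivity_capacity, over_assignment=0.20):
    """Minimum street-level quota needed to preserve the assumed cushion."""
    return productivity_capacity * (1 + over_assignment)

rre = 13.25                          # ramped rep equivalents this quarter
steady_state_productivity = 700_000  # illustrative ARR per rep per year
quarterly_capacity = rre * steady_state_productivity / 4
print(round(quarterly_capacity))                   # 2318750
print(round(quota_capacity(quarterly_capacity)))   # 2782500
```

The gap between the two numbers is the cushion the Sales VP has to distribute when tailoring individual quotas.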
Finally, while we’re at it, we’re only a few clicks away from generating the day-one pipeline coverage / contribution goals for our major pipeline sources: marketing, alliances, and outbound SDRs. In this model, I start by assuming that sales or customer success managers (CSMs) generate the pipeline for upsell (i.e., sales to existing customers). Therefore, when we’re looking at coverage, we really mean coverage of the newbiz ARR target (i.e., new ARR from new customers). So, we first reduce the ARR goal by a percentage, then multiply it by the desired pipeline coverage ratio, and then allocate the result across the pipeline sources by presumably agreed-to percentages.
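That arithmetic can be sketched as follows; the upsell percentage, coverage ratio, and source mix below are placeholder assumptions, not recommendations:

```python
# Illustrative day-one pipeline math: reduce the ARR goal to its newbiz
# portion, multiply by the coverage ratio, and allocate across sources.
# All percentages here are invented for the sketch.

def pipeline_goals(arr_target, upsell_pct, coverage, source_mix):
    """Return day-one pipeline goals per source for one quarter."""
    newbiz_target = arr_target * (1 - upsell_pct)  # strip out upsell ARR
    total_pipeline = newbiz_target * coverage       # e.g., 3x coverage
    return {src: total_pipeline * pct for src, pct in source_mix.items()}

mix = {"marketing": 0.5, "alliances": 0.2, "outbound SDRs": 0.3}  # should sum to 100%
goals = pipeline_goals(arr_target=3_000_000, upsell_pct=0.3, coverage=3.0, source_mix=mix)
for src, goal in sorted(goals.items()):
    print(src, round(goal))
# alliances 1260000
# marketing 3150000
# outbound SDRs 1890000
```

Writing the source mix down in one place is the point: it forces the agreement discussed in the footnote, rather than letting each team carry its own percentage in a private model.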
Building the next-level models to support pipeline generation goals is beyond the scope of this post, but I have a few relevant posts on the subject including this three-part series, here, here, and here.
Two Clever Ways to Use the Model
The sad reality is that this kind of model gets a lot of attention at the end of a fiscal year (while you’re making the plan for next year) and then typically gets thrown in the closet and ignored until it’s planning season again.
That’s too bad because this model can be used both as an evaluation tool and a predictive tool throughout the year.
Let’s show that via an all-too-common example. Let’s say we start 2020 with a new VP of Sales, hired in November 2019, with hiring and performance targets per our original model (above) but with judgment set to zero, so the plan is equal to the capacity model.
Our “world-class” VP immediately proceeds to drive out a large number of salespeople. While he hires 3 “all-star” reps during 1Q20, all 5 reps hired by his predecessor in the past 6 months leave the company along with, worse yet, two fully ramped reps. Thus, instead of ending the quarter with 20 reps, we end with 12. On top of that, the VP delivers new ARR of $2,000K vs. a target of $3,125K, or 64% of plan. Realizing she has a disaster on her hands, the CEO “fails fast” and fires the newly hired VP of sales after 5 months. She then appoints the RVP of Central, Joe, to acting VP of Sales on 4/2. Joe proceeds to deliver 59%, 67%, and 75% of plan in 2Q20, 3Q20, and 4Q20.
Our question: is Joe doing a good job?
At first blush, he appears more zero than hero: 59%, 67%, and 75% of plan is no way to go through life.
But to really answer this question we cannot reasonably evaluate Joe relative to the original operating plan. He was handed a demoralized organization that was about 60% of its target size on 4/2. In order to evaluate Joe’s performance, we need to compare it not to the original operating plan, but to the capacity model re-run with the actual rep hiring and aging at the start of each quarter.
When you do this you see, for example, that while Joe is consistently underperforming plan, he is also consistently outperforming the capacity model, delivering 101%, 103%, and 109% of model capacity in 2Q through 4Q.
If you looked at Joe the way most companies look at key metrics, he’d be fired. But if you read this chart to the bottom you finally get the complete picture. Joe is running a significantly smaller sales organization at above-model efficiency. While Joe was handed an organization that was 8 heads under plan, he more than doubled it to 26 heads and consistently outperformed the capacity model. Joe is a hero, not a zero. But you’d never know it if you didn’t look at his performance relative to the actual sales capacity he was managing.
The second clever way to use a capacity model is as a forecasting tool. I have found that a good capacity model, re-run at the start of the quarter with then-current sales hiring/aging, is a very valuable predictive tool, often predicting the quarterly sales result better than my VP of Sales. Along with rep-level, manager-level, and VP-level forecasts and stage-weighted and forecast-category-weighted expected pipeline values, you can use the re-run sales capacity model to triangulate on the sales forecast.
You can download the four-tab spreadsheet model I built for this post, here.
# # #
 Starting with quota starts you in the wrong mental place: what you want people to do, as opposed to productivity (what they have historically done). Additionally, there are clear instances where quotas get assigned against which we have little to no actual productivity assumption (e.g., a first-quarter rep typically has zero expected productivity but will nevertheless be assigned some partial quota). Sales most certainly has a quota-allocation problem, but that should be a separate, second exercise after building a corporate sales productivity model on which to base the operating plan.
 A typical such vector might be (0%, 25%, 50%, 100%) or (0%, 33%, 66%, 100%), reflecting the percentage of steady-state productivity reps are expected to achieve in their first, second, third, and fourth quarters of employment.
 Without such a row, the plan is either de-linked from the model or the plan is the pure output of the model without any human judgement attached. This row is typically used to re-balance the annual number across quarters and/or to either add or subtract cushion relative to the model.
 Back in the day at Salesforce, we called pipeline generation sources “horsemen” I think (in a rather bad joke) because there were four of them (marketing, alliances, sales, and SDRs/outbound). That term was later dropped probably both because of the apocalypse reference and its non gender-neutrality. However, I’ve never known what to call them since, other than the rather sterile, “pipeline sources.”
 Many salesops people do it the reverse way — I think because they see the problem as allocating quota whereas I see the problem as building an achievable operating plan. Starting with quota poses several problems, from the semantic (lopping 20% off quota is not 20% over-assignment, it’s actually 25% because over-assignment is relative to the smaller number) to the mathematical (first-quarter reps get assigned quota but we can realistically expect a 0% yield) to the procedural (quotas should be custom-tailored based on the known state of the territory, and this cannot really be built into a productivity model).
 One advantage of having those percentages here is that they are placed front-and-center in the company’s bookings model, which will force discussion and agreement. Otherwise, if not documented centrally, they will end up in different models across the organization with no real idea of whether they foot to the bookings model or even sum to 100% across sources.
In part I of this three-part series I introduced the idea of an inverted funnel whereby marketing can derive a required demand generation budget using the sales target and historical conversion rates. In order to focus on the funnel itself, I made the simplifying assumption that the company’s new ARR target was constant each quarter.
In part II, I made things more realistic both by quarterizing the model (with increasing quarterly targets) and accounting for the phase lag between opportunity generation and closing that’s more commonly known as “the sales cycle.” We modeled that phase lag using the average sales cycle length. For example, if your average sales cycle is 90 days, then opportunities generated in 1Q19 will be modeled as closing in 2Q19.
There are two things I dislike about this approach:
Using the average sales cycle loses information contained in the underlying distribution. While deals on average may close in 90 days, some deals close in 30 while others may close in 180.
Focusing only on the average often leads marketing to a sense of helplessness. I can’t count the number of times I have heard, “well, it’s week 2 and the pipeline’s light but with a 90-day sales cycle there is nothing we can do to help.” That’s wrong. Some deals close more quickly than others (e.g., upsell), so what can we do to find more of them, fast?
As a reminder, time-based close rates come from doing a cohort analysis where we take opportunities created in a given quarter and then track not only what percentage of them eventually close, but when they close, by quarter after their creation.
This allows us to calculate average close rates for opportunities in different periods (e.g., in-quarter, in 2 quarters, or cumulative within 3 quarters) as well as an overall (in this case, six-quarter) close rate, i.e., the cumulative sum. In this example, you can see an overall close rate of 18.7%, meaning that, on average, within 6 quarters we close 18.7% of the opportunities that sales accepts. This is well within what I consider the standard range of 15 to 22%.
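To make the mechanics concrete, here is a sketch of how time-based close rates turn historical opportunity creation into expected closed opportunities per quarter. The rate vector below is invented for illustration (it happens to cumulate to the 18.7% overall close rate in the example):

```python
# Illustrative time-based close rates from a cohort analysis: for opportunities
# created in a given quarter, the fraction that closes N quarters later.
# These rates are made up; use your own cohort data in practice.

CLOSE_RATES = [0.05, 0.06, 0.04, 0.02, 0.01, 0.007]  # in-quarter, +1Q, ..., +5Q

def expected_closes(opps_created_by_quarter, close_rates=CLOSE_RATES):
    """Expected closed opportunities per quarter, summed across prior cohorts."""
    n = len(opps_created_by_quarter)
    closed = [0.0] * n
    for created_q, opps in enumerate(opps_created_by_quarter):
        for lag, rate in enumerate(close_rates):
            q = created_q + lag
            if q < n:  # ignore closes that land beyond the model horizon
                closed[q] += opps * rate
    return closed

# Opportunities created in each of four quarters
print(expected_closes([100, 120, 140, 160]))
```

Because each quarter’s closes draw on several prior cohorts, a light pipeline today shows up as a shortfall several quarters out, which is exactly the information the plain 90-day average throws away.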
Previously, I argued this technique can be quite useful for forecasting; it can also be quite useful in planning. At the risk of over-engineering, let’s use the concept of time-based close rates to build an inverted funnel for our 2020 marketing demand generation plan.
To walk through the model, we start with our sales targets and average sales price (ASP) assumptions in order to calculate how many closed opportunities we will need per quarter. We then drop to the opportunity sourcing section where we use historical opportunity generation and historical time-based close rates to estimate how many closed opportunities we can expect from the existing (and aging) pipeline that we have already generated. Then we can plug our opportunity generation targets from our demand generation plan into the model (i.e., the orange cells). The model then calculates a surplus or (gap) between the number of closed opportunities we need and those the model predicts.
I didn’t do it in the spreadsheet, but to turn that opportunity creation gap into ARR dollars just multiply by the ASP. For example, in 2Q20 this model says we are 1.1 opportunities short, and thus we’d forecast coming in $137.5K (1.1 * $125K) short of the new ARR plan number. This helps you figure out if you have the right opportunity generation plan, not just overall, but with respect to timing and historical close rates.
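As a sanity check on that arithmetic, the conversion is a one-liner; the default ASP matches the $125K used in the example:

```python
# Turn an opportunity-count gap into an ARR gap by multiplying by the
# average sales price (ASP). The $125K default mirrors the post's example.

def arr_gap(opportunity_gap, asp=125_000):
    """ARR shortfall (negative) or surplus (positive) implied by an opp gap."""
    return opportunity_gap * asp

print(round(arr_gap(-1.1)))  # -137500, i.e., $137.5K short of plan
```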
When you discover a gap there are lots of ways to fix it. For example, in the above model, while we are generating enough opportunities in the early part of the year to largely achieve those targets, we are not generating enough opportunities to support the big uptick in 4Q20. The model shows us coming in 10.8 opportunities short in 4Q20 – i.e., anticipating a new ARR shortfall of more than $1.3M. That’s not good enough. In order to achieve the 4Q20 target we are going to need to generate more opportunities earlier in the year.
I played with the drivers above to do just that, generating an extra 275 opportunities across the year and creating surpluses in 1Q20 and 3Q20 that more than offset the small gaps in 2Q20 and 4Q20. If everything happened exactly according to the model we’d get ahead of plan in 1Q20 and 3Q20 and then fall back to it in 2Q20 and 4Q20 though, in reality, the company would likely backlog deals in some way if it found itself ahead of plan nearing the end of one quarter with a slightly light pipeline the next.
In concluding this three-part series, I should be clear that while I often refer to “the funnel” as if it’s the only one in the company, most companies don’t have just one inverted funnel. The VP of Americas marketing will be building and managing one funnel that may look quite different from the VP of EMEA marketing. Within the Americas, the VP may need to break sales into two funnels: one for inside/corporate sales (with faster cycles and smaller ASPs) and one for field sales with slower sales cycles, higher ASPs, and often higher close rates. In large companies, General Managers of product lines (e.g., the Service Cloud GM at Salesforce) will need to manage their own product-specific inverted funnel that cuts across geographies and channels. There’s a funnel for every key sales target in a company and they need to manage them all.
You can download the spreadsheet used in this post, here.
 Most would argue there are two phase lags: the one from new lead to opportunity and the one from opportunity (SQL) creation to close. The latter is the sales cycle.
 As another example, inside sales deals tend to close faster than field sales deals.
 Doing this could range from taking (e.g., co-signing) the deal one day late to, if policy allows, refusing to accept the order to, if policy enables, taking payment terms that require pushing the deal one quarter back. The only thing you don’t want to do is have the customer fail to sign the contract, because you never know if your sponsor quits (or gets fired) on the first day of the next quarter. If a deal is on the table, take it. Work with sales and finance management to figure out how to book it.
The folks at Host Analytics kindly asked me to speak at their annual conference, Host Perform 2019, today in Las Vegas and I had a wonderful time speaking about one of my favorite topics: the board view of enterprise performance management (EPM) and, to some extent, companies and management teams in general.
Embedded below are the slides from the presentation.
I’m here in Vegas at the amazing Aria Hotel at Host Perform 2019, having been asked to come out and speak about one of my favorite topics – how boards see the work of finance and EPM. That speech is tomorrow at 9:00 AM and I look forward to seeing you there.
Today started with a music video, a great re-interpretation of Joan Jett’s I Love Rock N Roll. Here is my favorite lyric from the song:
I love EBITDA,
Who cares about stock-based compensation?
I love EBITDA,
So come and take the time and plan with me.
Ron Baden was subtly wearing a Life is Good cap (presumably as a tip of the proverbial hat to last year’s keynote, Burt Jacobs) as he did an introduction that covered his background – now 10 years with the company in almost as many different jobs, an introduction of the executive staff (along with 1980s photos of them), a history of EPM, and some discussion of Vector Capital’s acquisition of Host Analytics in December of last year.
He also discussed this year’s keynote speaker, Doc Hendley, founder of Wine to Water.
Ron discussed highlights of the go-forward plan, including:
International, goal to get 25% of sales from international
Channel development, goal to 33% of sales from channels
Vertical market solutions and specialist sales teams
Accelerated new product introduction
Office-of-CFO tuck-in acquisitions
He also discussed key trends that Host is seeing in EPM:
Digital capabilities (e.g., robotic process automation)
Next-wave EPM professionals
Mobile workforce support
Connected planning, getting models talking to each other
Integration of best-of-breed solutions
Ron had some fun demonstrating the granularity and context problems via a Netflix example (“who’s watching”) and a quick demonstration of Alexa’s non-fluency in finance. On the former point, the key idea is that AI/ML, for example in sales forecasting, will benefit greatly by knowing “who’s watching” (i.e., who’s selling) because much as different people like different genres of films, different sales reps have different patterns of forecasting (e.g., Sammy sandbag, Ollie optimist).
Ron also discussed the notion of the chief performance officer, as opposed to the chief financial officer – to focus the mission on improving performance, not on finance per se. In my humble opinion, most of the time when people talk about creating a new “O” it’s about trying to get a seat at the table (e.g., the chief information officer back in the day, the chief information security officer in recent times, and the chief data officer today).
Since the CFO already has a seat at the table, I think Ron’s more talking about reframing the role and the vision of the CFO. I agree – particularly when it comes to being able to answer questions that help improve business performance. And, I believe, that if the CFO can’t migrate to being the CAO (chief answers officer) then the chief data officer (CDO) might well do it instead via data science and operations teams. To be a bit paranoid, it’s a threat — not an existential threat, but a threat nevertheless — to the power of the office of the CFO.
Ron then showed a presumably future version of MyPlan, which shows how to build task- and action-oriented EPM and how that can easily fit onto a mobile device. Ron’s a big believer that while spreadsheets and grid interfaces are great, end-users fundamentally want to accomplish tasks that are best done not via a grid, but via an end-user-optimized, task-oriented interface like MyPlan.
They then performed the usual, multi-player, slice-of-life skit-demonstration (aka, “skidemo”) which is always fun, and always a challenge to execute with so many moving parts (i.e., people, real software, prototype software, videos, scene changes, characters, and costumes). Despite a brief early wardrobe failure, the six-person team pulled it off just fine, taking the crowd “back to the future” of finance – with easy rolling forecasts that take just a minute to run and prescriptive analytics to help drive the planning process. My favorite line:
“Where we’re going, we don’t need spreadsheets!”
The keynote speaker (coincidentally named “Doc” given the skit), founder of Wine to Water, Doc Hendley then took the stage to tell his story. I won’t summarize it here, but it was genuine, moving, at times funny, and deeply compelling.
Wine to Water focuses on providing clean water and sanitation to people around the world – while awareness of this is not as high as it should be, water-borne illness is the leading cause of death for children in many countries in the undeveloped world and directly and indirectly kills over 2M children a year and incapacitates another 10M people atop that. Preventing water-borne illness not only saves lives, but it also helps increase family income, reduces school absences, and generally strengthens the local developing economy. Per the WHO, every one dollar invested yields $8 in benefits — a great cause and a great ROI.
If you’re interested in donating to Doc’s organization, Wine to Water, please go here. If you’re at the conference, remember to stop by the Wine to Water booth and build some water filters.
It’s great to be here, I look forward to seeing everyone, and hope to see you at my speech bright and early tomorrow.
Just a quick post to plug the fact that the kind folks at Host Analytics have invited me to speak at Host Perform 2019 in Las Vegas on May 20-22nd, and I’ll be looking forward to seeing many old friends, colleagues, customers, and partners on my trip out.
I’ll be speaking on the “mega-track” on Wednesday, May 22nd at 9:00 AM on one of my favorite topics: how EPM, planning, and metrics all look from the board and C-level perspectives. My official session description follows:
The Perform 2019 conference website is here and the overall conference agenda is here. If you’re interested in coming and you’ve not yet registered yet, it’s not too late! You can do so here.
I look forward to another great Perform conference this year and should be both tweeting (hashtag #HostPerform) and blogging from the conference. I look forward to seeing everyone there. And attend my session if you want to get more insight into how boards and C-level executives view reporting, planning, EPM, KPIs, benchmarks, and metrics.
This morning we announced that Vector Capital has closed the acquisition of Host Analytics. As part of that transaction I have stepped down from my position of CEO at Host Analytics. To borrow a line from The Lone Ranger, “my work is done here.” I’ll consult a bit to help with the transition and will remain a friend of and investor in the company.
A Word of Thanks
Before talking about what’s next, let me again thank the folks who made it possible for us to quintuple Host during my tenure all while cutting customer acquisition costs in half, driving a significant increase in dollar retention rates, and making a dramatic increase in net promoter score (NPS). Thanks to:
Our employees, who drove major productivity improvements in virtually all areas and were always committed to our core values of customer success, trust, and teamwork.
Our customers, who placed their faith in us, who entrusted us with their overall success and the secure handling of their enormously important data and who, in many cases, helped us develop the business through references and testimonials.
Our partners, who worked alongside us to develop the market and make customers successful – and often the most challenging ones at that.
Our board of directors, who consistently worked positively and constructively with the team, regardless of whether we were sailing in fair or foul weather.
We Laid the Groundwork for a Bright Future
When Vector’s very talented PR guy did his edits on the closing press release, he decided to conclude it with the following quote:
Mr. Kellogg added, “Host Analytics is a terrific company and it has been an honor to lead this dynamic organization. I firmly believe the company’s best days are ahead.”
When I first read it I thought, “what an odd thing for a departing CEO to say!” But before jumping to change it, I thought for a bit. In reality, I do believe it’s true. Why do Host’s best days lie ahead? Two reasons.
First, we did an enormous amount of groundwork during my tenure at Host. The biggest slug of that was on product and specifically on non-functional requirements. As a fan of Greek mythology, the technical debt I inherited felt like the fifth labor of Hercules, cleaning the Augean stables. But, like Hercules, we got it done, and in so doing shored up the internals of a functionally excellent product and transformed our Hyderabad operation into a world-class product development center. The rest of the groundwork was in areas like focusing the organization on the right metrics, building an amazing demand generation machine, creating our Customers for Life organization, running a world-class analyst relations program, creating a culture based on learning and development, and building a team of strong players, all curious about and focused on solving problems for customers.
Second, the market has moved in Host’s direction. Since I have an affinity for numbers, I’ll explain the market with one single number: three. Anaplan’s average sales price is three times Host’s. Host’s is three times Adaptive’s. Despite considerable vendor marketing, posturing, positioning, haze, and confusion to the contrary, there are three clear segments in today’s EPM market.
Anaplan is expensive, up-market, and focused primarily on operational planning.
Adaptive is cheap, down-market, and focused primarily on financial planning.
Host is reasonably priced, mid-market, focused primarily on financial planning, with some operational modeling capabilities.
Host serves the vast middle where people don’t want (1) to pay $250K/year in subscription and build a $500K/year center of excellence to support the system or (2) to pay $25K/year only to be nickeled and dimed on downstream services and end up with a tool they outgrow in a few years.
Now, some people don’t like mid-layer strategies and would argue that Host risks getting caught in a squeeze between the other two competitors. That never bothered me – I can name a dozen other successful SaaS vendors who grew off a mid-market base, including within the finance department where NetSuite created a hugely successful business that eventually sold for $9.3B.
But all that’s about the past. What’s making things even better going forward? Two things.
Host has significantly improved access to capital under Vector, including the ability to better fund both organic and inorganic growth. Funding? Check.
If Workday is to succeed with its goals in acquiring Adaptive, all rhetoric notwithstanding, Adaptive will have to become a vendor able to deliver high-end, financial-focused EPM for Workday customers. I believe Workday will succeed at that. But you can’t be all things to all people; or, to paraphrase SNL, you can’t be a dessert topping and a floor wax. Similarly, Adaptive can’t be what it will become and what it once was at the same time – the gap is too wide. As Adaptive undergoes its Workday transformation, the market will switch from three to two layers, leaving both a fertile opening for Host in mid-market and a dramatically reduced risk of any squeeze play. Relatively uncontested market space? Check.
Don’t underestimate these developments. Both these changes are huge. I have a lot of respect for Vector in seeing them. They say that Michelangelo could see the statue within the block of marble and unleash it. I think Vector has clearly seen the potential within Host and will unleash it in the years to come.
I don’t have any specific plans at this time. I’m happily working on two fantastic boards already – data catalog pioneer Alation and next-generation content services platform Nuxeo. I’ll finally have time to write literally scores of blog posts currently stalled on my to-do list. Over the next few quarters I expect to meet a lot of interesting people, do some consulting, do some angel investing, and perhaps join another board or two. I’ll surely do another CEO gig at some point. But I’m not in a rush.
So, if you want to have a coffee at Coupa, a beer at the Old Pro, or – dare I date myself – breakfast at Buck’s, let me know.
I’m Dave Kellogg, consultant, independent director, advisor, and blogger focused on enterprise software startups.
I bring a unique perspective to startup challenges having 10 years’ experience at each of the CEO, CMO, and independent director levels across 10+ companies ranging in size from zero to over $1B in revenues.
From 2012 to 2018, I was CEO of cloud enterprise performance management vendor Host Analytics, where we quintupled ARR while halving customer acquisition costs in a competitive market, ultimately selling the company in a private equity transaction.
Previously, I was SVP/GM of Service Cloud at Salesforce and CEO at NoSQL database provider MarkLogic, which we grew from zero to $80M in run-rate revenues during my tenure. Before that, I was CMO at Business Objects for nearly a decade as we grew from $30M to over $1B. I started my career in technical and product marketing positions at Ingres and Versant.
I love disruption, startups, and Silicon Valley and have had the pleasure of working in varied capacities with companies including Bluecore, Cyral, FloQast, Fortella, GainSight, MongoDB, Plannuh, Recorded Future, and Tableau. I currently sit on the boards of Alation (data catalogs), Nuxeo (content management) and Profisee (master data management). I previously sat on the boards of agtech leader Granular (acquired by DuPont for $300M) and big data leader Aster Data (acquired by Teradata for $325M).
I periodically speak to strategy and entrepreneurship classes at the Haas School of Business (UC Berkeley) and Hautes Études Commerciales de Paris (HEC).