A Crack in the IPO Window: Rosetta Stone IPO

In April’s third IPO, language education provider Rosetta Stone went public this week, raising $112.5M. The offering priced at $18 per share, and the stock ended its first day of trading at $25.12, a 40% rise.

As one banker said to me: “This shows that investors are getting back into the business of investing.” I think that’s a good thing, not only because I run a private, venture-backed company but, more importantly, because a closed IPO window (1) locks out John Q. Public from buying the shares of early- and mid-stage companies, (2) forces some companies to be sold “before their time,” potentially snuffing out would-be great independent companies, and (3) indirectly reduces the attractiveness of the venture capital machine that I’ve long argued is a highly effective engine for driving innovation in the economy.

Numbers-wise, Rosetta Stone is quite a bit above what I’d been calling the 50/50/0 IPO window, based on the Software Equity Group’s IPO pipeline data ($50M+ in TTM revenues, 50%+ growth, 0%+ EBITDA).

Rosetta Stone’s key numbers are:

  • 2008 revenue: $209M
  • 2008 growth: 52%
  • 2008 adjusted EBITDA margin: 17% (see the S-1 for the definition)
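
Against those thresholds, Rosetta Stone clears the revenue bar by roughly 4x, the growth bar by a couple of points, and the profitability bar by a comfortable 17 points.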

The company’s market cap was $385M at the end of the first day of trading. Their investor relations page is here. The S-1 is here.

Regulating Venture Capital? Methinks Not.

What if you went to the doctor’s office with a sore wrist and she proposed bandaging your ankle?

That’s how I feel about the government’s proposal that venture capital be regulated along with other private capital pools including hedge and private equity funds. See this Mercury News story, Venture Capital Needs Transparency Not Regulation, for background.

I’m no financial expert, but as far as I can tell, the root causes of our current financial crisis are:

  • Leverage. Investment banks and hedge funds built 30:1 levered portfolios (and somehow managed to get only 8-10% returns on them; see the quick arithmetic after this list). Kind of reminds you of buying on margin as a root cause of the crash of 1929.
  • Financial system interlocking and the too-big-to-fail problem. Like it or not, as a citizen and taxpayer, it does seem to me that many of these firms/funds are indeed too big to fail and the government was correct to use my/our money to stop the collapse.
  • Agency problems and excess compensation. Basically, you had very smart people who could make $10M+ per year by taking excess risk. Viewed from their perspective, put undiplomatically, who cares what happens to their employers? It doesn’t take many years (e.g., one) of $10M income to become permanently wealthy, so senior managers had huge agency issues (where the interests of the owner and the agent diverge), which seemingly were left unchecked both by the companies’ own boards and by government regulators.
  • The housing bubble and the conflicts of interests among loan-originating banks, assessors, developers, and mortgage brokers. Arguably, the root cause here is the securitization of mortgages combined with the next point.
  • Conflicts of interest in the ratings system. I never knew this before, but the people who pay ratings agencies are the issuers of debt, not the buyers. This would be like Sony paying Consumer Reports to rate their new television sets. Perhaps this is how a portfolio of zero-down, floating-rate mortgages on overpriced houses in Stockton gets rated AAA.*
  • The let’s-insure-each-other problem associated with credit default swaps. In a tightly interlinked system where each player is too big to fail, this strikes me as a mathematical hallucination designed to make it look like each player is taking less risk. In reality, it seems like a bunch of people living on the same street in Florida insuring each other against hurricanes. The question isn’t when the insurance system will fail, but whether it can ever work: will there ever be a hurricane that wipes out only a few of the houses in the pool?
  • Lack of regulation to keep the risk, leverage, ratings conflicts, and agency issues above in check.
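
On that leverage point, the arithmetic is sobering: with $1 of equity supporting $30 of assets, an 8-10% return on equity implies the assets themselves earned well under 1% over funding costs, while a decline of just 1/30, about 3.3%, in asset values wipes out the equity entirely.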

I’m sure I missed some, and if you think I got anything wrong in this laundry list, feel free to comment. My primary point, though, is that nowhere on this long list will you find anything related to venture capital.

In fact, as I’ve previously argued, venture capital looks quaint by comparison. VCs buy and hold the shares of start-up companies on 5-year, plus or minus, timeframes. No leverage. No ratings agencies. Investment professionals (e.g., foundation managers) are typically the only investors, so there’s no duping of John Q. Public.

Yes, venture returns are down over the past decade. Yes, there are probably too many VC firms and a shake-out is imminent. Yes, VCs make lots of bad investments. Yes, VC is increasingly a “hits business” where the biggest winners in the portfolio account for a disproportionate share of the returns.

Yes, VCs can make a lot of money. (And yes, I view carried interest as income and not capital gains.) And yes, there is probably an element of Fooled-by-Randomness / increasing returns inherent in the VC pecking order.

Sure, there are lots of flaws, but overall, I believe the VC system works. Much as you might say democracy is the least bad form of government, VC strikes me as the least bad way of driving innovation in the economy.

Venture capital wasn’t part of the problem, so let’s leave it out of the solution.


* I know the ratings problem is more subtle and involves mixing loans of various quality to stay just within the bounds of a given creditworthiness level. Nevertheless, I’d argue that a “good” rating system would differentiate between a basket of all-solid loans and a basket that mixes solid, semi-solid, and wobbly ones. As an aside, and largely from a position of pure ignorance, I’m amazed that someone hasn’t raised some venture capital and tried to challenge the ratings industry with a new, consumer-focused model.

Welcome Uptick in Venture Capitalist Confidence

Thanks to this story in today’s Mercury News, I noticed that the Silicon Valley Venture Capitalist Confidence Index showed a welcome and rather surprising uptick in 1Q09.

The index, produced by the University of San Francisco entrepreneurship program, has tracked venture capitalist confidence since 1Q04; it hit its all-time low (2.77 out of 5) in 4Q08 and then rebounded somewhat to 3.03 in 1Q09.

Excerpt:

This quarter’s reading rose from the previous quarter’s reading of 2.77 (a 5 year low) and ended a five-quarter trend of new lows in confidence. This breaking of the downward trend in VC confidence provides hope for an eventual recovery in the high-growth venture environment.

In fact, several of the VCs surveyed made the increasingly popular argument that the best companies are founded in bleak times, presumably as would-be entrepreneurs either flee or are laid off from their ailing employers:

And some responding VCs see the downturn in the economy as an opportunity to build great companies. Prashant Shaw of Hummer Winblad Venture Partners shared, “In a struggling economy, the real innovators emerge. And for firms like ours who have capital, there is no better time to invest in new startups.” And Sandy Miller of Institutional Venture Partners reasoned, “While the environment seems gloomy with no end in sight we need to remember that some of the best companies have been founded and built during bleak times. True entrepreneurs will continue to find ways of moving their ideas forward. From a venture investor standpoint 2009 and 2010 should be an attractive environment for new investments though there will be little liquidity for existing investments.”

For those wanting more detail, here is the full 1Q09 venture capitalist confidence report (PDF).

Amazon Elastic MapReduce: Power to Burn, On Demand

Amazon Web Services today announced Amazon Elastic MapReduce, a new member of the AWS family designed to help users process vast amounts of data using the divide-and-conquer parallel processing approach made famous by Google’s MapReduce and implemented in the open source Apache Hadoop project.

Background on Hadoop (from the project site):

Here’s what makes Hadoop especially useful–

  • Scalable: Hadoop can reliably store and process petabytes.
  • Economical: It distributes the data and processing across clusters of commonly available computers. These clusters can number into the thousands of nodes.
  • Efficient: By distributing the data, Hadoop can process it in parallel on the nodes where the data is located. This makes it extremely rapid.
  • Reliable: Hadoop automatically maintains multiple copies of data and automatically redeploys computing tasks based on failures.

Hadoop implements MapReduce, using the Hadoop Distributed File System (HDFS). MapReduce divides applications into many small blocks of work. HDFS creates multiple replicas of data blocks for reliability, placing them on compute nodes around the cluster. MapReduce can then process the data where it is located. Hadoop has been demonstrated on clusters with 2000 nodes. The current design target is 10,000 node clusters.
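
To make the division of labor concrete, here is a minimal word-count job written for Hadoop’s streaming interface, which lets you supply the map and reduce steps as ordinary scripts that read records on stdin and emit tab-separated key/value pairs on stdout. This is a sketch in Python (the file names are mine, not Hadoop’s):

    #!/usr/bin/env python
    # mapper.py: emit a (word, 1) pair for every word read from stdin.
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            sys.stdout.write("%s\t1\n" % word.lower())

    #!/usr/bin/env python
    # reducer.py: sum the counts for each word. Hadoop sorts the map
    # output by key before the reduce phase, so identical words arrive
    # on consecutive lines.
    import sys

    current_word, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            count += int(n)
        else:
            if current_word is not None:
                sys.stdout.write("%s\t%d\n" % (current_word, count))
            current_word, count = word, int(n)
    if current_word is not None:
        sys.stdout.write("%s\t%d\n" % (current_word, count))

A nice property of the streaming model is that you can test the pair without a cluster: piping a file through mapper.py, then sort, then reducer.py produces the same counts a thousand-node cluster would.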

Here’s some background on MapReduce (from Google Labs):

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper.

Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program’s execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google’s clusters every day.
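
The model is small enough to simulate in a few lines. Here is a toy, single-process Python rendition of the map / shuffle / reduce pipeline described above; everything the real systems add (partitioning, scheduling, fault tolerance) is stripped away:

    from collections import defaultdict

    def map_reduce(inputs, mapper, reducer):
        # Map phase: the user's mapper turns each input record into
        # intermediate (key, value) pairs.
        intermediate = defaultdict(list)
        for record in inputs:
            for key, value in mapper(record):
                intermediate[key].append(value)  # the "shuffle": group by key
        # Reduce phase: merge all values associated with each key.
        return {k: reducer(k, vs) for k, vs in intermediate.items()}

    # Word count, the canonical example.
    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    counts = map_reduce(
        docs,
        mapper=lambda doc: [(word, 1) for word in doc.split()],
        reducer=lambda word, ones: sum(ones),
    )
    print(counts)  # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}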

So Amazon Elastic MapReduce is a cloud-based service that enables you to perform highly parallel operations against large amounts of data, all in an on-demand model. This strikes me as a great offering, particularly for organizations that need large Hadoop clusters only intermittently.

From the Amazon press release:

It utilizes a hosted Hadoop framework running on the web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). Using Amazon Elastic MapReduce, you can instantly provision as much or as little capacity as you like to perform data-intensive tasks for distributed applications such as web indexing, data mining, log file analysis, machine learning, financial analysis, scientific simulation, and bioinformatics research. As with all AWS services, Amazon Elastic MapReduce customers will still only pay for what they use, with no up-front payments or commitments.
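
To give a flavor of the on-demand part, here is roughly what launching a streaming job flow looks like from Python, assuming the boto library’s EMR support (boto.emr); the bucket names and script paths below are hypothetical placeholders, and credentials come from your own AWS account:

    # Sketch: launch an Elastic MapReduce streaming job flow via boto.
    # All s3:// paths are hypothetical placeholders.
    from boto.emr.connection import EmrConnection
    from boto.emr.step import StreamingStep

    conn = EmrConnection()  # reads AWS credentials from the environment

    step = StreamingStep(
        name="Word count",
        mapper="s3://my-bucket/scripts/mapper.py",
        reducer="s3://my-bucket/scripts/reducer.py",
        input="s3://my-bucket/input/",
        output="s3://my-bucket/output/",
    )

    jobflow_id = conn.run_jobflow(
        name="Word count job flow",
        log_uri="s3://my-bucket/logs/",
        steps=[step],
        num_instances=4,  # cluster size is just a parameter
    )
    print(jobflow_id)

Note that the cluster size is just a parameter: under per-instance-hour pricing, four nodes for an hour costs about the same as one node for four hours.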

Amazon says they made the offering in response to users who were already deploying Hadoop clusters on their lower-level EC2 framework — i.e., that this was an organic evolution:

“Some researchers and developers already run Hadoop on Amazon EC2, and many of them have asked for even simpler tools for large-scale data analysis,” said Adam Selipsky, Vice President of Product Management and Developer Relations for Amazon Web Services. “Amazon Elastic MapReduce makes crunching in the cloud much easier as it dramatically reduces the time, effort, complexity and cost of performing data-intensive tasks.”

I suspect this was a bad day at Cloudera, an Accel-backed startup that wants to be the Red Hat of Hadoop. Perhaps, like SugarCRM competing against Salesforce, Cloudera will soon offer an on-demand Hadoop as well. But that means supporting two business models at once and buying a lot of hardware to boot. And, I suspect, a lot more hardware than SugarCRM needs to buy to support sales automation as a service.

If At First You Don't Succeed, Should You Try, Try Again?

Check out this article in the New York Times that overturns Silicon Valley conventional wisdom about failure. Per a Harvard Business School working paper, which looked at several thousand venture-capital-backed companies from 1986 to 2003:

  • First-time entrepreneurs had a 22% chance of success
  • Already-successful entrepreneurs had a 34% chance of success (a 55% relative increase)
  • Previously-failed entrepreneurs had a 23% chance of success

That is, the lessons from having tried and failed added up to just a one-percentage-point improvement in the success rate (23% vs. 22%). That’s surprising news for a valley in which failure is often seen as a red badge of courage.

Excerpt:

“The data are absolutely clear,” says Paul A. Gompers, a professor of business administration at the school and one of the study’s authors. “Does failure breed new knowledge or experience that can be leveraged into performance the second time around?” he asks. In some cases, yes, but overall, he says, “We found there is no benefit in terms of performance.”

The New York Times article is here. The complete working paper is here (PDF, 35 pages).