An Amazing Story about Twitter and the Japan Earthquake

Every once in a while, I have an “aha” moment where I’m blown away by an unsuspected use or combination of technologies.

Prior to yesterday, the last such moment was when I heard my son shouting in French while playing alone in his room on a new game console:  “Cache-toi derriere le rocher … tire, tire, tire!”  (“Hide behind the rock … shoot, shoot, shoot!”).  Had he gone crazy, I wondered?  Then it clicked.  I knew the console was Internet-connected.  I knew it had a Bluetooth headset.  I knew it supported multi-player games.  And I knew he spoke French.  It had just never occurred to me that it would all come together such that he’d end up playing videogames with kids in France and talking to them while doing so.

Yesterday, I had a similar moment while I was talking to a friend with family in Japan.  We discussed the recent earthquake and she told me the following story.

“We were on Twitter that night and suddenly the Japanese Twittersphere lit up with tweets about the earthquake.  So we called our family and got through to them while the earthquake was still in progress.  As it got stronger the line got cut, but we were nevertheless really happy that we spoke because, after that, we couldn’t get through on the phone lines for at least 12 hours.”

This blew me away.  Think about that.  Someone can tweet about an earthquake as it hits, you can get the tweet 5000 miles away and call your friend while the earthquake’s still happening.  In fact, once I really started to think about it, I realized that you can actually call your friend before the earthquake arrives if he is far enough from the epicenter.

Seismic waves travel at 4 km/second plus or minus.  I don’t know what Twitter’s latency is, but let’s assume it’s 5 seconds.  Recall that an earthquake’s duration is related to its size (i.e., big earthquakes last longer) and that a major earthquake might last 60 to 90 seconds.  Consider this scenario:

  • You are working at your computer in San Diego
  • An earthquake strikes, epicentered in San Diego, and you recognize it within 5 seconds
  • You tweet it
  • 5 seconds later that tweet gets to a friend in New York City, some 3000 miles away
  • Your friend calls your brother in San Luis Obispo and warns him of the earthquake, figure that takes another 10 seconds
  • At this point the waves have traveled 80 km.  They have another 180 km to go before they hit San Luis Obispo
  • You have given your brother 45 seconds of advance notice of the earthquake
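For the curious, the arithmetic in that scenario can be checked with a few lines of Python. The figures are the same assumptions as above: a ~4 km/s wave speed, the stated delays, and the 260 km total distance implied by the 80 km + 180 km figures.

```python
# Back-of-the-envelope earthquake warning calculation for the
# San Diego -> San Luis Obispo scenario above.

WAVE_SPEED_KM_S = 4.0   # rough seismic wave propagation speed
DISTANCE_KM = 260.0     # epicenter (San Diego) to San Luis Obispo

# Human/communication delays before the warning call arrives:
recognize_s = 5   # time to realize it's an earthquake and start typing
tweet_s = 5       # assumed Twitter delivery latency
phone_s = 10      # friend in NYC calls your brother in San Luis Obispo

elapsed_s = recognize_s + tweet_s + phone_s      # 20 seconds total delay
traveled_km = WAVE_SPEED_KM_S * elapsed_s        # distance waves have covered
remaining_km = DISTANCE_KM - traveled_km         # distance still to travel
warning_s = remaining_km / WAVE_SPEED_KM_S       # your brother's head start

print(f"Waves traveled: {traveled_km:.0f} km")   # 80 km
print(f"Advance warning: {warning_s:.0f} s")     # 45 seconds
```

Note that the warning shrinks linearly with every second of delay: shave the phone call to 5 seconds and the warning grows to 50 seconds; dawdle for a minute and it vanishes entirely.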

Recall, I’m an earthquake geek: I majored in geophysics and worked during school at the Center for Computational Seismology at Lawrence Berkeley Lab (LBL).  At LBL, one of the grad students I supported was working on a related question — could you, given the first few seconds of waves, tell whether an earthquake was going to be big or little?  Was there something different about big earthquakes that you could quickly detect, enabling you to alert critical facilities?  Sadly for my friend’s dissertation, the answer was basically no.

But I think with Twitter, we’re darn close.  After every earthquake I race to Twitter to be the first to tweet it — and I never win.  So I believe that Twitter is a near-instantaneous earthquake detection system, and with geocoded tweets I am certain that you can easily locate an earthquake and gauge its size/scariness (i.e., intensity).  Think:  sentiment analysis on “OMG that was huge #EQ in SF. #scary.”
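As a toy sketch of that idea (purely illustrative: the keywords, weights, and tweets below are invented, and a real detector would use geocoded tweets, arrival-rate spikes, and a proper sentiment model):

```python
import re

# Hypothetical keyword lists for spotting earthquake chatter and
# crudely scoring "scariness" from word choice.
EQ_KEYWORDS = {"earthquake", "#eq", "quake", "shaking"}
INTENSITY_WORDS = {"huge": 3, "big": 2, "scary": 2, "feel": 1}

def score_tweet(text):
    """Return 0 if not earthquake-related, else 1 + intensity score."""
    words = set(re.findall(r"[#\w]+", text.lower()))
    if not (words & EQ_KEYWORDS):
        return 0
    # Strip hashtags so "#scary" counts as "scary".
    plain = {w.lstrip("#") for w in words}
    return 1 + sum(v for k, v in INTENSITY_WORDS.items() if k in plain)

tweets = [
    "OMG that was huge #EQ in SF. #scary",
    "great coffee this morning",
    "did anyone else feel that earthquake?",
]
scores = [score_tweet(t) for t in tweets]
print(scores)  # [6, 0, 2]
```

The point isn’t the scoring scheme; it’s that a burst of high-scoring, geocoded tweets from one region is itself a detection signal, available seconds after the shaking starts.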

I picked San Luis Obispo in my example above for a reason.  There’s a nuclear reactor there.  Hopefully, some grad student is picking up where my friend left off, analyzing Twitter feeds instead of the first few seconds of p-waves.


Business Strategy and The Wrong Medicine

Let’s say you’re not feeling well, so you visit the Doctor.  You walk into her office and she says, “Hi, it’s great to see you again.  I’m going to start you on 125 mcg of Synthroid.”

You say, “What?  You haven’t even examined me yet!”

“I’m starting you on thyroid hormones because the patient before you had Hashimoto’s disease.”

“But, how do you know what I have?”

This sounds crazy, right?  It would never happen in a Doctor’s office.  But — rather amazingly — it happens every day in business. I call it “rewind/play syndrome” (an increasingly anachronistic metaphor, I now realize) where successful, otherwise-smart business executives repeat strategies that worked in their last engagement, regardless of whether those strategies are appropriate, or even relevant, in their new one.

To make this concrete I’ll give two examples.

My first example is Ingres, an early relational database vendor that, for many reasons, lost the RDBMS market — the second biggest market opportunity of the last century — to Oracle.  In October 1990, ASK Computer Systems (the company that defined and dominated MRP, the predecessor category to ERP) acquired Ingres.  In a sense, the company that could have been Oracle was acquired by the company that should have been SAP.  ASK had bet its next-generation product on Ingres, developing it on both the Ingres database and its proprietary application development environment.  In a classic escalation-of-commitment error, when Ingres got into deep trouble, rather than abandoning Ingres and switching horses to Oracle, ASK chose to acquire its technology supplier instead.

Quality, process-focus, TQM, and Deming worship were the business fashion of the day and, in the manufacturing sector at least, for very good reason.  Since ASK sold almost entirely to manufacturers, they knew quality cold.  So when they showed up at Ingres, they did what they knew — implemented a total quality process.  It was a major focus for the first year of the integration.  The process itself — just the templates and the forms — filled about four 3-inch-thick white binders.

The project struck me as impractical from the beginning.  I repeatedly voiced the concern that if we could barely muster the resources to define the process (and maintain that definition) then how in the world could we allocate enough project managers to even have a chance at executing it?

Practical though I was, in my youth I failed to see the even bigger blunder:  the problem with Ingres wasn’t product quality.  The software was almost universally acknowledged to be superior to Oracle in both functionality and performance.  Yet more and more people bought Oracle anyway.  Why?  Because Ingres was in a land-grab market with high switching costs and strong increasing returns to market leadership.  The further ahead Oracle got, the easier it became for Oracle to beat Ingres.

By 1990, Oracle was already 4x larger than Ingres — the horizontal market was already lost and all the quality process in the world wasn’t going to fix that.  Ingres needed a new strategy — perhaps focused on owning a horizontal or vertical niche — not a TQM overhaul.

Needless to say, the whole thing failed.  In its last quarter as an independent company, the ASK Group lost $69M on sales of $87M and was subsequently sold for a pittance — $310M, less than 1x revenues — to Computer Associates (CA).

My second example is less dramatic and simply about marketing programs.  At one point in my career I worked for an executive who had been a key part of building Cadence to $1B.  As part of that great success one thing he always remembered and enjoyed was doing some very high-end marketing programs focused on a very small number of people. The concept was to give people experiences they’d never have on their own and that they would remember for a lifetime.  That’s cool.

But to baseline the discussion, note that a typical software company might spend $100 on average to generate a sales lead.  Thus, an expensive marketing program might run $500/lead and a cheap one $25.  The program I’m talking about cost $30,000/lead — 300 times the average program and enough, as I pointed out at the time, to buy every participant a Ford Taurus and still have money left over.

To me, at a gut level it was just crazy — fun, but crazy.  One of my colleagues, however, cracked the code on what was going on by posing the following questions:

  • At Cadence, what percent of total revenues came from your top 10 customers?  While I can’t remember the answer, it was very high — say 70%.
  • At BusinessObjects, what percent of total revenues come from our top ten customers?  Answer, like 5 to 10% — we ran a high-volume, relatively modest deal-size business.

So the question wasn’t whether — in absolute terms — it was just plain crazy to run a program that cost $30K/attendee.  At Cadence, it probably wasn’t:  if your top ten customers are generating $700M/year, then go ahead and drop the big bucks on the right people at those firms.  But at BusinessObjects, it made no sense.  We didn’t have that kind of business.  Again, see the rewind/play problem.

I can provide a dozen other examples, which I also sometimes refer to as an “FBI guys” problem, if you remember the scene from Die Hard where the “professionals” (the FBI guys) show up in black helicopters, take control from the LAPD, and say “this is just like freaking ’Nam.”  One RPG later, the helicopter is in flames on the ground and LAPD Chief Duane T. Robinson sheepishly says:  “We’re gonna need some more FBI guys, I guess.”

Because I’ve seen this mistake happen so often and committed by so many very smart people, I must admit that I’m rather fascinated by it.  After much thought, I think that business people apply the wrong medicine for several reasons.

  • People like to do what they know.  ASK knew quality, so ASK applied quality to Ingres.
  • People instinctively repeat what made them successful.  You try convincing someone who made $50M executing a given strategy at his last company that it’s a bad idea at this one.  (Hint:  revise your resume before doing so.)
  • People are often actually hired to repeat what made them successful.  If you look at boards and the search process, they tend to diagnose the problem and then say, “We want a person who can do X.”  Of course, you might think that a new person would still want to form his or her own opinion of what’s indicated, but when you consider the prior point plus the board pressure to lather/rinse/repeat, you can see how it happens.
  • It’s often easier to do what you know and feel busy than to step up and face the real problems that are not easy to solve.  Ingres’s real problem was huge — it had blown the market opportunity of a lifetime and needed to give up on general market leadership and try to gain niche leadership.  That’s a tough pill to swallow.  So it’s easier to blame quality and focus on that.

It’s like telling someone to bandage the skinned knee when the patient has a brain tumor, because at least you know what to do about the knee.  Zig Ziglar, in his oft-told story of processionary caterpillars, calls this confusing activity with accomplishment.

What can executives do to avoid this mistake?

  • Seek first to understand.  If you show up with all the answers, you’re probably just doing what worked last time.
  • Diagnose then prescribe.  Perform a situation assessment of the business and then derive strategy and tactics from the company’s situation.
  • Keep yourself honest.  Beware that rewind/play is a natural human tendency, and ask yourself — deeply and honestly — if you think you’re doing it.
  • Avoid avoidance.  Make a list of your company’s problems, including all the big nasty ones, and then make sure that your strategy isn’t the equivalent of fiddling while Rome burns.  Find the hardest nasty problems, and the biggest best opportunities, and focus your business on them.

Hint:  if you’re blaming “execution” then you’re most probably avoiding bigger, harder strategic issues.

Traits of Next-Generation BI (Business Intelligence)

I suppose it’s not surprising that, on the journey to find my ideal next gig, I’ve seen a lot of next-generation business intelligence (BI) companies.  Since I’ve had the chance to immerse myself in the BI startup world, I thought I’d share a quick glimpse of what’s presumably the BI future.

Because some of the companies I’ve seen are still in stealth mode, I’m not going to name any early-stage names, but simply provide a list of common traits of next-generation BI companies.

Traits of next-generation BI:

  • In-memory, columnar, and compressed.  Most solutions rely on the fact that the source data for most problems can now fit in memory, typically in a columnar, compressed format.  Some solutions can even perform work on the data without first decompressing it.
  • Fast.  The dream of BI — particularly for interactive analysis tools — has always been “speed of thought” analysis.  Thanks to the above point and to additional performance optimizations (e.g., to exploit CPU cache locality), this dream is becoming a reality.
  • Directly connected.  Next-generation BI tools generally connect directly to the underlying source databases (and/or the Internet) to capture data.  This means they must also have basic data integration capabilities, both to properly align data from different systems and to refresh it dynamically.
  • Schema-free.  To accommodate semi-structured information and to integrate information from different systems, next-generation BI does not require the up-front definition of a schema.  Instead, relationships among data (e.g., hierarchy) are discovered dynamically.
  • Beautiful.  While this is best exemplified by Tableau (where visualization is the principal focus), next-generation BI tools generally provide beautiful visualizations that are more powerful than the basic report and bar chart.  (Note that I named a name here because I consider Tableau mid-stage, not early-stage.)
  • Mobile.  Next-generation BI tools typically assume a browser-based client, often supplemented by device-specific clients (e.g., a native iPad app).  Some companies focus exclusively on mobile BI.
  • Neutral.  Next-generation BI tools exploit the multi-billion-dollar vacuum created in the market when the BI leaders were consolidated and became units of IBM (e.g., Cognos) or SAP (e.g., BusinessObjects).
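The columnar idea at the top of the list can be illustrated with a tiny sketch. The data and names below are invented for illustration; real engines layer compression and cache-friendly layouts on top of this basic principle.

```python
# Toy illustration of row-store vs. column-store layouts for analytics.

# Row store: each record kept together (good for transactional lookups).
rows = [
    ("West", "A", 100),
    ("East", "B", 250),
    ("West", "B", 175),
]
total_row = sum(r[2] for r in rows)  # must touch every whole record

# Column store: each attribute kept as its own array, so an aggregate
# scans only the one column it needs.
columns = {
    "region":  ["West", "East", "West"],
    "product": ["A", "B", "B"],
    "revenue": [100, 250, 175],
}
total_col = sum(columns["revenue"])
assert total_row == total_col == 525

# Low-cardinality columns also compress well, e.g. dictionary encoding:
dictionary = sorted(set(columns["region"]))                 # ["East", "West"]
encoded = [dictionary.index(v) for v in columns["region"]]  # [1, 0, 1]
# Some engines can aggregate directly on the encoded form, without
# first decompressing back to strings:
west_count = sum(1 for code in encoded if dictionary[code] == "West")
```

The same column-at-a-time access pattern is also what makes the CPU-cache-locality optimizations mentioned above possible: a scan over one contiguous array is far friendlier to the cache than hopping across scattered records.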

In many ways, next-generation BI takes us full circle back to the days of Cognos PowerPlay and its desktop-resident PowerCube (i.e., hypercube) — except that the cube is now virtual, schema-free, of effectively unlimited size, and contains no precalculated aggregates.  But like that era, the cube in many ways obviates the data warehouse infrastructure underneath it.  After all, if you can fit your entire data set in memory and dynamically calculate the answer to any question at high speed, then why do you need a data warehouse full of precalculated aggregates?

The answer is “you do” for many cases (e.g., history, data cleansing) — but certainly not for all of them.  I thus see a “middle squeeze” on the data warehouse market in the future.

  • For most applications of normal size and analytic complexity, people will use next-generation BI on top of raw data sources, unless they have very messy data or a need for extensive history.
  • For large applications (i.e., big data) and/or high analytic complexity, people will use advanced analytic platforms (e.g., Aster Data).  This, of course, raises the question of whether anyone is working on BI tools that exploit and optimize for the new, high-end analytic engines; the answer, happily, is “yes” as well.

Teradata to Acquire Aster Data

Since I’m on the board of Aster Data I will refrain from editorial on this announcement and simply say congratulations to Teradata on buying a great company and congratulations to Aster Data, its founders Mayank Bawa, Tasso Argyros, and George Candea, its investors, and its employees on what I view as a successful win/win outcome.

Seating Chart for President Obama’s Silicon Valley Tech Titans Dinner

While I was reading this story in the Mercury News about President Obama’s dinner yesterday with a number of Silicon Valley tech titans, an odd thought occurred to me:  Boy, I’d hate to make the seating chart for that dinner!

How do you prioritize Larry Ellison, Steve Jobs, Mark Zuckerberg, Eric Schmidt, Reed Hastings?  Who gets to sit near the President?  Who has to sit far away?

So when I saw this photo in the paper, I thought I’d add some value and turn it into a seating chart for your interest and amusement.  In the end, having Obama sit between Zuckerberg and Jobs wasn’t that surprising, but the structure among the rest is still fun.  (Bear in mind the dinner was held at Doerr’s house, so he and his wife were the hosts.)