Category Archives: Content

Lazy XML Enrichment

One of my big gripes with most content-oriented software is that it requires a big-bang approach (see The First Step’s a Doozy). The basic premise behind most content software is roughly:

1. If you do all this hard work to perfectly standardize the schema of your content, perfectly tag it, and possibly perfectly shred it, then

2. You can do cool stuff like content repurposing, content integration, multi-channel content delivery, and custom publishing.

The problem is, of course, that the first step is lethal. Many content software projects blow up on the launchpad because they can’t get beyond step 1. Our first customer had been stuck on step 1 for 18 months with Oracle before they found Mark Logic. (We loaded their content in a week.) At a recent Federal tradeshow, we had dinner with some folks from Booz Allen who’d been trying to load some semi-structured message traffic data into a relational database for months. We told them to swing by our booth the next day. Our sales engineer then loaded their content over a cup of coffee while eating a muffin and built a basic application in an hour. They couldn’t believe it.

In most companies — even publishers — content is a mess. It’s in 100 different places in 15 different formats, and each defined format is usually more of an aspiration than a standard. Once, at a multi-billion-dollar publisher, one of our technical guys actually found this sentence in some internal documentation: “it is believed that this tag is used to …” Only folklore describes the schema.

So when it comes to the general problem of making XML richer — i.e., having more tags that indicate more meaning — many people take the same big-bang approach: “Well, step 1 would be to put all the content into a single schema (which alone could kill you) and run it through a dozen different entity, fact, sentiment, concept, and summarization ‘extractors’ that can mark up the content and fragments of it with lots of new and powerful tags (which alone could cost millions).”

Again, step 1 becomes lethal.

At Mark Logic we advocate that people consider the opposite approach. Instead of:

  • Step 1: make the content perfect so you can enable any application you want to build
  • Step 2: build an application

We say:

  • Step 1: figure out the application you want to build
  • Step 2: figure out which portions of your markup need to be improved to build that application
  • Step 3: improve only that markup, sometimes manually, sometimes with extraction software, and sometimes with heuristics (i.e., rules of thumb) coded in XQuery
  • Step 4: build your application and get some business value from it
  • Step 5: repeat the process, driven by subsequent application requirements

I call this lazy XML enrichment. You could call it application-driven, as opposed to infrastructure-driven, content cleanup. I think it’s an infinitely better approach because it delivers business results faster and eliminates the risk of either never finishing the first step because it’s impossible, or having funding yanked by the business because it runs out of patience with an IT project that’s showing no ostensible progress.

At this point, I’d like to direct those of technical heart to Matt Turner’s Discovering XQuery blog, where he provides a detailed post (code included) showing an example of lazy, heuristic-based XML enrichment, here.

  • Matt’s example shows lazy enrichment because the only markup he needs for his desired application is related to weapons, so that’s all he adds.
  • Matt’s example is heuristic-based because he devises a way to find weapons in XQuery, and then uses XQuery to tag them as such.
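For the non-clickers, here’s the flavor of the technique. What follows is my own minimal sketch, not Matt’s code: the weapon list, the element names, and the sample input are all invented for illustration, and a real pass would treat punctuation and tokenization more carefully.

    xquery version "1.0";

    (: Rule of thumb: any occurrence of these words marks a weapon.
       The list and all element names below are invented. :)
    declare variable $weapons := ("sword", "dagger", "musket");

    (: Wrap each known weapon term in a text node with a <weapon> tag. :)
    declare function local:tag-weapons($text as xs:string) as node()* {
      for $token at $i in fn:tokenize($text, " ")
      return (
        if ($i > 1) then text { " " } else (),
        if (fn:lower-case($token) = $weapons)
        then element weapon { $token }
        else text { $token }
      )
    };

    (: Recursively copy the document, enriching only the text nodes. :)
    declare function local:enrich($node as node()) as node()* {
      typeswitch ($node)
        case text() return local:tag-weapons($node)
        case element() return
          element { fn:node-name($node) } {
            $node/@*,
            for $child in $node/node() return local:enrich($child)
          }
        default return $node
    };

    local:enrich(<scene>He drew his sword and ran.</scene>)
    (: => <scene>He drew his <weapon>sword</weapon> and ran.</scene> :)

That’s the whole point of the lazy approach: the only markup added is the markup the application actually needs.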

How The Web Disrupts the RDBMS World

I found an interesting post on The Future of Software minisite run by the GigaOM network, best known for Om Malik and his GigaOM blog. The post is entitled “Data 2.0: How the Web disrupts our relational database world” and is written by Nitin Borwankar.

The post begins with:

The great online shift is creating massive amounts of data – whether it is videos on YouTube or social networking profiles on MySpace. And that data is stored in databases, making them the key component of the new web infrastructure. But managing that information isn’t easy.

I think he nails the problem statement. The Web world is changing fast. And relational databases are having trouble keeping up.

The good news is that database management will be vastly different in the future. In fact, change has already begun; it just isn’t (cliché alert!) “evenly distributed” yet.

He then goes on to describe some leading examples of companies or problems that are pushing the relational database envelope.

  1. Yahoo’s creation of its own user management software based on BerkeleyDB
  2. Google’s MapReduce
  3. Amazon’s S3 (Simple Storage Service) and SQS (Simple Queue Service), which externalize operations normally done by a database.
  4. The general use of Lucene, Nutch, and Solr to do indexing of unstructured content, “something an old relational database cannot do well.”
  5. The graph-structured data problem (also known as the parts explosion problem), which is inherent in social networking and remains an Achilles’ heel for relational databases
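On point 5, the pain is structural: in a relational schema, and without vendor-specific recursion extensions, exploding a bill of materials takes one self-join per level of depth, while in a hierarchical representation it’s a single recursive walk. Here’s a toy sketch in XQuery; the part data is invented for illustration.

    xquery version "1.0";

    (: Toy bill of materials; the hierarchy is just element containment,
       so the "parts explosion" is a single recursive walk.
       All data here is invented. :)
    declare variable $catalog :=
      <part name="bicycle">
        <part name="wheel">
          <part name="spoke"/>
          <part name="rim"/>
        </part>
        <part name="frame"/>
      </part>;

    (: List every part under an assembly, at any depth. :)
    declare function local:explode($p as element(part)) as xs:string* {
      ( fn:string($p/@name),
        for $sub in $p/part return local:explode($sub) )
    };

    local:explode($catalog)
    (: => bicycle wheel spoke rim frame :)

For a pure tree you could shortcut this with the descendant axis ($catalog//@name), but the recursive form is the one that generalizes to parts linked by reference.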

So while I generally agree with his thesis, the examples cited are basically all technology companies that are able to write their own system-level software to bypass and/or accommodate the limitations of relational databases.

My question is: what about everybody else? What are they supposed to do?

My short answer is — perhaps not shockingly — MarkLogic. At MarkLogic, we call Data 2.0 “content.”

  • We manage XML natively
  • We manage graph-structured data easily
  • We manage, search, store, and index text and XML natively

Some companies will always be able to write their own stuff to get around problems. But the reason MarkLogic exists is to provide a commercial DBMS that “the rest of us” can use when managing content and building web applications with it.

See this post on top-to-bottom XML for more.

From Search to Research … and Content Applications

Here’s an interesting post on the Read/WriteWeb (RWW) blog, entitled From Search to (Re)Search, Searching for the Google Killer.

It’s definitely worth reading, as are the links within it, like this one where a Hakia guy quite articulately explains why Google is unstoppable, and then unsuccessfully tries to dismiss his own arguments.

I agree with RWW that the Google killer won’t come from:

  • One-up feature companies. Engines like Clusty, which add one feature (e.g., dynamic clustering) on top of Google search.
  • Vertical search companies. While the long tail is real, I don’t believe there will be a long tail of search engines (that’s the inverse of the concept). People want relatively few tools that can reach into the long tail of content, products, and information. They don’t want a long tail of tools.
  • Human search. Unless you’re doing real research, the cost model is prohibitive.

The first time I heard the phrase “research, not search” was from Nerac CEO Kevin Bouley. Kevin’s company provides custom research services using a database of content integrated from numerous sources combined with a network of subject-matter experts (SMEs) who use a MarkLogic-based application to assemble custom research reports for clients. When Kevin says “research, not search,” he means it.

Nerac uses MarkLogic as its content repository and has built an XQuery application that enables SMEs to quickly locate information (using our XML search capabilities) and then combine and package that information into a custom research report. It’s a very cool service, and while I think of it as “research, not search,” I certainly don’t think of it as human-powered search a la ChaCha.

While I believe that from search to research is a good direction, I think there is another equally important direction that RWW omits: from search to application, or as we say at Mark Logic, “content application.”

To me, search is inherently open-ended and context-free. Applications are not. If I know you’re a professor and you want to build a custom textbook, then I can build an application that helps you do that. And yes, that application will probably include search across a corpus of content. But search is a feature in the application, not the application itself.

Or, if you’re a pathologist, I can build you an application that leverages how you work with terabytes of medical content to help you identify cancers more readily. Search might be a feature within that application, but the application itself is about helping support the process of differential diagnosis.

Content applications know who you are and what you’re trying to do. (They’re role- and task-aware.) And you can build them on MarkLogic. And, in my mind, only a content application has enough unfair competitive advantage to beat Google over time. A thin vertical search layer? A better algorithm? One sexy feature? No.

But an application that knows who you are and what you’re trying to do, and leverages a rich (potentially integrated and enriched) contentbase to do so? Ah, well that’s no fair. No search engine can do that.

And that’s the point.

The Relevancy Quest

In the classic book, The Innovator’s Dilemma, Clayton Christensen concludes that a key reason leading companies fail is because they spend too much energy working on sustaining innovations that continuously improve their products for their existing customers. Seemingly paradoxically, he points out that these sustaining innovations can involve very advanced and very expensive technology. That is, it’s not the nature of the technology used (e.g., advanced or simple) that causes innovation to be sustaining or disruptive — it’s who the technology is designed to serve and for what uses.

I think search vendors need to dust off their copies of The Innovator’s Dilemma. Why? Because, for the most part, they seem wedged in the following paradigm, which I’d call the relevancy quest:

  • Search is about grunting a few keywords
  • The answer is a list of links
  • The quest is then magically inducing the most relevant links given a few grunts

And it’s not a bad paradigm. Heck, it made Google worth $140B and bought Larry and Sergey a nice 767. But can we do better?

Some folks, like the much-hyped Powerset, think so. They’re challenging the grunting part of the equation, arguing that “keyword-ese” is the problem and the solution is natural language. They seem unfazed both by Ask Jeeves’ failure to dominate search and by the more than 20 years of failed attempts to provide natural language interfaces to database data for business intelligence (BI). As I often say, if natural language were the key to BI user interfaces, then Business Objects would have been purchased by Microsoft years ago for a pittance and Natural Language Inc.’s DataTalker would rule BI. (Instead of the other way around.)

But I respect Powerset because at least they’re challenging the paradigm and taking a different approach to the problem. And, while I sure don’t understand the cost model, I also respect guys like ChaCha because they’re challenging the paradigm, too. In ChaCha’s case, they’re delivering human-powered search where you can literally chat with a live guide who helps you refine your search.

I can also respect the social search guys, including the recently launched Mahalo, because they’re challenging the paradigm as well — using Wisdom of Crowds / Web 2.0 / Wikipedia-style collaboration to create “hand-written results pages” for topics, such as the always-searchable “Paris Hilton.”

The folks I have trouble understanding are those on the algorithmic relevancy quest, companies like Hakia, a semantic search vendor (interviewed here by Read/Write Web) whose schtick is meaning-based search, and who comes complete with a PageRank™ rip-off-name algorithm called SemanticRank™. Or Ask, which recently launched a $100M advertising campaign about “the algorithm.” These people remind me of the disk drive manufacturers who invested millions in very advanced technologies for improved 8-inch disk drives (to serve their existing customers), all the while missing the market for the 5.25-inch disk drives required by different customers (i.e., PC manufacturers).

Are the Hakias of the world answering the right question? Should we be grunting keywords into search boxes and relying on SomethingRank™ to do the best job of determining relevancy? Is the search battle of the future really about “my rank’s better than your rank” or, equivalently, “my PhD’s smarter than your PhD”? Aren’t these guys fighting the last war?

As usual, I think there are separate answers for Internet and enterprise search.

On the Internet side, sure, search engines can certainly use more “magic” to improve relevancy. For example, they can use recent queries and a user profile to impute intent. They can use dynamic clustering and iterative query refinement (e.g., faceted navigation) to help users incrementally improve the precision of their queries.
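Mechanically, faceted navigation is just counting matches per value of a field and letting the user drill in. Here’s a schematic sketch; the element and attribute names are invented, and a production MarkLogic application would typically compute such counts from its indexes rather than by scanning documents.

    xquery version "1.0";

    (: Schematic facet computation: for each distinct value of a field,
       count the documents carrying it. Names are invented. :)
    declare variable $docs := (
      <doc topic="hotels"/>,
      <doc topic="hotels"/>,
      <doc topic="celebrities"/>
    );

    for $topic in fn:distinct-values($docs/@topic)
    let $count := fn:count($docs[@topic = $topic])
    order by $count descending
    return <facet value="{$topic}" count="{$count}"/>
    (: => <facet value="hotels" count="2"/>
          <facet value="celebrities" count="1"/> :)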

More practically, I think vertical search and community sites are a great way of improving search results. The context of the site you’re on provides a great clue to what you’re looking for. Typing “Paris Hilton” into Expedia means you’re probably looking for a hotel, whereas typing it into E! Online means you’re looking for information on the jailed debutante.

Of course, there is a host of Web 2.0-style techniques to improve search, like diggs and wikis, which can be put to work as well.

Increasingly, our publishing and media customers are going well beyond “improving search” and changing the paradigm to “content applications” — systems that combine software and content to help specific users accomplish specific tasks. See Elsevier’s PathConsult as a concrete example.

On the enterprise search side, I think the answer is different. As I’ve often mentioned, on the enterprise side you lack the rich link structure of the web, effectively lobotomizing PageRank and robbing Google of its once-special (and now increasingly gamed and hacked) sauce.

When I look for the answer of how to improve search in an enterprise context, I look back to BI, where we have decades of history to guide us in the quest to enable end-user access to corporate data.

  • Typing SQL (once seriously considered as the answer) failed. Too complex. While SQL itself was the great enabler of the BI industry, end users could never code it.
  • Creating reports in 4GL languages failed. Too complex.
  • Having other people create reports and deliver them to end users was a begrudging success. While this created a report treadmill/backlog for IT and buried end-users in too much information, it was probably the most widely used paradigm.
  • Natural language interfaces failed. Too hard to express what you really want. Too much precision required. Too much iteration required.
  • End users using graphical tools linked directly to the database schema failed. While these tools hid the complexities of SQL, they failed to hide the complexity of the database schema.

It was only when Business Objects invented a graphical, SQL-generating tool that hid all underlying database complexity and enabled users to compose an arbitrary query that the BI market took off. Simply put, there were two keys:

1. The ability to phrase an arbitrary query of arbitrary complexity (not a highly constrained search).

2. The ability to hide the underlying complexity of the database from the user.
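Transplanted to content, the Business Objects trick looks something like this: keep a mapping from business-friendly field names to whatever paths the markup actually uses, and generate the real query behind the curtain. Here’s a minimal sketch, assuming an invented mapping and document shape.

    xquery version "1.0";

    (: Invented mapping from friendly field names to real markup paths. :)
    declare variable $field-map :=
      <map>
        <field name="author" path="metadata/contrib/surname"/>
        <field name="title"  path="front/article-title"/>
      </map>;

    (: Walk a mapped path one step at a time. :)
    declare function local:walk($nodes as node()*, $steps as xs:string*)
        as node()* {
      if (fn:empty($steps)) then $nodes
      else local:walk($nodes/*[fn:local-name(.) = $steps[1]],
                      fn:subsequence($steps, 2))
    };

    (: The user asks for author = "Smith"; the schema stays hidden. :)
    declare function local:matches($doc as element(), $field as xs:string,
                                   $value as xs:string) as xs:boolean {
      let $steps := fn:tokenize($field-map/field[@name = $field]/@path, "/")
      return fn:exists(local:walk($doc, $steps)[. = $value])
    };

    local:matches(
      <article>
        <metadata><contrib><surname>Smith</surname></contrib></metadata>
      </article>,
      "author", "Smith")
    (: => true :)

The end user says “author = Smith”; the markup vocabulary never surfaces. That’s the complexity-hiding that made Business Objects work, recast for XML.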

While no one has yet built such a tool for an arbitrary XML contentbase (and while I think building one will be hard, given that no defined schema is required), MarkLogic customers use our product every day to build content applications that generate complex queries against large contentbases and completely hide XQuery from the end user.

Simply put, it’s not about improving search. It’s about delivering query. That’s the game-changer.

The High Cost of Ineffective Search

Just a quick post pointing to a recent article on the costs associated with ineffective enterprise search.

Tidbits include:

  • According to IDC, a company with 1,000 information workers can expect more than $5M in annual wasted salary costs because of poor search.
  • A recent survey of 1,000 middle managers found that more than half the information they find during searching is useless.
  • According to Butler Group, as much as 10% of a company’s salary costs are wasted through ineffective search.
  • According to Sue Feldman of IDC, people spend 9-10 hours per week searching for information and aren’t successful 1/3 to 1/2 the time.

As I always say, there’s a reason why “enterprise search sucks” returns over 1M hits on Google, including posts from luminaries such as Jon Udell and Tony Byrne.

While Mark Logic is not out to solve the generic enterprise search problem, I have long believed that enterprise search, as a category, will become stuck between a rock and a hard place.

  • The rock is the commoditization of the low-end enterprise search market through offerings like the Google Appliance and IBM OmniFind Yahoo Edition. This will suck the money out of the low end, the generic crawl-and-index market.
  • The hard place is DBMSs — specifically, DBMS-based content applications built to help people in specific roles perform specific tasks. Some people build these applications today by trying to bolt together an enterprise search engine and a DBMS (e.g., Oracle + Verity or Lucene + MySQL), but increasingly I believe people will use XML content servers (special-purpose DBMSs designed to handle content) for this purpose.

When you think about it, an inverted keyword index can only help you so much when trying to solve a problem — even if you gussy it up with taxonomies and sexy extraction technology. In the end, an application designed to solve a specific problem will trump a souped-up tool every time.