Category Archives: Content

From Search to Research … and Content Applications

Here’s an interesting post on the Read/WriteWeb (RWW) blog, entitled From Search to (Re)Search, Searching for the Google Killer.

It’s definitely worth reading, as are the links within it, like this one where a Hakia guy quite articulately explains why Google is unstoppable and then unsuccessfully tries to dismiss his own arguments.

I agree with RWW that the Google killer won’t come from:

  • One-up feature companies. Engines like Clusty, which add one feature (e.g., dynamic clustering) on top of Google search.
  • Vertical search companies. While the long tail is real, I don’t believe there will be a long tail of search engines (that’s the inverse of the concept). People want relatively few tools that can reach into the long tail of content, products, and information. They don’t want a long tail of tools.
  • Human search. Unless you’re doing real research, the cost model is prohibitive.

The first time I heard the phrase “research, not search” was from Nerac CEO Kevin Bouley. Kevin’s company provides custom research services using a database of content integrated from numerous sources, combined with a network of subject-matter experts (SMEs) who use a MarkLogic-based application to assemble custom research reports for clients. When Kevin says “research, not search” he means it.

Nerac uses MarkLogic as its content repository and has built an XQuery application that enables SMEs to quickly locate information (using our XML search capabilities) and then combine and package that information into a custom research report. It’s a very cool service, and while I think of it as “research, not search,” I certainly don’t think of it as human-powered search a la ChaCha.

While I believe that from search to research is a good direction, I think there is another equally important direction that the RWW post omits: from search to application, or, as we say at Mark Logic, “content application.”

To me, search is inherently open-ended and context-free. Applications are not. If I know you’re a professor and you want to build a custom textbook, then I can build an application that helps you do that. And yes, that application will probably include search across a corpus of content. But search is a feature in the application, not the application itself.

Or, if you’re a pathologist, I can build you an application that leverages how you work with terabytes of medical content to help you identify cancers more readily. Search might be a feature within that application, but the application itself is about helping support the process of differential diagnosis.
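
To make the “search is a feature” point concrete, here is a minimal, hypothetical XQuery sketch of the custom-textbook case. The collection name and the book/section/title elements are made-up stand-ins rather than a real customer schema, although cts:search and cts:word-query are genuine MarkLogic built-ins. The application knows the user is a professor assembling a book, so it returns structured candidate sections instead of a page of links:

    (: Sketch only: search as one feature inside a custom-textbook application.
       Collection name and element structure are hypothetical. :)
    declare variable $topic := "thermodynamics";  (: would come from the professor's UI :)

    <candidate-sections>{
      for $section in subsequence(
          cts:search(collection("licensed-textbooks")//section,
                     cts:word-query($topic)),
          1, 20)
      return
        <section>
          <book>{ $section/ancestor::book/title/text() }</book>
          <title>{ $section/title/text() }</title>
        </section>
    }</candidate-sections>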

Content applications know who you are and what you’re trying to do. (They’re role and task aware.) And you can build them on MarkLogic. And, in my mind, only a content application has enough unfair competitive advantage to beat Google over time. A thin vertical search layer? A better algorithm? One sexy feature? No.

But an application that knows who you are and what you’re trying to do, and leverages a rich (potentially integrated and enriched) contentbase to do so? Ah, well that’s no fair. No search engine can do that.

And that’s the point.

The Relevancy Quest

In the classic book, The Innovator’s Dilemma, Clayton Christensen concludes that a key reason leading companies fail is that they spend too much energy on sustaining innovations that continuously improve their products for their existing customers. Seemingly paradoxically, he points out that these sustaining innovations can involve very advanced and very expensive technology. That is, it’s not the nature of the technology used (e.g., advanced or simple) that causes innovation to be sustaining or disruptive — it’s who the technology is designed to serve and in what uses.

I think search vendors need to dust off their copies of The Innovator’s Dilemma. Why? Because, for the most part, they seem wedged in the following paradigm, which I’d call the relevancy quest:

  • Search is about grunting a few keywords
  • The answer is a list of links
  • The quest is then magically inducing the most relevant links given a few grunts

And it’s not a bad paradigm. Heck, it made Google worth $140B and bought Larry and Sergey a nice 767. But can we do better?

Some folks, like the much-hyped Powerset, think so. They’re challenging the grunting part of the equation, arguing that “keyword-ese” is the problem and the solution is natural language. They seem unfazed both by Ask Jeeves’ failure to dominate search and by the more than 20 years of failed attempts to provide natural language interfaces to database data for business intelligence (BI). As I often say, if natural language were the key to BI user interfaces, then Business Objects would have been purchased by Microsoft years ago for a pittance and Natural Language Inc.’s DataTalker would rule BI. (Instead of the other way around.)

But I respect Powerset because at least they’re challenging the paradigm and taking a different approach to the problem. And, while I sure don’t understand the cost model, I also respect guys like ChaCha because they’re challenging the paradigm, too. In ChaCha’s case, they’re delivering human-powered search where you can literally chat with a live guide who helps you refine your search.

I can also respect the social search guys, including the recently launched Mahalo, because they’re challenging the paradigm as well — using Wisdom of Crowds / Web 2.0 / Wikipedia-style collaboration to create “hand-written results pages” for topics, such as the always searchable “Paris Hilton.”

The folks I have trouble understanding are those on the algorithmic relevancy quest: companies like Hakia, a semantic search vendor (interviewed here by Read/WriteWeb) whose schtick is meaning-based search and which comes complete with a PageRank™ rip-off-name algorithm called SemanticRank™. Or Ask, which recently launched a $100M advertising campaign about “the algorithm.” These people remind me of the disk drive manufacturers who invested millions in very advanced technologies for improved 8″ disk drives (to serve their existing customers), all the while missing the market for 5.25″ disk drives required by different customers (i.e., PC manufacturers).

Are the Hakias of the world answering the right question? Should we be grunting keywords into search boxes and relying on SomethingRank™ to do the best job of determining relevancy? Is the search battle of the future really about “my rank’s better than your rank” or, equivalently, “my PhD’s smarter than your PhD”? Aren’t these guys fighting the last war?

As usual, I think there are separate answers for Internet and enterprise search.

On the Internet side, I think search engines can certainly use more “magic” to improve relevancy. For example, they can use recent queries and a user profile to impute intent. They can use dynamic clustering and iterative query refinement (e.g., faceted navigation) to help users incrementally improve the precision of their queries.
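
As a rough illustration of faceted refinement, here is a hypothetical XQuery sketch that counts hits per subject so the UI can offer one-click narrowing of the next query. The article/subject structure is made up, and a real MarkLogic application would normally lean on range indexes rather than brute-force counting:

    (: Sketch only: facet counts for iterative query refinement. :)
    let $hits := cts:search(//article, cts:word-query("paris hilton"))
    for $subject in distinct-values($hits/subject)
    let $count := count($hits[subject = $subject])
    order by $count descending
    return <facet name="{ $subject }" count="{ $count }"/>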

More practically, I think vertical search and community sites are a great way of improving search results. The context of the site you’re on provides a great clue to what you’re looking for. Typing “Paris Hilton” into Expedia means you’re probably looking for a hotel, whereas typing it into E! Online means you’re looking for information on the jailed debutante.

Of course, there are a host of Web 2.0-style techniques to improve search, like diggs and wikis, which can be put to work as well.

Increasingly, our publishing and media customers are going well beyond “improving search” and changing the paradigm to “content applications” — systems that combine software and content to help specific users accomplish specific tasks. See Elsevier’s PathConsult as a concrete example.

On the enterprise search side, I think the answer is different. As I’ve often mentioned, on the enterprise side you lack the rich link structure of the web, effectively lobotomizing PageRank and robbing Google of its once-special (and now increasingly gamed and hacked) sauce.

When I look for the answer of how to improve search in an enterprise context, I look back to BI, where we have decades of history to guide us about the quest to enable end-user access to corporate data.

  • Typing SQL (once seriously considered as the answer) failed. Too complex. While SQL itself was the great enabler of the BI industry, end users could never code it.
  • Creating reports in 4GL languages failed. Too complex.
  • Having other people create reports and deliver them to end users was a begrudging success. While this created a report treadmill/backlog for IT and buried end-users in too much information, it was probably the most widely used paradigm.
  • Natural language interfaces failed. Too hard to express what you really want. Too much precision required. Too much iteration required.
  • End users using graphical tools linked directly to the database schema failed. While these tools hid the complexities of SQL, they failed to hide the complexity of the database schema.

It was only when Business Objects invented a graphical, SQL-generating tool that hid all underlying database complexity and enabled users to compose an arbitrary query that the BI market took off. Simply put, there were two keys:

1. The ability to phrase an arbitrary query of arbitrary complexity (not a highly constrained search).

2. The ability to hide the complexity of the underlying database from the user.

While no one has yet built such a tool for an arbitrary XML contentbase (and while I think building one will be hard, given the lack of a required schema), MarkLogic customers use our product every day to build content applications that generate complex queries against large contentbases and completely hide XQuery from the end-user.
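
As a rough sketch of what “hiding XQuery from the end-user” can look like, here is a hypothetical example that composes a query from a couple of form fields. The field values, element names, and collection are made up; cts:and-query, cts:word-query, cts:element-value-query, and cts:search are MarkLogic built-ins:

    (: Sketch only: compose a query from UI selections; the user never sees XQuery. :)
    declare variable $keywords := "stent thrombosis";   (: free-text box :)
    declare variable $specialty := "cardiology";        (: optional drop-down :)

    let $constraints := (
      cts:word-query($keywords),
      if ($specialty ne "") then
        cts:element-value-query(xs:QName("specialty"), $specialty)
      else ()
    )
    let $query := cts:and-query($constraints)
    for $doc in subsequence(cts:search(collection("journals"), $query), 1, 10)
    return <result uri="{ base-uri($doc) }">{ ($doc//title)[1]/text() }</result>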

Simply put, it’s not about improving search. It’s about delivering query. That’s the game-changer.

The High Cost of Ineffective Search

Just a quick post pointing to a recent article on the costs associated with ineffective enterprise search.

Tidbits include:

  • According to IDC, a company with 1,000 information workers can expect more than $5M in annual wasted salary costs because of poor search.
  • A recent survey of 1,000 middle managers found that more than half the information they find during searching is useless.
  • According to Butler Group, as much as 10% of a company’s salary costs are wasted through ineffective search.
  • According to Sue Feldman of IDC, people spend 9-10 hours per week searching for information and aren’t successful 1/3 to 1/2 the time.

As I always say, there’s a reason why “enterprise search sucks” returns over 1M hits on Google, including posts from luminaries such as Jon Udell and Tony Byrne.

While Mark Logic is not out to solve the generic enterprise search problem, I have long believed that enterprise search, as a category, will become stuck between a rock and a hard place.

  • The rock is the commoditization of the low-end enterprise search market through offerings like the Google Appliance and IBM OmniFind Yahoo Edition. This will suck the money out of the low end, the generic crawl-and-index market.
  • The hard place is DBMSs — specifically, DBMS-based content applications built to help people in specific roles perform specific tasks. Some people build these applications today by trying to bolt together an enterprise search engine and a DBMS (e.g., Oracle + Verity or Lucene + MySQL), but increasingly I believe people will use XML content servers (special-purpose DBMSs designed to handle content) for this purpose.

When you think about it, an inverted keyword index can only help you so much when trying to solve a problem — even if you gussy it up with taxonomies and sexy extraction technology. In the end, an application designed to solve a specific problem will trump a souped-up tool every time.

Buxton IEEE Article: Beyond Search, Content Applications

Mark Logic’s own Stephen Buxton, co-author of the definitive tome, Querying XML, has recently published an article in IT Pro (a publication of the IEEE Computer Society) entitled “Beyond Search: Content Applications.”

Here is a link to the article (subscription required). If you follow the link, you can either view the abstract or buy the article for $19. Here’s a link to the editor’s introduction of the issue (free), where he says:

“Stephen Buxton’s article on XML content servers describes the unique capabilities of this form of repository system and the extreme precision and information extraction that it can achieve. The server’s content of unstructured text is richly tagged, usually by inflow entity extractors or taxonomies. This provides a high degree of semantic quality and makes high relevancy search and disambiguation possible. Search, as well as other applications, can be developed to sit atop the server and take full advantage of the metadata. In this way, the enterprise can benefit from true information extraction in search as well as in other applications requiring high precision and a degree of semantic awareness.”

In the article Buxton differentiates enterprise search engines from XML content servers as candidate platforms for content applications.

He also discusses several example content applications, including:

  • The Oxford University Press African American Studies Center, an online product for social sciences libraries and researchers that does extensive content integration and repurposing
  • O’Reilly Media’s SafariU, a custom publishing system that enables professors to build custom books online through a web interface, with printed versions shipped to the campus bookstore in about two weeks
  • Elsevier’s PathConsult, a highly contextual application designed for pathologists in order to assist them in the tricky task of differential diagnosis.

It’s worth the $19 — go ahead and get the article. Heck, it’s cheaper and faster to read than his book!

Rule 1 of Database Performance

Here’s a link to a post by Matt Turner on his Discovering XQuery blog that discusses Publishing 2.0 and content logic.

In this post Matt discusses what I call the “thick middle tier” problem with most search-engine-based content applications.

Here’s the issue. Search engines (1) return lists of links to documents and (2) allow only fairly basic “query” predicates (and I’m reluctant even to call them that) to be applied in the search engine.

As a result, a typical search-engine-based application ends up with a thick middle tier of Java code that (1) systematically materializes each document in the returned list as a DOM tree and then (2) does subsequent processing on that document using Java.

As Matt points out, you might be tempted to think of this work as “your application” or “business logic,” but in reality it’s not. It’s content processing, not business or application processing. This approach is bad for several reasons:

  • Productivity is negatively impacted because you have to do low-level content processing yourself, and typically in a relatively low-level language, like Java
  • Performance is negatively impacted because you end up with an architecture that violates “rule 1” of database performance — push processing to the data, don’t bring data to the processing

All DBMSs strive for compliance with rule 1.

  • Query optimizers always apply the most restrictive predicate first (e.g., apply emp-id = 178 before sex = female)
  • Query optimizers always do lookup joins from the table with the most restrictive predicates on it (where dept.dname = “fieldmkt” as opposed to emp.name = “*stein*”)
  • It’s why everyone loves stored procedures. Not only do they minimize client/server interaction and allow pre-compilation; most importantly, they push processing to the data.

I’m not going to criticize people who built systems this way historically. Prior to products like MarkLogic, the thick-middle-tier architecture was the best you could do. DBMSs couldn’t handle content so the best you could do was to leave your content in files (or stuff it in BLOBs), index it with a search engine, and then build these thick-middle-tier applications.

But in the future it doesn’t have to be this way. With systems like MarkLogic, you can now build content applications using a standard query language (XQuery) and the “correct” allocation of processing across tiers. This has the following benefits:

  • Improved productivity because XQuery is a relatively high-level language
  • Greatly improved performance because you can thin-out the middle tier and push content processing to the XML content server (which is both optimized to do it and close to the content)
  • Openness and standardization, which makes it easier to find skilled resources, eliminates vendor lock-in, and makes software integration generally easier.
  • Flexibility. Typically, with enough smarts in the middle layer, you can hack something together that runs one query fast. The trick is when you want to run many and/or new queries fast — in that case, you really need the right architecture — i.e., one that pushes processing to the content instead of bringing content to the processing (see the sketch below).
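
To make that concrete, here is a hedged XQuery sketch of the push-processing-to-the-content approach: restriction, extraction, and aggregation all happen inside the server, and the middle tier gets back a small XML summary instead of a pile of documents to parse. The element names and the query are illustrative only; cts:search and cts:word-query are MarkLogic built-ins:

    (: Sketch only: the server does the filtering, extraction, and counting;
       only a compact summary crosses the wire. :)
    let $hits := cts:search(//report, cts:word-query("lithium-ion recall"))
    return
      <summary total="{ count($hits) }">{
        for $report in subsequence($hits, 1, 25)
        return
          <row>
            <title>{ $report/title/text() }</title>
            <region>{ $report/metadata/region/text() }</region>
            <incidents>{ count($report//incident) }</incidents>
          </row>
      }</summary>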