Sir Tim Berners-Lee, who I now call Tim 1.0, as opposed to Tim 2.0 (O’Reilly), recently did an interesting one-hour interview with Paul Miller of Talis on their Nodalities blog.
You can listen to the interview here. Because the sound quality isn’t great, I suggest listening to the interview while reading along with the full transcript here.
To me the themes remain the same:
- It’s about a machine processable web as opposed to simply a human readable one
- It’s about structuring data from pages so it can be used by programs
- It’s then about integrating data from across multiple sites and/or inferencing across information from one or more sites
- He’s a big believer that people should publish information (e.g., a catalog) in both HTML format for human viewing and RDF format for machine processing
- RDF is all about triples, which take the form subject, predicate, object (e.g., Dave is-brother-of Fred, Dave is-son-of Judy).
- Creating new knowledge then involves inferencing over these triples (e.g., knowing the two triples above, you can infer that Fred is-son-of Judy)
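To make the inference step concrete, here is a minimal sketch in plain Python, not real RDF tooling: triples as tuples plus one hand-written rule. The function name and the rule itself are my own illustration, not anything from the interview.

```python
# Triples as plain (subject, predicate, object) tuples.
triples = {
    ("Dave", "is-brother-of", "Fred"),
    ("Dave", "is-son-of", "Judy"),
}

def infer_siblings_share_parents(triples):
    """Hypothetical rule: if X is-brother-of Y and X is-son-of Z,
    infer that Y is-son-of Z."""
    inferred = set()
    for (x, p1, y) in triples:
        if p1 != "is-brother-of":
            continue
        for (x2, p2, z) in triples:
            if x2 == x and p2 == "is-son-of":
                inferred.add((y, "is-son-of", z))
    return inferred

print(infer_siblings_share_parents(triples))
# -> {('Fred', 'is-son-of', 'Judy')}
```

A real semantic web stack would express this rule declaratively (e.g., in OWL or via SPARQL queries) rather than hard-coding it, but the mechanics are the same: new triples derived from existing ones.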
Note that the whole “social graph” captured by Facebook or LinkedIn could easily be dynamically recreated if everyone had some universal profile that listed a bunch of friend-of-a-friend triples (Dave is-a-friend-of Tim, Dave is-a-friend-of Joe, …)
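The social-graph point can be sketched the same way: given friend-of-a-friend triples from everyone's (hypothetical) universal profile, reconstructing the graph is just an aggregation. Treating friendship as symmetric is my assumption here, not something stated in the interview.

```python
from collections import defaultdict

# Friend-of-a-friend style triples, as in the example above.
friend_triples = {
    ("Dave", "is-a-friend-of", "Tim"),
    ("Dave", "is-a-friend-of", "Joe"),
}

# Rebuild an adjacency structure from the triples.
graph = defaultdict(set)
for s, p, o in friend_triples:
    if p == "is-a-friend-of":
        graph[s].add(o)
        graph[o].add(s)  # assumes friendship is symmetric

print(dict(graph))
```

In practice this is exactly what the FOAF (Friend of a Friend) RDF vocabulary was designed for.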
Just a quick post to highlight Twine, a startup competing in the semantic web space that dubs itself the first mainstream semantic web application. Frankly, in taking a quick look at it through these two posts (Twine: The First Mainstream Semantic Web App and Twine Launches A Smarter Way to Organize Your Online Life), it doesn’t strike me as a semantic web application at all.
When I think “semantic web” I think about three things:
- The concept: to make the web machine interpretable as opposed to simply machine deliverable.
- A set of core technologies (e.g., RDF, OWL, SPARQL) which by and large have not taken off in the market.
- Inferencing to create new information from the web itself. For example, if site A says that Bandit Kellogg is a Bernese Mountain Dog and site B says that Bernese Mountain Dogs eat socks, then you can infer the fact that Bandit Kellogg eats socks (which he does, voraciously).
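The cross-site inference in that last point can be sketched as merging triples from two sources and letting instances inherit facts about their class. The site names and the `infer_class_facts` rule are illustrative assumptions, not part of any real system.

```python
# Hypothetical facts published by two different sites.
site_a = {("Bandit Kellogg", "is-a", "Bernese Mountain Dog")}
site_b = {("Bernese Mountain Dog", "eats", "socks")}

merged = site_a | site_b  # integrating data from across multiple sites

def infer_class_facts(triples):
    """Illustrative rule: if X is-a C, and C has some property,
    infer that X has that property too."""
    inferred = set()
    for (inst, p, cls) in triples:
        if p != "is-a":
            continue
        for (s, prop, obj) in triples:
            if s == cls and prop != "is-a":
                inferred.add((inst, prop, obj))
    return inferred

print(infer_class_facts(merged))
# -> {('Bandit Kellogg', 'eats', 'socks')}
```

This is the essence of the semantic web pitch: neither site published the fact that Bandit Kellogg eats socks, yet a machine can derive it.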
Perhaps mine is a traditional or outdated view of the semantic web vision, but I’m not sure. See this post by Nitin Karandikar entitled The Promise of the Semantic Web for his take, which is similar to mine. (The post is based on an interview with Nova Spivack, CEO of Radar Networks, makers of Twine.)
When I look at Twine, I see more Web 2.0 technologies than semantic web ones.
Yes, it appears that Twine does automatic entity extraction (which they call Smart Tags) against things that are bookmarked. And they say they use a bunch of semantic web technologies inside the system to figure out relationships between the tags and between people and tags. An excerpt from Read/WriteWeb:
Where Twine is differentiated from the likes of wikipedia is that its underlying data structure is entirely Semantic Web. Spivack told me that the following Semantic Web technologies are being used: RDF, OWL, SPARQL, XSL. Also he said that they plan to use GRDDL in the near future. Spivack had an interesting term for what Twine is doing with Semantic Web technologies, riffing off the Facebook Social Graph. Spivack is calling Twine a “Semantic Graph”, which he says will map relationships to both people and topics. So Twine’s Semantic Graph actually integrates the Social Graph.
I’d like to offer more commentary on Twine, but it seems their newfound popularity is impacting their website — I couldn’t successfully register for the invite-only Beta. When I clicked “register” it just hung forever. I’ll try again in a few weeks, and if it works and if I get invited to the Beta, I’ll share some first-hand feedback.
Meantime, see the two posts cited above or check out Nitin Karandikar’s email interview with Nova Spivack on his Software Abstractions blog.