To me the themes remain the same:
- It’s about a machine-processable web as opposed to a merely human-readable one
- It’s about structuring the data on pages so it can be used by programs
- It’s then about integrating data across multiple sites, and about inferencing over information from one or more sites
- He’s a big believer that people should publish information (e.g., a catalog) in both HTML format for human viewing and RDF format for machine processing
- RDF is all about triples, which take the form subject predicate object (e.g., Dave is-brother-of Fred, Dave is-son-of Judy).
- Creating new knowledge then involves inferencing over these triples (e.g., given the two triples above plus a rule that brothers share parents, you can infer that Fred is-son-of Judy)
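To make the inference idea concrete, here’s a minimal sketch in plain Python rather than a real RDF stack. The triples and the sibling rule are just the toy examples from the bullets above; a real system would use a triple store and a standard rule language.

```python
# Minimal sketch of triple-based inference in plain Python.
# Triples are (subject, predicate, object) tuples; "is-brother-of"
# and "is-son-of" are toy predicates from the example above, not
# part of any standard RDF vocabulary.

facts = {
    ("Dave", "is-brother-of", "Fred"),
    ("Dave", "is-son-of", "Judy"),
}

def infer_new_facts(triples):
    """Rule: if X is-brother-of Y and X is-son-of Z, then Y is-son-of Z."""
    inferred = set()
    for (x, p, y) in triples:
        if p != "is-brother-of":
            continue
        for (x2, p2, z) in triples:
            if x2 == x and p2 == "is-son-of":
                inferred.add((y, "is-son-of", z))
    return inferred - triples  # only the newly created knowledge

print(infer_new_facts(facts))  # {('Fred', 'is-son-of', 'Judy')}
```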
Note that the whole “social graph” captured by Facebook or LinkedIn could easily be recreated dynamically if everyone published a universal profile listing a bunch of friend-of-a-friend (FOAF) triples (Dave is-a-friend-of Tim, Dave is-a-friend-of Joe, …)
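Here’s a quick sketch of that idea, assuming each profile exposes simple friend triples. The names are the ones from the parenthetical above, and the is-a-friend-of predicate is a stand-in; the actual FOAF vocabulary uses foaf:knows.

```python
# Sketch: reassembling a social graph from FOAF-style triples that
# could live on many different sites. The triples below are made-up
# stand-ins; FOAF's actual predicate for this is foaf:knows.

from collections import defaultdict

# Imagine each triple was fetched from a different person's profile page.
published_triples = [
    ("Dave", "is-a-friend-of", "Tim"),
    ("Dave", "is-a-friend-of", "Joe"),
    ("Tim", "is-a-friend-of", "Joe"),
]

# Merging them yields the same graph a closed network holds internally.
social_graph = defaultdict(set)
for person, predicate, friend in published_triples:
    if predicate == "is-a-friend-of":
        social_graph[person].add(friend)

print({p: sorted(fs) for p, fs in social_graph.items()})
# {'Dave': ['Joe', 'Tim'], 'Tim': ['Joe']}
```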