The Semantic Web landscape is changing

Fred responded to my recent anti-Semantic Web post by saying that the “Semantic Web landscape is changing.”

I really like Fred’s post. Here is where he agrees with me:

The proof that both RDF and web ontologies are useful is yet to be done.

Here is where we disagree:

Everything is changing, and everything should explode… soon!

I honestly do not see the Semantic Web as being about to take off. As Bob DuCharme pointed out, people are doing “ontologies for the sake of ontologies”. This will get old very quickly. If 8 years and millions of dollars were not enough to produce a single remotely useful application, what will it take?

Are semantic web researchers becoming semantic web implementers? I do not see this happening. The papers are every bit as theoretical and as disconnected from real-world problems as they ever were.

Here are some common myths:

  • Google is getting worse every day. Only the Semantic Web can save us. (False: Google is not getting worse; it is constantly improving, and at an alarming rate at that.)
  • Inference engines and ontologies are more sophisticated or somehow more intelligent than current database solutions such as relational databases, data mining algorithms, and so on. (False: Current database technology is highly sophisticated and built on lots and lots of theory.) [A small illustration follows this list.]

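For what it is worth, the kind of question usually used to showcase ontologies (“which person wrote which document?”) is bread and butter for a relational engine. Below is a minimal sketch using Python’s built-in sqlite3 module; the schema, names and rows are invented purely for illustration.

```python
import sqlite3

# Toy schema and rows, invented purely for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE document (id INTEGER PRIMARY KEY, title TEXT,
                           author_id INTEGER REFERENCES person(id));
    INSERT INTO person   VALUES (1, 'Fred'), (2, 'Daniel');
    INSERT INTO document VALUES (10, 'SIOC ontology notes', 1),
                                (20, 'Against the Semantic Web', 2);
""")

# "Which person wrote which document?" is a plain join; no inference engine is needed.
for name, title in con.execute("""
        SELECT p.name, d.title
        FROM document AS d
        JOIN person   AS p ON p.id = d.author_id
        ORDER BY p.name"""):
    print(name, "wrote", title)
```

This is not an argument against RDF as such, only a reminder that relational engines already answer such questions cheaply and reliably.
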
4 thoughts on “The Semantic Web landscape is changing”

  1. Hi Daniel,

    Thanks for re-opening this discussion. It is more an intuition I have. However, I am talking with and experimenting alongside the SIOC community (mostly European, DERI, etc.), and they are doing really interesting things.

    For example, Uldis Bojars, one of the minds behind the SIOC ontology, will soon give a talk about the SIOC ontology at the Blog Talk Reloaded conference with Alexandre Passant. They are currently developing a SIOC browser (in fact a crawler and a triple store with a SPARQL endpoint) and the results are really interesting. Their crawler aggregates SIOC and FOAF data from various disparate blogs that have installed a SIOC exporter plugin on their blogging system (currently available for WordPress, b2Evolution and DotClear) and archives it in a triple store. They then query the triple store with SPARQL and visualize the results with a JavaScript visualization applet. The results are really impressive (considering that the information comes from many different sources). In the coming days, Alex should also crawl some Talk Digger SIOC documents that he will get from the web service. So in a single query you will have data from many different sources, all with the same semantics. [A rough sketch of this kind of aggregation and query follows this comment.]

    Take a look at the recent blog post written by Alex [1].

    Also keep an eye on Alex’s, Uldis’s [2] and John’s [3] blogs over the next week; many updates and examples should be published in preparation for their talk.

    On my side, I should have some developments (really soon) that will help me start something really serious with Talk Digger related to the semantic web (keep an eye on my blog; you should read something about that soon enough).

    So, will the semantic web explode in the next 5 years? My intuition tells me yes. Do I have a sixth sense like mothers do? No. So, what does the future hold for us? I hope it will be the semantic web (beware, I never said that we will be able to infer trust, deploy search systems with the power of Google, resolve the problem of ontology evolution (versioning, etc.), and so on). But I think that 5 years from now we will have enough data, knowledge, and services (that use that data) to do something useful (saving time, etc.), so that we will be able to say: the semantic web is now living.

    I hope I am not mad 😉



    Take care,


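The pipeline described in the comment above (SIOC and FOAF exports harvested into a triple store, then queried with SPARQL) can be sketched in a few lines with the rdflib library. This is only an illustration, not the actual SIOC browser: the source URLs are placeholders, and the exact properties queried (dc:title, sioc:has_creator, foaf:name) are assumptions about what the exporter plugins emit.

```python
from rdflib import Graph

# Placeholder export URLs; these are not real SIOC exporter endpoints.
SOURCES = [
    "http://blog-one.example/sioc.rdf",
    "http://blog-two.example/sioc.rdf",
]

# A single in-memory graph stands in for the triple store: every source is merged into it.
g = Graph()
for url in SOURCES:
    g.parse(url, format="xml")  # assuming the exporters publish RDF/XML

# One SPARQL query over data that originated from several independent blogs:
# every sioc:Post, its title, and the name of its creator.
QUERY = """
PREFIX sioc: <http://rdfs.org/sioc/ns#>
PREFIX dc:   <http://purl.org/dc/elements/1.1/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?title ?name WHERE {
    ?post a sioc:Post ;
          dc:title ?title ;
          sioc:has_creator ?user .
    ?user foaf:name ?name .
}
"""
for title, name in g.query(QUERY):
    print(f"{name}: {title}")
```

Whether the semantics really line up across sources once real data is loaded is, of course, exactly the point under debate.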

  2. Preserving semantics across data sources is practically impossible unless you limit yourself to toy cases.

    There was a study some years ago on Dublin Core semantics, and it found that most Dublin Core fields (which are elementary RDF) are badly filled out by government agencies and industry. The percentage of “semantic variance” was just huge. Just huge. [A small illustration of this kind of variance follows these comments.]

    And you know what? To this day, Dublin Core is mostly a failure. For the inventors, it meant fame and fortune… but for the users, Dublin Core is a useless technology. I would never trust the metadata describing a document unless *I* entered it, or it was single-sourced.

    Soooo… if we cannot make Dublin Core useful over the web… why would people think that **more** complicated technologies would work at all? Not a chance. If the simple approach fails, then the complex one is doomed before you even start.

    In fact, any project that relies on user-entered metadata or on metadata from diverse sources is doomed from the start. See this reference, for example, on why it is so:
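
One way to make the “semantic variance” point above concrete is to harvest the raw values of a single Dublin Core field from several sources and count how many conventions show up. Below is a minimal sketch in Python; the dc:date values are invented for illustration.

```python
from collections import Counter
import re

# Invented dc:date values, imitating what different publishers might put
# into the same elementary Dublin Core field.
harvested_dates = [
    "2006-09-14", "14/09/2006", "Sept. 14, 2006", "2006",
    "2006-09-14T10:00:00Z", "ca. 2006", "09-14-06", "last Tuesday",
]

def shape(value: str) -> str:
    """Crude format signature: runs of digits become 9, runs of letters become A."""
    return re.sub(r"[A-Za-z]+", "A", re.sub(r"\d+", "9", value))

signatures = Counter(shape(v) for v in harvested_dates)
print(f"{len(signatures)} distinct formats among {len(harvested_dates)} values:")
for sig, count in signatures.most_common():
    print(f"  {count} x {sig}")
```

If even dc:date cannot be filled out consistently, it is hard to see how richer ontologies fed by the same publishers would fare better.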
