Tim's Weblog
Tim Strehle’s links and thoughts on Web apps, software development and Digital Asset Management, since 2002.

Jonathan Rochkind: Linked Data Caution

While preparing for our DAM and the Semantic Web webinar, I came across a spectacular (and very long) blog post on the pros and cons of Linked Data. Although written in a library context, it applies just as well elsewhere. I wish we had such a deep discussion of every technology we’re considering using:

Jonathan Rochkind: Linked Data Caution

My favorite quotes:

“I worry that “linked data” is being approached as a goal in and of itself, and what it is meant to accomplish (and how it will or could accomplish those things) is being approached somewhat vaguely.

[…] You still need common vocabularies for your linked data to be inter-operable, there’s no magic in linked data otherwise, linked data just says the data will be encoded in the form of triples, with the vocabularies being encoded in the form of URIs.

[…] The nature of linked data as being building complex information graphs based on simple triples can actually make the linked data more difficult to deal with practically. […] By being so abstract and formally simple, it can get confusing.

[…] My sense is that the general industry understanding is that linked data has not caught on like people thought it would in the 2007-2012 heyday, and adoption has in fact slowed and reversed.

[…] There’s pretty widespread agreement in the industry at large that [Linked Data experiments] have not lived up to their initial expected promise or hype, and have as of yet delivered few if any significant user-facing products based upon them.

[…] Seldom in my experience do I run into a problem where simply transitioning infrastructure to linked data would provide a solution or fundamental advancement. The barriers often have at their roots business models; or lack of common shared domain models; or lack of person power to create/record the ‘facts’ needed in machine-readable format.

[…] So linked data has got good marketing and a critical mass, in an environment where decision-makers want to do something but don’t know what to do.

[…] I think those experienced with library metadata realize that good domain modelling (eg vocabulary control), and getting different actors to use the same standard formats is a challenge. I think they believe that linked data will somehow solve this challenge by being “open to extension” — I think this is a false promise.

[…] And yes, I actually agree, our library web pages should use schema.org markup to expose their information in machine-readable markup. […] But the good thing is it’s not that hard to do […], and does not require rebuilding our entire infrastructure.

[…] Be skeptical. Sure, of me too. […] Work to understand what’s really going on so you can evaluate benefits and potentials yourself, and understand what it would take to get there.

[…] Stay user centered. “Linked data” can’t be your goal. You are using linked data to accomplish something to add value to your patrons.

[…] Sure, make all your data (in any format, linked data or not) available on the open web, under an open license. […] Put identifiers everywhere. I don’t care if they are in the form of URLs. Get your vendors to do this too.

[…] As we’re “doing linked data”, figure out ways to get improvements that affect our users positively incrementally, at each stage, iteratively.”
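The point Jonathan makes about triples and vocabularies can be illustrated in a few lines of plain Python. This is a sketch of the data model only, not a real RDF library: the example.org URIs are invented, while the predicate URIs are genuine schema.org terms.

```python
# Linked Data's core idea in miniature: every fact is a
# (subject, predicate, object) triple, and the "vocabulary"
# is just the set of URIs used as predicates.
SCHEMA = "https://schema.org/"

triples = {
    ("https://example.org/book/moby-dick", SCHEMA + "name", "Moby-Dick"),
    ("https://example.org/book/moby-dick", SCHEMA + "author",
     "https://example.org/person/melville"),
    ("https://example.org/person/melville", SCHEMA + "name", "Herman Melville"),
}

def objects(subject, predicate):
    """Return all objects matching a subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Walk the graph: book -> author URI -> author's name.
author_uri = objects("https://example.org/book/moby-dick", SCHEMA + "author")[0]
print(objects(author_uri, SCHEMA + "name"))  # ['Herman Melville']
```

Note that nothing here is interoperable by magic: two datasets only join up if they agree on the same predicate URIs and the same subject identifiers — which is exactly the vocabulary-control challenge the quotes describe.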

I agree with Jonathan’s points, but I still think that moving to Linked Data is a critical first step toward better interoperability – see my blog post Linked Data for better image search on the Web.

Regarding the limitations of RDF triples, see Why I prefer Topic Maps to RDF.

(Via Tracy Wolfe.)