Linked Open Data: why Open Data alone is not enough, not even in Italy.

The value of an “open” license is that data released under it can be shared and reused without restriction.

To engage the developer community, open licensing is the first step: without it, everything else is a house of cards.

But an “open” license has another value as well: to build data mashups, or to link data held in different databases such as Europeana or DBpedia, the licensing schemes must be compatible. Otherwise you run into data sets with restrictive terms of use and end up, in return, with an incomplete or ineffective data set.

A good example of Linked Data is LinkedGeoData: spatial data is crucial there, interconnecting geographic resources and providing browsing and authoring facilities.

Linked Open Data: cui prodest? (who benefits?)

Let’s take it in order: why Open Data alone is not enough.

The web of documents becomes the web of data: data describes “things” that have “properties” to which certain “values” correspond.
Imagine a table: the rows are the “things”, each column is a “property”, and each intersection holds the value of that property for that thing.
In short, we tend to think of data this way: “thing”, “property”, “value”.
Every “thing” can have more than one property, and things can be related to one another. Graphically, picture a graph: the nodes are the things and the edges are the relationships between them.
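To make the model concrete, here is a minimal sketch in Python using the rdflib library. The example school and the ex: vocabulary are invented for illustration (in practice you would reuse shared vocabularies such as FOAF or Dublin Core); each triple is one cell of the imaginary table, in the form thing, property, value.

```python
from rdflib import Graph, Literal, Namespace, URIRef

# A hypothetical namespace, used only for this illustration.
EX = Namespace("http://example.org/")

g = Graph()
thing = URIRef("http://example.org/school/42")  # the "thing" (a row of the table)

# Each triple is (thing, property, value): one cell of the table.
g.add((thing, EX.name, Literal("Scuola Primaria Manzoni")))
g.add((thing, EX.city, Literal("Milano")))
# A property can also point to another thing, producing an edge of the graph.
g.add((thing, EX.locatedIn, URIRef("http://dbpedia.org/resource/Milan")))

print(g.serialize(format="turtle"))
```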

The crucial issue is identifying things globally and unambiguously, from a database point of view. The key to Linked Data is URIs, which provide exactly that identification. URIs identify the things being described, rather than documents about those things, and if two people create data using the same URI they are describing the same thing, which makes it easy to merge data from distinct sources. At the same time there is a clear distinction between resources and representations of those resources: the same URI may return different representations of the resource, such as HTML, XML, or JSON.
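As a small illustration of that last point, the sketch below asks the same DBpedia URI for two different representations via HTTP content negotiation. The exact formats and redirects depend on how the server is configured, so treat this as an assumption about DBpedia’s current setup rather than a guarantee:

```python
import requests

uri = "http://dbpedia.org/resource/Milan"  # one URI, several representations

# Ask for an RDF representation of the resource.
rdf_resp = requests.get(uri, headers={"Accept": "application/rdf+xml"})
print(rdf_resp.headers.get("Content-Type"))

# Ask for an HTML representation of the same resource.
html_resp = requests.get(uri, headers={"Accept": "text/html"})
print(html_resp.headers.get("Content-Type"))
```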
So if we are going to publish data on the web, we need a standard way to express it, so that a client receiving the data can figure out what is a thing, what is a property, what is a value and, since this is the web, what is a link. This is the key requirement, and it is exactly what RDF gives us. Data expressed in RDF can use URIs from different websites, and if two data sets use the same URI it is easy to tell when they are talking about the same thing: for example, information published by a school can be gathered together with statistical survey data published elsewhere, provided both follow the standard, of course. And the great thing about the RDF model (which uses URIs to identify properties as well) is that such data sets can be combined automatically, because the standard tells you where to look for the information you need.
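A minimal sketch of that automatic combination: two RDF documents published independently (inlined here; the URIs and properties are invented) both name the school with the same URI, so their triples merge into a single graph with no extra work.

```python
from rdflib import Graph

# Document published by the school (hypothetical vocabulary).
school_data = """
    @prefix ex: <http://example.org/> .
    <http://example.org/school/42> ex:name "Scuola Primaria Manzoni" .
"""
# Document published by a statistics office, using the same school URI.
survey_data = """
    @prefix stat: <http://stats.example.org/> .
    <http://example.org/school/42> stat:pupils 180 .
"""

merged = Graph()
merged.parse(data=school_data, format="turtle")
merged.parse(data=survey_data, format="turtle")

# The merged graph now answers questions neither source could alone.
for s, p, o in merged:
    print(s, p, o)
```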

Using HTTP URIs makes it easy to retrieve a document from the web, which allows programmatic, on-demand access to information: developers need not download huge databases when they are interested in only a small part of the data. Of course, open questions remain. How do we easily create structured, reusable data from Excel files or (worse) from PDFs? How do we handle changes over time, and record the provenance of the information we publish? How do we represent statistical information, or localization information? These are things you learn by doing!
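Returning to the on-demand point above, here is a sketch of dereferencing a single resource instead of downloading a whole dump; it assumes the server (DBpedia, in this case) serves an RDF representation for that URI:

```python
from rdflib import Graph

# Fetch just one resource on demand, rather than the full DBpedia dump.
g = Graph()
g.parse("http://dbpedia.org/resource/Milan")  # rdflib negotiates an RDF format

print(len(g), "triples about this one resource")
```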

Getting started with Linked Data is complicated, for social, cultural, and technological reasons alike. Nothing will happen overnight, but little by little network effects will appear: more shared URIs and more shared vocabularies make it easier to adopt Linked Data patterns, offering more benefits for everyone.

Once the data has been modeled, you have to query it, and this is done with a standard query language: SPARQL.
In practice, what is needed is a way to generate larger data sets by combining granular linked data into lists and tables, and this is essentially what SPARQL does.
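A small sketch of a SPARQL query run locally with rdflib, reusing the invented ex: vocabulary from the earlier example:

```python
from rdflib import Graph

# Build a tiny graph inline (same hypothetical vocabulary as before).
g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .
    <http://example.org/school/42> ex:name "Scuola Primaria Manzoni" ;
                                   ex:city "Milano" .
""", format="turtle")

# SPARQL: select every thing that has a name, as a flat list of results.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?thing ?name WHERE { ?thing ex:name ?name }
""")
for row in results:
    print(row.thing, row.name)
```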

Therefore, to publish Linked Data it is necessary to:
1) understand the principles (use the RDF data model with RDF links, i.e. typed links between two resources, to connect data about the same things);
2) understand the data (using shared vocabularies such as FOAF, SIOC, Dublin Core, Geo, SKOS, Review);
3) choose URIs (HTTP URIs) for the things expressed in the data (things such as people, places, events, books, films, concepts, photos, comments, reviews);
4) link to other data sets (with RDF links; see the sketch after this list).
In summary: RDF is the format for Linked Data; RDF uses URIs to name things; when a URI is dereferenced, it returns an RDF description of the thing it names, and RDF likewise describes the relationships between things. Finally, the zenith is reached by linking different data sets together.
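And a hedged sketch of that final step: typed RDF links from an invented local URI to the same thing in external data sets (the GeoNames URI is illustrative):

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import OWL, RDFS

# Hypothetical local URI for a city, linked out to external data sets.
EX = Namespace("http://example.org/")
g = Graph()
city = EX.Milano

# owl:sameAs is the typed link asserting that two URIs name one thing.
g.add((city, OWL.sameAs, URIRef("http://dbpedia.org/resource/Milan")))
# rdfs:seeAlso points to further data (illustrative GeoNames identifier).
g.add((city, RDFS.seeAlso, URIRef("http://sws.geonames.org/3173435/")))

print(g.serialize(format="turtle"))
```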

In spite of the problems and issues that might be raised about developers’ difficulties or scarce resources, I believe that Linked Open Data is the best approach available for publishing data in an extremely varied and distributed environment, gradually and sustainably.

Why? Because Linked Open Data means publishing data on the web while working with the web.

 
