We are witnessing an exponential increase in open data published for reuse by third parties. The major problem with these data is that they come in very different formats and structures, as they are issued by entirely different institutions. Modern e-learning methods could be improved using these open data, but their heterogeneous structures inhibit the development of such e-learning applications. This paper describes LODRo, a platform containing standardized public data on existing museums and archaeological research centers in Romania. These data are published by the Romanian National Heritage Institute on the Romanian Open Data portal in CSV format. Standardizing their format involves transforming the existing open data into RDF triples and attaching additional properties, such as geographical coordinates, obtained from online web services. Furthermore, the existing information is linked with other online resources available in the Linked Open Data Cloud. After transforming these Romanian national heritage open data, we obtained 1014 resources comprising 5836 RDF triples, each resource representing a Romanian museum, and 4290 resources comprising 39458 RDF triples, each resource representing a Romanian archaeological research center. Using the DBpedia and GeoNames datasets, we also found links to all towns and counties that host Romanian cultural heritage and to 164 of the 1014 Romanian museums, which is the total number of Romanian museums mentioned on DBpedia. In LODRo, all these RDF triples can be queried through a SPARQL endpoint. Using the enhanced data published by the LODRo platform, developers can build mobile learning applications that help museum visitors and archaeological researchers discover additional information about existing sites.
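To illustrate how data published by such a platform could be consumed, the following is a minimal Python sketch that queries a SPARQL endpoint for museum resources and their geographical coordinates, using the SPARQLWrapper library. The endpoint URL is a placeholder, and the rdfs:label and W3C WGS84 geo properties are assumptions about the vocabulary, since the abstract does not specify the actual LODRo schema.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Placeholder endpoint address; the real LODRo endpoint is not given here.
    sparql = SPARQLWrapper("http://example.org/lodro/sparql")

    # Assumed vocabulary: rdfs:label for resource names, and the W3C WGS84
    # geo properties for the coordinates attached during standardization.
    sparql.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX geo:  <http://www.w3.org/2003/01/geo/wgs84_pos#>
        SELECT ?museum ?label ?lat ?long WHERE {
            ?museum rdfs:label ?label ;
                    geo:lat ?lat ;
                    geo:long ?long .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["label"]["value"], row["lat"]["value"], row["long"]["value"])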
The standardization of the Semantic Web and its growing adoption in professional communities, both governmental and non-governmental, have naturally accelerated the growth of the volume of data published in the virtual space, from 422 published datasets in 2011 to 9,960 by 2019, totaling 192,230,648 triples from areas such as medicine, education, art, history, technology, and public administration. This growth in semantic datasets published in the virtual space leads to a new challenge: ensuring data quality. A first step in this direction was made by Tim Berners-Lee in 2010, when he defined a set of criteria that data scientists are encouraged to use to ensure the highest quality level of datasets. However, one important aspect was not mentioned: data accuracy, a feature not strictly specific to semantic data but applicable to any type of data representation. The paper starts with a brief presentation of the most important metrics used to determine the level of data quality, along with a brief introduction to the most widely used string similarity algorithms. After that, the paper presents a new feature for an existing open data integration tool, called Karma, that allows users such as data analysts and scientists to improve their time management by reducing the time needed to clean their data. This feature has been implemented as a string suggestion for miswritten strings using the presented string similarity metrics, while preserving both the framework's design and its workflow.
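As a rough illustration of the suggestion mechanism described above (Karma itself is a Java framework, and its actual implementation is not reproduced here), the following Python sketch uses the Levenshtein edit distance, one common string similarity metric, to propose a replacement for a miswritten string drawn from a known vocabulary. The function names and the distance threshold are illustrative assumptions.

    # Minimal sketch: edit-distance-based suggestion for miswritten strings.
    # Names and threshold are illustrative, not Karma's actual code.

    def levenshtein(a, b):
        # Classic dynamic-programming edit distance between two strings.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def suggest(token, vocabulary, max_distance=2):
        # Return the closest known string, or None if nothing is close enough.
        best = min(vocabulary, key=lambda w: levenshtein(token, w))
        return best if levenshtein(token, best) <= max_distance else None

    print(suggest("Bucuresty", ["Bucuresti", "Cluj-Napoca", "Iasi"]))  # -> Bucuresti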