Abstract. The idea of Linked Data is to aggregate, harmonize, integrate, enrich, and publish data for re-use on the Web in a cost-efficient way using Semantic Web technologies. We address two major hindrances to re-using Linked Data: it is often difficult for a re-user to 1) understand the characteristics of the dataset and 2) evaluate the quality of the data for the intended purpose. This paper introduces the "Linked Data Finland" platform LDF.fi, which addresses these issues. We extend the well-known 5-star model of Tim Berners-Lee (http://www.w3.org/DesignIssues/LinkedData.html) with a sixth star for providing the dataset with a schema that explains the dataset, and a seventh star for validating the data against the schema. LDF.fi also automates data publishing and provides data curation tools. The first prototype of the platform is available on the web as a service, hosting tens of datasets and supporting several applications. LDF.fi contributes to the current state of the art of Linked Data publishing [2] as follows: 1) We propose extending the 5-star model into a 7-star model, with the goal of encouraging data publishers to provide their data with explicit metadata schemas and to validate their data for better quality. 2) LDF.fi automates the data publishing process so that not only a SPARQL endpoint but also a rich set of additional data services are generated automatically based on the metadata about the dataset and its graphs. 3) LDF.fi
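To make the sixth and seventh stars concrete: once a dataset is published with an explicit schema, a re-user can compare the classes actually used in the data against those declared in the schema. The SPARQL sketch below is a generic first step of such an inspection; the endpoint it would be run against is not specified here, and the query assumes nothing beyond standard RDF typing.

```sparql
# List the classes used in a dataset and count their instances.
# Classes that appear in the data but not in the published schema
# signal a validation problem (the seventh star).
SELECT ?class (COUNT(?s) AS ?instances)
WHERE {
  ?s a ?class .
}
GROUP BY ?class
ORDER BY DESC(?instances)
```

Running such a query against a dataset's SPARQL endpoint and diffing the result against the schema's declared classes gives a simple, automatable quality check of the kind the 7-star model encourages.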
Abstract. University data is typically stored in separate data silos even though the data is implicitly richly interrelated. Such data has a large and diverse user base, including faculty members, students, industrial partners, alumni, collaborating universities, and the media. In this paper, we demonstrate two tools for understanding and using the contents of linked university data. The first tool, Visualization Playground (VISU), supports querying and visualizing the data, for example for illustrating emerging trends in universities (e.g., about publications) and for comparing differences between them. The second tool, Vocabulary Visualizer (V²), demonstrates the usage of vocabularies in the Linked University Data Cloud. It reveals what kinds of data different universities have published and what terms are used to describe the contents. Such analysis is a basis for facilitating the design of Linked Data applications across university data boundaries.

Towards Linked University Data

Data production and knowledge publication in universities are traditionally based on separate data silos for different data types and domains. Such silos include data such as publication information, course and event descriptions, educational materials, web pages, and news feeds. University information systems have traditionally been implemented without considering whether and how the data stored in them could be opened. Another big challenge with separate data silos is the wide diversity of data models and practices in use. Linked Open Data (LOD) principles and technologies enable universities to publish their legacy data with shared open standards, and offer a variety of approaches for integrating university contents with the existing Web of Data [1].
The promise is that the use of LOD technologies helps academic organizations to be more transparent, comparable, and even more open to new ideas. Linked Universities (http://linkeduniversities.org) is a collaboration alliance and application scenario where open datasets from universities are published and linked together using the 5-star methodology. Several universities have already published SPARQL endpoints
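As an illustration of the kind of cross-repository trend query that a tool like VISU visualizes, the following SPARQL sketch counts publications per year. The vocabulary terms used here (bibo:Article, dcterms:issued) are common choices in bibliographic Linked Data, assumed for illustration rather than taken from any specific university endpoint.

```sparql
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX bibo:    <http://purl.org/ontology/bibo/>

# Count publications per year of issue -- the kind of trend data
# that can be plotted to compare universities over time.
SELECT ?year (COUNT(?pub) AS ?publications)
WHERE {
  ?pub a bibo:Article ;
       dcterms:issued ?year .
}
GROUP BY ?year
ORDER BY ?year
```

Because different universities may describe the same kind of content with different terms, an analysis of actual vocabulary usage (the task of V²) is a prerequisite for writing such queries so that they work across endpoint boundaries.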
Abstract. For achieving semantic interoperability, messages or documents exchanged electronically between systems are commonly modelled using standard specifications, such as the UN/CEFACT CCTS (Core Components Technical Specification). However, additional requirements, such as the need for layout markup or common metadata for certain archiving scenarios, may apply to the documents. Furthermore, the management of the resulting artefacts, i.e., core components, XML schemas, and the related infrastructure, can be cumbersome. This paper investigates the use of W3C XHTML+RDFa (Extensible Hypertext Markup Language with Resource Description Framework Attributes) for representing both the layout and the semantics of documents modelled according to CCTS. The paper focuses on the validation of XHTML+RDFa documents against a core components library represented as an ontology. In addition, the paper illustrates and validates this demand-driven solution in the scope of the Finnish National Project for IT in Social Services.
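A minimal sketch of the general approach: a single XHTML fragment carries both the human-readable layout and RDFa annotations whose terms point to a core-component ontology. The vocabulary URI and term names below are hypothetical placeholders for illustration, not the actual CCTS ontology identifiers used in the paper.

```xml
<!-- Hypothetical vocabulary URI and terms; a real deployment would
     reference the core components ontology instead. -->
<div xmlns="http://www.w3.org/1999/xhtml"
     prefix="cc: http://example.org/core-components#
             xsd: http://www.w3.org/2001/XMLSchema#"
     typeof="cc:Document">
  <h1 property="cc:DocumentName">Client Report</h1>
  <p>Issued on
    <span property="cc:IssueDate" datatype="xsd:date">2013-05-21</span>
  </p>
</div>
```

An RDFa processor can extract the embedded triples from such a document, after which they can be checked against the ontology, for instance verifying that every property used is declared and that literal datatypes match, while the same file still renders as an ordinary styled web page.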