SWI-Prolog is neither a commercial Prolog system nor a purely academic enterprise, but increasingly a community project. The core system has been shaped to its current form while being used as a tool for building research prototypes, primarily for knowledge-intensive and interactive systems. Community contributions have added several interfaces and the constraint logic programming (CLP) libraries. Commercial involvement has created the initial garbage collector and added several interfaces and two development tools: PlDoc (a literate programming documentation system) and PlUnit (a unit testing environment). In this article we present SWI-Prolog as an integrating tool, supporting a wide range of ideas developed in the Prolog community and acting as glue between foreign resources. This article itself is the glue between technical articles on SWI-Prolog, providing context and experience in applying them over a longer period.
It is widely accepted that proper data publishing is difficult. The majority of Linked Open Data (LOD) does not meet even a core set of data publishing guidelines. Moreover, datasets that are clean at creation can get stains over time. As a result, the LOD cloud now contains a high level of dirty data that is difficult for humans to clean and for machines to process. Existing solutions for cleaning data (standards, guidelines, tools) are targeted towards human data creators, who can (and do) choose not to use them. This paper presents the LOD Laundromat, which removes stains from data without any human intervention. This fully automated approach is able to make very large amounts of LOD more easily available for further processing right now. LOD Laundromat is not a new dataset, but rather a uniform point of entry to a collection of cleaned siblings of existing datasets. It provides researchers and application developers a wealth of data that is guaranteed to conform to a specified set of best practices, thereby greatly improving the chance of data actually being (re)used.
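The kind of fully automated cleaning the abstract describes can be sketched in miniature: read syntactically messy triple data, drop malformed statements, deduplicate, and re-serialize canonically. The following Python sketch is illustrative only, not the LOD Laundromat implementation; the simplified N-Triples pattern and the example data are assumptions.

```python
import re

# Very simplified N-Triples pattern: <iri> <iri> (<iri> | "literal") .
# Real N-Triples grammar (datatypes, language tags, blank nodes) is richer.
TRIPLE = re.compile(
    r'^\s*<([^>]+)>\s+<([^>]+)>\s+(<[^>]+>|"[^"]*")\s*\.\s*$'
)

def clean_ntriples(lines):
    """Keep only well-formed triples, deduplicate, and sort them,
    producing a canonical 'cleaned sibling' of the input data."""
    triples = set()
    for line in lines:
        m = TRIPLE.match(line)
        if m:
            s, p, o = m.groups()
            triples.add(f"<{s}> <{p}> {o} .")
    return sorted(triples)

dirty = [
    '<http://ex.org/a> <http://ex.org/p> "ok" .',
    'this line is not a triple at all',          # stain: dropped
    '<http://ex.org/a> <http://ex.org/p> "ok" .',  # duplicate: merged
]
print(clean_ntriples(dirty))
```

The key design point mirrored here is that cleaning is deterministic and needs no human in the loop: anything that does not parse is discarded rather than flagged for manual repair.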
For magazine editors and others, finding suitable photographs for a particular purpose is increasingly problematic. Advances in storage media along with the Web enable us to store and distribute photographic images worldwide. While large databases containing photographic images exist, the tools and methods for searching and selecting an image are limited. Typically, the databases have a semistructured indexing scheme that allows a keyword search but not much more to help the user find the desired photograph.

Currently, researchers promote the use of explicit background knowledge as a way out of the search problems encountered on the Internet and in multimedia databases. The Semantic Web [1] and emerging standards, such as the Resource Description Framework (RDF) [2], make it possible to create a syntactic format specifying background knowledge for information resources.

In this article, we explore the use of background knowledge contained in ontologies to index and search collections of photographs. We developed an annotation strategy and tool to help formulate annotations and search for specific images. We also compare our approach's performance with two existing Web-based search engine options. The article concludes with observations regarding the standards and tools we used in this annotation study.

Our approach

Companies offering photographic images for sale often provide CDs containing samples of the images in reduced JPEG format. Magazine editors and others typically search these CDs to find an illustration for an article. To simulate this process and create our test case, we obtained three CDs with collections of animal photo samples. The CDs contained about 3,000 photos, but we used a subset of approximately 100 photos of apes for our annotation study. Figure 1 shows the general architecture used in our annotation study. We specified all ontologies in RDF Schema (RDFS) [2] using the Protégé-2000 [3] ontology editor (version 1.4).
This editor supports the construction of ontologies in a frame-like fashion with classes and slots. Protégé can save the ontology definitions in RDFS. The SWI-Prolog RDF parser [4] reads the resulting RDFS file into the annotation tool, which subsequently generates an annotation interface based on the RDFS specification. The tool supports reading in photographs, creating annotations, and storing annotations in an RDF file. A query tool with a similar interface can read RDF files and search for suitable photographs in terms of the ontology.

The architecture shown in Figure 1 is in the same spirit as the one Yves Lafon and Bert Bos described [5]. However, we place more emphasis on the nature of the ontologies, the subject matter description, and the explicit link to a domain ontology.

Developing ontologies

To define semantic annotations for ape photographs, we needed at least two groups of definitions:

• Structure of a photo annotation. We defined a photo annotation ontology that specifies an annotation's structure independent of the particular subject matter domain (in our case, apes). This ontology ...
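The payoff of searching "in terms of the ontology" rather than by keyword is that a query for a class also matches photos annotated with any of its subclasses. The following Python sketch illustrates that idea only; the class hierarchy, photo annotations, and function names are invented for illustration and are not the actual annotation tool, which is built on the SWI-Prolog RDF infrastructure.

```python
# Toy fragment of an animal ontology: child class -> parent class.
subclass_of = {
    "Chimpanzee": "Ape",
    "Gorilla": "Ape",
    "Ape": "Primate",
}

# Hypothetical annotations: photo -> annotated subject class.
annotations = {
    "photo1.jpg": "Chimpanzee",
    "photo2.jpg": "Gorilla",
    "photo3.jpg": "Elephant",
}

def is_a(cls, query):
    """True if cls equals query or is a (transitive) subclass of it."""
    while cls is not None:
        if cls == query:
            return True
        cls = subclass_of.get(cls)
    return False

def search(query):
    """Return all photos whose annotation falls under the query class."""
    return sorted(p for p, c in annotations.items() if is_a(c, query))

print(search("Ape"))  # matches chimpanzee and gorilla photos, not the elephant
```

A plain keyword search for "Ape" would find neither photo here, since the literal annotations are "Chimpanzee" and "Gorilla"; the subclass closure supplies the missing background knowledge.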
Abstract. In this article we describe a Semantic Web application for semantic annotation and search in large virtual collections of cultural-heritage objects, indexed with multiple vocabularies. During the annotation phase we harvest, enrich and align collection metadata and vocabularies. The semantic-search facilities support keyword-based queries of the graph (currently 20M triples), resulting in semantically grouped result clusters, all representing potential semantic matches of the original query. We show two sample search scenarios. The annotation and search software is open source and is already being used by third parties. All software is based on established Web standards, in particular HTML/XML, CSS, RDF/OWL, SPARQL and JavaScript.
Abstract. Within the cultural heritage field, proprietary metadata and vocabularies are being transformed into public Linked Data. These efforts have mostly been at the level of large-scale aggregators such as Europeana where the original data is abstracted to a common format and schema. Although this approach ensures a level of consistency and interoperability, the richness of the original data is lost in the process. In this paper, we present a transparent and interactive methodology for ingesting, converting and linking cultural heritage metadata into Linked Data. The methodology is designed to maintain the richness and detail of the original metadata. We introduce the XMLRDF conversion tool and describe how it is integrated in the ClioPatria semantic web toolkit. The methodology and the tools have been validated by converting the Amsterdam Museum metadata to a Linked Data version. In this way, the Amsterdam Museum became the first 'small' cultural heritage institution with a node in the Linked Data cloud.
Abstract. This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
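One of the conversion steps such a method involves is mapping a flat thesaurus record (term, synonyms, broader term) onto RDF triples. The Python sketch below illustrates that step under stated assumptions: the property names follow the real SKOS vocabulary, but the record format, the example namespace, and the sample MeSH-style record are invented for illustration and do not reproduce the paper's actual conversion rules.

```python
# Real SKOS namespace; the example namespace below is hypothetical.
SKOS = "http://www.w3.org/2004/02/skos/core#"
EX = "http://example.org/thesaurus/"

def record_to_triples(record):
    """Map one flat thesaurus record to a list of (s, p, o) triples:
    preferred label, alternative labels, and a broader-term link."""
    concept = EX + record["id"]
    triples = [(concept, SKOS + "prefLabel", record["term"])]
    for syn in record.get("synonyms", []):
        triples.append((concept, SKOS + "altLabel", syn))
    if "broader" in record:
        triples.append((concept, SKOS + "broader", EX + record["broader"]))
    return triples

# Invented MeSH-style record for demonstration.
rec = {"id": "D008775", "term": "Methylphenidate",
       "synonyms": ["Ritalin"], "broader": "D002491"}
for triple in record_to_triples(rec):
    print(triple)
```

The point the sketch makes is that the syntactic step is mechanical once the semantic decisions (which source field maps to which property, and whether hierarchy means `broader` or subclassing) have been taken; those decisions are what the guidelines in each step support.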