Extensional higher-order logic programming has been introduced as a generalization of classical logic programming. An important characteristic of this paradigm is that it preserves all the well-known properties of traditional logic programming. In this paper we consider the semantics of negation in the context of the new paradigm. Using some recent results from non-monotonic fixed-point theory, we demonstrate that every higher-order logic program with negation has a unique minimum infinite-valued model. In this way we obtain the first purely model-theoretic semantics for negation in extensional higher-order logic programming. Using our approach, we resolve an old paradox that was introduced by W. W. Wadge in order to demonstrate the semantic difficulties of higher-order logic programming.
We propose a purely extensional semantics for higher-order logic programming. In this semantics program predicates denote sets of ordered tuples, and two predicates are equal iff they are equal as sets. Moreover, every program has a unique minimum Herbrand model which is the greatest lower bound of all Herbrand models of the program and the least fixed-point of an immediate consequence operator. We also propose an SLD-resolution proof system which is proven sound and complete with respect to the minimum Herbrand model semantics. In other words, we provide a purely extensional theoretical framework for higher-order logic programming which generalizes the familiar theory of classical (first-order) logic programming.
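To illustrate the extensional reading described above (a sketch of our own, not an example from the paper): under extensional semantics, a higher-order predicate applied to two predicates that denote the same set must yield the same result.

```prolog
% Hypothetical Prolog-like higher-order syntax.
% p and q are extensionally equal: both denote the set {a, b}.
p(a).  p(b).
q(a).  q(b).

% covers/1 takes a predicate argument R. Under extensional
% semantics, covers(p) and covers(q) must receive the same
% truth value, since p and q are equal as sets.
covers(R) :- R(a), R(b).
```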
In this paper we present SPREFQL, an extension of the SPARQL language that allows appending a "PREFER" clause expressing 'soft' preferences over the query results obtained by the main body of the query. The extension does not add expressivity, and any SPREFQL query can be transformed to an equivalent standard SPARQL query. However, clearly separating preferences from the 'hard' patterns and filters in the "WHERE" clause yields queries in which the intention of the client is expressed more cleanly, an advantage for both human readability and machine optimization. In the paper we formally define the syntax and the semantics of the extension, and we provide empirical evidence that optimizations specific to SPREFQL improve run-time efficiency compared to the optimizations usually applied to the equivalent standard SPARQL query.
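A hypothetical sketch of what such a query might look like (the clause syntax here is assumed from the abstract's description of an appended PREFER clause; the paper's actual grammar may differ):

```sparql
# Hard constraints stay in WHERE; the soft preference is appended.
SELECT ?hotel ?price
WHERE {
  ?hotel a :Hotel ;
         :price ?price .
}
# Assumed PREFER syntax: prefer one solution to another
# when it binds a lower price, all else being equal.
PREFER ?h1 TO ?h2
IF ( ?price1 < ?price2 )
```

The point of the separation is that the WHERE block filters results out, while the PREFER clause only ranks the results that survive, so an optimizer can treat the two parts differently.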
Logic programs with ordered disjunction (LPODs) extend classical logic programs with the capability of expressing alternatives with decreasing degrees of preference in the heads of program rules. Despite the fact that the operational meaning of ordered disjunction is clear, there exists an important open issue regarding its semantics. In particular, there does not exist a purely model-theoretic approach for determining the most preferred models of an LPOD. At present, the selection of the most preferred models is performed using a technique that is not based exclusively on the models of the program and in certain cases produces counterintuitive results. We provide a novel, model-theoretic semantics for LPODs, which uses an additional truth value in order to identify the most preferred models of a program. We demonstrate that the proposed approach overcomes the shortcomings of the traditional semantics of LPODs. Moreover, the new approach can be used to define the semantics of a natural class of logic programs that can have both ordered and classical disjunctions in the heads of clauses. This makes it possible to write programs that express not only strict levels of preference but also alternatives that are equally preferred.
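A minimal illustration of ordered disjunction (our own example, written in the commonly used "x" notation for LPODs):

```prolog
% "If tired, prefer coffee; if coffee is unavailable, settle for tea."
coffee x tea :- tired.
tired.
```

This program has two answer sets, {tired, coffee} and {tired, tea}. The first satisfies the rule to degree 1 (the most preferred alternative) and the second only to degree 2, so {tired, coffee} is the more preferred model. The open issue the abstract refers to is how to make this preference-based selection purely in terms of the models of the program.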
Abstract: We describe an Inductive Logic Programming (ILP) approach to learning descriptions in Description Logics (DL) under uncertainty. The approach is based on implementing many-valued DL proofs as propositionalizations of the elementary DL constructs and then providing this implementation as background predicates for ILP. The proposed methodology is tested on many-valued variations of eastbound-trains and Iris, two well-known and well-studied Machine Learning datasets.

I. INTRODUCTION

Description logics (DL) are a family of logics that has found many applications in conceptual and semantic modelling, and is one of the key technologies behind semantic web applications. Fuzzy and, more generally, many-valued extensions of DL semantics have given a significant boost to their importance in both related fields: semantic conceptualization gains a means of expressing the uncertainty that is inherent in real-world modelling problems, and uncertainty inference gains access to the vast conceptualization effort that has been carried out in the context of the semantic web. Despite the rapid progress in inference methods for many-valued DL, however, there has been very limited success in applying machine learning methodologies to this family of logics, and especially to its more expressive members (such as those covering OWL and OWL 2) that are routinely used in web intelligence applications. In this paper we first introduce the machine learning discipline of Inductive Logic Programming and then discuss previous work on applying ILP to learning DL (Section II). These approaches tend to adapt ILP algorithms so that they cover DL, but are restricted to the less expressive members of the DL family. We instead investigate a novel approach whereby we re-formulate DL inference within the ILP paradigm, effectively mapping our problem to an equivalent problem within the domain of application of ILP (Section III).
We evaluate our approach by applying an unadapted ILP system to a DL learning task under this mapping (Section IV), and close the paper by drawing conclusions and outlining future research directions (Section V).
We define a novel, extensional, three-valued semantics for higher-order logic programs with negation. The new semantics is based on interpreting the types of the source language as three-valued Fitting-monotonic functions at all levels of the type hierarchy. We prove that there exists a bijection between such Fitting-monotonic functions and pairs of two-valued-result functions where the first member of the pair is monotone-antimonotone and the second member is antimonotone-monotone. By deriving an extension of consistent approximation fixpoint theory (Denecker et al. 2004) and utilizing the above bijection, we define an iterative procedure that produces, for any given higher-order logic program, a distinguished extensional model. We demonstrate that this model is actually a minimal one. Moreover, we prove that our construction generalizes the familiar well-founded semantics for classical logic programs, making our proposal an appealing formulation for capturing the well-founded semantics of higher-order logic programs.
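A small illustrative program of the kind the abstract is concerned with (our own example, not taken from the paper), where a higher-order predicate applies negation to its predicate argument:

```prolog
p(a).

% misses_b/1 takes a predicate R and succeeds when R does
% not hold of b.
misses_b(R) :- not R(b).
```

Under a naive reading, the meaning of misses_b depends on negative information about its argument, which is exactly where two-valued extensional approaches run into trouble. Under the three-valued semantics sketched in the abstract, misses_b denotes a Fitting-monotonic function, and a query such as misses_b(p) receives a well-founded-style truth value (intuitively true here, since p(b) is not derivable).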
The DARE platform has been designed to help research developers deliver user-facing applications and solutions over diverse underlying e-infrastructures, data and computational contexts. The platform is Cloud-ready and relies on the exposure of APIs, which are suitable for raising the abstraction level and hiding complexity. It implements the cataloguing and execution of fine-grained, Python-based dispel4py workflows as services. Reflection is achieved via a logical knowledge base comprising multiple internal catalogues, registries and semantics, while the platform supports persistent and pervasive data provenance. This paper presents design and implementation aspects of the DARE platform and provides directions for future development.