Abstract: ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: W…
“…For instance, Lily [33] uses four types of patterns, e.g., redundant mapping, imprecise mapping, inconsistent mapping and abnormal mapping. ASMOV [34] uses five types of patterns to check semantics, e.g., multiple-entity correspondences, crisscross correspondences, disjoint-subsumption contradiction, subsumption and equivalence incompleteness, domain and range incompleteness. The pattern disjoint-subsumption contradiction used by ASMOV corresponds to the inconsistent mapping pattern used by Lily.…”
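The disjoint-subsumption contradiction mentioned in the quote can be illustrated with a minimal sketch. The data structures below (sets of correspondence, subsumption, and disjointness pairs) are hypothetical simplifications for illustration, not ASMOV's actual internal representation: if concept a1 subsumes a2 in one ontology, but their respective counterparts b1 and b2 are declared disjoint in the other, the two correspondences cannot both hold.

```python
def has_disjoint_subsumption_contradiction(alignment, subsumes_a, disjoint_b):
    """Detect a disjoint-subsumption contradiction in an alignment.

    alignment:  set of (a, b) correspondences between ontologies A and B
    subsumes_a: set of (super, sub) pairs holding in ontology A
    disjoint_b: set of frozensets {x, y} declared disjoint in ontology B
    """
    for (a1, b1) in alignment:
        for (a2, b2) in alignment:
            # a1 subsumes a2 in A, yet b1 and b2 are disjoint in B
            if (a1, a2) in subsumes_a and frozenset((b1, b2)) in disjoint_b:
                return True
    return False

# Student ⊑ Person in A, but Human and Robot are disjoint in B:
# mapping Person↦Human together with Student↦Robot is inconsistent.
alignment  = {("Person", "Human"), ("Student", "Robot")}
subsumes_a = {("Person", "Student")}
disjoint_b = {frozenset(("Human", "Robot"))}
print(has_disjoint_subsumption_contradiction(alignment, subsumes_a, disjoint_b))  # → True
```

A verifier like ASMOV's would reject (or repair) any candidate alignment for which such a check fires, rather than keep both correspondences.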
Abstract-Alignment overcomes divergence in the specification of the semantics of vocabularies by different but overlapping ontologies. Therefore, it enhances semantic interoperability for many web-based applications. However, ontology change, driven by new application requirements or a new perception of domain knowledge, can lead to undesirable knowledge such as inconsistency, and therefore to a useless alignment. Ontologies and alignments are encoded in knowledge bases, allowing applications to store only some explicit knowledge while deriving implicit knowledge by applying reasoning services on these knowledge bases. This underlying representation of ontologies and alignments leads us to follow base revision theory to deal with alignment revision under ontology change. For that purpose, we adapt the kernel contraction framework to design rational operators and to formulate the set of postulates that characterizes each class of these operators. We demonstrate the connection between each class of operators and the set of postulates that characterizes it. Finally, we present algorithms to compute alignment kernels and incision functions. Kernels are the sets of correspondences responsible for undesirable knowledge under the alignment semantics. Incision functions determine the sets of correspondences to eliminate in order to restore alignment consistency or to realize a successful contraction.
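The kernel-contraction scheme described in this abstract can be sketched concretely. In the sketch below, kernels are the minimal subsets of the alignment that entail an undesirable consequence phi, and an incision function selects at least one correspondence from each kernel to remove. The `entails` callback stands in for a real reasoner, and the brute-force subset enumeration is exponential; this is an illustration of the framework's logic, not the paper's actual algorithms.

```python
from itertools import combinations

def kernels(correspondences, entails, phi):
    """All minimal subsets of `correspondences` that entail phi.

    Enumerates subsets by increasing size, so any subset that contains
    an already-found kernel is non-minimal and is skipped.
    """
    found = []
    for r in range(1, len(correspondences) + 1):
        for subset in combinations(sorted(correspondences), r):
            s = set(subset)
            if entails(s, phi) and not any(k <= s for k in found):
                found.append(s)
    return found

def incision(ks):
    """A simple incision function: greedily hit every kernel,
    returning a set of correspondences whose removal disables
    every derivation of the undesirable consequence."""
    cut = set()
    for k in ks:
        if not (k & cut):   # this kernel is not yet hit
            cut.add(min(k))
    return cut

# Toy example: phi is entailed whenever {c1, c2} are both kept, or c3 is kept.
corr = {"c1", "c2", "c3"}
entails = lambda s, phi: {"c1", "c2"} <= s or "c3" in s
ks = kernels(corr, entails, "phi")      # kernels: {c3} and {c1, c2}
revised = corr - incision(ks)
print(entails(revised, "phi"))           # → False: consistency restored
```

Different incision functions (e.g., preferring to cut low-confidence correspondences) yield different contraction operators, which is exactly what the postulates in the paper are meant to characterize.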
“…There are some ontology alignment systems that do semantic verification and disallow mappings that lead to unsatisfiable concepts (e.g., [10,12]). Further, adding missing is-a relations to ontologies was a step in the alignment process in [17].…”
“…Hu et al. [7] build a kernel by adopting the formal semantics of the Semantic Web that is then extended iteratively in terms of discriminative property-value pairs in the descriptions of URIs. Algorithms that combine formal semantics of the Semantic Web and string matching techniques also include Zhishi.me [11], LN2R [15], CODI [12] and ASMOV [8]. These systems can be applied to datasets in different domains without human-provided matching rules, such as People, Location, Organization and Restaurant.…”
Abstract. Due to the decentralized nature of the Semantic Web, the same real-world entity may be described in various data sources and assigned syntactically distinct identifiers. In order to facilitate data utilization in the Semantic Web, without compromising the freedom of people to publish their data, one critical problem is to appropriately interlink such heterogeneous data. This interlinking process can also be referred to as Entity Coreference, i.e., finding which identifiers refer to the same real-world entity. This proposal will investigate algorithms that solve the entity coreference problem in the Semantic Web from several angles. The essence of entity coreference is to compute the similarity of instance pairs. Given the diversity of domains of existing datasets, it is important that an entity coreference algorithm be able to achieve good precision and recall across domains represented in various ways. Furthermore, in order to scale to large datasets, an algorithm should be able to intelligently select what information to utilize for comparison and determine whether to compare a pair of instances at all, to reduce the overall complexity. Finally, appropriate evaluation strategies need to be chosen to verify the effectiveness of the algorithms.
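The two ideas in this abstract, pairwise instance similarity and selective comparison, can be sketched together. The sketch below is a hypothetical simplification (Jaccard similarity over property-value pairs, plus a shared-value blocking filter), not the proposal's actual method: the blocking step skips instance pairs that share no values at all, so the expensive similarity computation runs only on plausible candidates.

```python
def jaccard(a, b):
    """Jaccard similarity of two sets of (property, value) pairs."""
    return len(a & b) / len(a | b) if a | b else 0.0

def coreferent_pairs(instances, threshold=0.5):
    """instances: dict mapping identifier -> set of (property, value) pairs.

    A cheap blocking step skips pairs that share no values, reducing
    the number of full pairwise comparisons on large datasets.
    """
    ids = list(instances)
    results = []
    for i, x in enumerate(ids):
        for y in ids[i + 1:]:
            vx = {v for _, v in instances[x]}
            vy = {v for _, v in instances[y]}
            if not (vx & vy):        # blocking: no shared values, skip
                continue
            sim = jaccard(instances[x], instances[y])
            if sim >= threshold:
                results.append((x, y, sim))
    return results

instances = {
    "p1": {("name", "alice"), ("city", "nyc")},
    "p2": {("name", "alice"), ("city", "nyc"), ("job", "dev")},
    "p3": {("name", "bob")},
}
# p1 and p2 share 2 of 3 distinct pairs (similarity 2/3); p3 is blocked early.
print(coreferent_pairs(instances))
```

Real systems replace both pieces with stronger components (string metrics, discriminative property weighting, inverted-index blocking), but the precision/recall versus scalability trade-off the abstract describes lives exactly in these two functions.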