2010
DOI: 10.1007/978-3-642-13486-9_8

Natural Language Interfaces to Ontologies: Combining Syntactic Analysis and Ontology-Based Lookup through the User Interaction

Abstract: With large datasets such as Linked Open Data available, there is a need for more user-friendly interfaces that will bring the advantages of these data closer to casual users. Several recent studies have shown user preference for Natural Language Interfaces (NLIs) in comparison to others. Although many NLIs to ontologies have been developed, those that have reasonable performance are domain-specific and tend to require customisation for each new domain, which, from a developer's perspective, makes …

Cited by 132 publications (106 citation statements) · References 15 publications
“…In that field we have selected three QA systems: PowerAqua [16], FREyA (Feedback, Refinement and Extended VocabularY Aggregation) [9] and Treo [12], which share some similarities with our approach.…”
Section: Discussion (mentioning)
confidence: 99%
“…PowerAqua takes as input a natural language query and translates it into a set of logical queries, which are then answered by consulting and aggregating information derived from multiple heterogeneous semantic sources [9]. It is divided into three components: the Linguistic Component, the Relation Similarity Service and the Inference Engine.…”
Section: Discussion (mentioning)
confidence: 99%
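The three-stage pipeline described in the statement above (linguistic analysis, relation matching, inference) can be sketched in miniature. This is an illustrative toy, not PowerAqua's actual API: every function name, the parsing heuristic, and the toy ontology are hypothetical assumptions.

```python
# Toy sketch of a PowerAqua-style NL-query pipeline (all names hypothetical).

def linguistic_component(question):
    """Parse a natural language question into query triples."""
    # Toy heuristic: "Who wrote X?" -> (?person, wrote, X)
    prefix = "who wrote "
    if question.lower().startswith(prefix):
        work = question[len(prefix):].rstrip("?")
        return [("?person", "wrote", work)]
    return []

def relation_similarity(query_triples, ontology):
    """Match query triples against ontology triples on relation and object."""
    return [t for t in ontology
            for q in query_triples
            if t[1] == q[1] and t[2] == q[2]]

def inference_engine(matched_triples):
    """Aggregate answers (subjects) from the matched triples."""
    return sorted({s for (s, _, _) in matched_triples})

ontology = [("Tolkien", "wrote", "The Hobbit"),
            ("Tolkien", "wrote", "Silmarillion"),
            ("Orwell", "wrote", "1984")]

triples = linguistic_component("Who wrote The Hobbit?")
answers = inference_engine(relation_similarity(triples, ontology))
print(answers)  # -> ['Tolkien']
```

A real system would replace the string heuristic with full syntactic parsing and the exact-match step with lexical and semantic similarity measures, but the data flow mirrors the three components named in the quote.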
“…But despite growing interest, there is a lack of standardized evaluation benchmarks for evaluating and comparing the quality and performance of ontology-based question answering approaches at large scale [28]. To assess their current strengths and weaknesses, a range of such systems, for example Ginseng [4], NLP-Reduce [23], Querix [21], FREyA [13] and PANTO [41], made use of the independent Mooney datasets and corresponding queries. These are the only shared datasets that have been used to objectively compare different ontology-based question answering systems for a given ontology or dataset.…”
Section: Existing Evaluation Methods and Competitions (mentioning)
confidence: 99%
“…Recall, on the other hand, is defined differently across systems. Damljanovic et al. [13] define recall as the ratio of the number of questions correctly answered by the system to the total number of questions in the dataset, while for Wang et al. [41] recall is the ratio of the number of questions that deliver some output (whether or not that output is valid) to the total number of questions. Such differences, together with discrepancies in the number of queries evaluated, render a direct comparison difficult.…”
Section: Existing Evaluation Methods and Competitions (mentioning)
confidence: 99%
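The two competing recall definitions in the statement above are easy to contrast numerically. The following sketch is illustrative only: the function names and the sample outcome data are hypothetical, not taken from either cited system.

```python
# Contrast of the two recall definitions discussed above (hypothetical names/data).

def recall_correct(outcomes, total_questions):
    """Damljanovic et al. [13]: correctly answered questions / all questions."""
    return sum(1 for o in outcomes if o == "correct") / total_questions

def recall_answered(outcomes, total_questions):
    """Wang et al. [41]: questions producing any output (valid or not) / all questions."""
    return sum(1 for o in outcomes if o in ("correct", "wrong")) / total_questions

# Each question yields "correct", "wrong" (some output, but invalid), or "none".
outcomes = ["correct", "correct", "wrong", "none", "correct"]

print(recall_correct(outcomes, len(outcomes)))   # 3/5 = 0.6
print(recall_answered(outcomes, len(outcomes)))  # 4/5 = 0.8
```

On the same run the two definitions disagree (0.6 vs. 0.8), which is exactly why reported recall figures from different systems cannot be compared directly.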