2007
DOI: 10.1145/1229179.1229180
An exploration of the principles underlying redundancy-based factoid question answering

Abstract: The so-called "redundancy-based" approach to question answering represents a successful strategy for mining answers to factoid questions such as "Who shot Abraham Lincoln?" from the World Wide Web. Through contrastive and ablation experiments with Aranea, a system that has performed well in several TREC QA evaluations, this work examines the underlying assumptions and principles behind redundancy-based techniques. Specifically, we develop two theses: that stable characteristics of data redundancy allow factoid…

Cited by 60 publications (50 citation statements)
References 35 publications
“…A cross-model comparison demonstrates that the U-model statistically significantly outperforms previous redundancy-based models [3], [16] (as described in Sect. 4.5).…”
Section: ); mentioning; confidence: 99%
“…Approaches based on this observation are called redundancy-based models. Some pioneering studies [3], [4], [16], [22]–[24] have investigated redundancy from the Web for the AV in QA.…”
Section: Introduction; mentioning; confidence: 99%
“…Moreover, it is concerned with the extraction of appropriate answers from the text content of highly ranked passages or documents (see, e.g., Lin, 2007). Both problems lie outside the scope of the entity ranking task, since entity occurrences are recognized and tagged beforehand and requests on the answer type are assumed to be given explicitly.…”
Section: Entity Retrieval Tasks; mentioning; confidence: 99%
“…Hence, they rank entities directly by their number of mentions. Others sum up the relevance scores of text fragments that contain string-identical answer candidates (Lin, 2007). Another recent study addresses the issue of answer identity in more detail by incorporating similarity scores between candidate answers in the calculation of the individual answer scores.…”
Section: Ranking Approaches For Entities; mentioning; confidence: 99%
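The two ranking strategies contrasted in the statement above (counting mentions directly vs. summing the relevance scores of text fragments that contain string-identical answer candidates) can be sketched as a toy script. This is an illustrative reconstruction under stated assumptions, not Aranea's actual implementation; the function and variable names are hypothetical:

```python
from collections import defaultdict

def rank_candidates(snippets):
    """Score answer candidates by summing the relevance scores of the
    snippets that mention them (string-identical, case-folded match).

    `snippets` is a list of (text, score, candidates) tuples, where
    `candidates` are answer strings already extracted from that snippet.
    Setting every score to 1.0 recovers plain mention counting.
    """
    totals = defaultdict(float)
    for _text, score, candidates in snippets:
        for cand in set(candidates):       # each candidate votes once per snippet
            totals[cand.lower()] += score  # redundancy accumulates evidence
    return sorted(totals.items(), key=lambda kv: -kv[1])

# Hypothetical retrieval output for "Who shot Abraham Lincoln?"
snippets = [
    ("John Wilkes Booth shot Lincoln at Ford's Theatre.", 0.9, ["John Wilkes Booth"]),
    ("Lincoln was assassinated by John Wilkes Booth.", 0.8, ["John Wilkes Booth"]),
    ("Boston Corbett shot Booth twelve days later.", 0.3, ["Boston Corbett"]),
]
ranking = rank_candidates(snippets)
```

The sketch ignores the answer-identity issue raised at the end of the statement: "John Wilkes Booth" and "J. W. Booth" would accumulate evidence separately, which is exactly the gap the similarity-score approach addresses.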