Proceedings of the 20th International Conference on Computational Linguistics (COLING '04), 2004
DOI: 10.3115/1220355.1220543
Information extraction for question answering

Abstract: We investigate the impact of the precision/recall trade-off of information extraction on the performance of an offline corpus-based question answering (QA) system. One of our findings is that, because of the robust final answer selection mechanism of the QA system, recall is more important. We show that the recall of the extraction component can be improved using syntactic parsing instead of more common surface text patterns, substantially increasing the number of factoid questions answered by the QA system.
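The precision/recall trade-off the abstract refers to can be illustrated with a minimal sketch. The fact triples, gold set, and the two toy extractors below are invented for illustration; they are not data or code from the paper:

```python
def precision_recall(extracted, gold):
    """Return (precision, recall) for a set of extracted facts."""
    true_positives = len(extracted & gold)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical gold-standard facts for three questions.
gold = {("Amsterdam", "capital-of", "Netherlands"),
        ("Paris", "capital-of", "France"),
        ("Berlin", "capital-of", "Germany")}

# A high-precision, low-recall extractor (e.g. strict surface patterns):
# few facts, all correct.
surface = {("Amsterdam", "capital-of", "Netherlands")}

# A higher-recall extractor (e.g. syntactic parsing): more facts, one wrong.
syntactic = {("Amsterdam", "capital-of", "Netherlands"),
             ("Paris", "capital-of", "France"),
             ("Bonn", "capital-of", "Germany")}

print(precision_recall(surface, gold))    # precision 1.0, recall ~0.33
print(precision_recall(syntactic, gold))  # precision ~0.67, recall ~0.67
```

Under a robust final answer-selection step that can filter out the one wrong fact, the second extractor answers two questions instead of one, which is the sense in which recall matters more here.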


Cited by 30 publications (5 citation statements)
References 8 publications
“…Therefore, some of the results of this work agree with Jijkoun et al (2004) and Jiang and Zhai (2007), who had concluded that more complex features might hurt the performance of the classifiers (or increase the recall with a cost in precision). Nevertheless, syntactic information has also been reported as useful in previous works (Kambhatla 2004; Zhou et al 2005; Mintz et al 2009).…”
Section: Discussion (supporting)
confidence: 90%
“…Jijkoun, De Rijke and Mur (2004) evaluate different surface and syntactic patterns for obtaining pairs for question answering. Their results show that syntactic parsing improves the performance of the question answering systems, despite the fact that the extraction is less precise than that obtained through surface patterns.…”
Section: Related Work (mentioning)
confidence: 99%
“…Text segments are extracted as candidate character values if the text segments satisfy relevant linguistic rules. Linguistic rules used in MicroPIE include regular expressions, Part-Of-Speech (POS) tag patterns and syntactic patterns [50]. In some cases, multiple linguistic rules are often integrated in one character extractor to deal with varied textual expressions of a character.…”
Section: Methods (mentioning)
confidence: 99%
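A regular-expression-based character extractor of the kind this citation describes can be sketched as follows. The pattern, the character ("leaf length"), and the sentences are invented examples, not actual MicroPIE rules:

```python
import re

# Hypothetical surface pattern for a "leaf length" character value,
# matching ranges like "leaves 4.5-7 cm" or "leaves 10 to 20 mm".
LEAF_LENGTH = re.compile(
    r"leaves?\s+(?:are\s+)?(\d+(?:\.\d+)?)\s*(?:-|to\s*)(\d+(?:\.\d+)?)\s*(mm|cm)",
    re.IGNORECASE,
)

def extract_leaf_length(sentence):
    """Return (low, high, unit) if the pattern matches, else None."""
    m = LEAF_LENGTH.search(sentence)
    if m is None:
        return None
    return (float(m.group(1)), float(m.group(2)), m.group(3).lower())

print(extract_leaf_length("Leaves 4.5-7 cm, ovate."))  # (4.5, 7.0, 'cm')
print(extract_leaf_length("Flowers yellow."))          # None
```

In practice several such rules (regexes, POS-tag patterns, syntactic patterns) would be combined in one extractor to cover varied phrasings, as the quoted passage notes.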
“…Our approach for answering factoid questions is largely based on QUARTZ, our QA system used for experiments in the TREC 2003 QA track [7] and the CLEF 2004 Question Answering track [6]. We use an architecture where several streams run in parallel: each is based on a different approach to QA and is a self-contained QA system in its own right.…”
Section: Factoid Questions (mentioning)
confidence: 99%
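The parallel-stream architecture mentioned in this citation can be sketched generically: independent answer streams each produce scored candidates for the same question, and a final selection step merges them. The stream names, scores, and merging rule below are invented for illustration, not QUARTZ code:

```python
from collections import defaultdict

def lookup_stream(question):
    # e.g. an offline table lookup over pre-extracted facts
    return [("Amsterdam", 0.9)]

def pattern_stream(question):
    # e.g. surface-pattern matching over retrieved passages
    return [("Amsterdam", 0.6), ("Rotterdam", 0.3)]

STREAMS = [lookup_stream, pattern_stream]

def answer(question):
    """Run all streams and return the candidate with the highest summed score."""
    scores = defaultdict(float)
    for stream in STREAMS:
        for candidate, score in stream(question):
            scores[candidate] += score
    return max(scores, key=scores.get)

print(answer("What is the capital of the Netherlands?"))  # Amsterdam
```

The point of the design is that each stream is a self-contained QA system, so a stream can fail or return noise without breaking the others; the merging step arbitrates among their candidates.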
“…We also implemented a number of simple filtering mechanisms that serve as sanity checks for the answer candidates and performed a number of experiments in our answer justification module. We give an overview of the new components here and refer the reader to [3,4,6,7] for an account of the rest of the system.…”
Section: Factoid Questions (mentioning)
confidence: 99%