2007
DOI: 10.1007/978-3-540-74999-8_50
Applying Dependency Trees and Term Density for Answer Selection Reinforcement

Abstract: This paper describes the experiments performed for the QA@CLEF-2006 within the joint participation of the eLing Division at VEng and the Language Technologies Laboratory at INAOE. The aim of these experiments was to observe and quantify the improvements in the final step of the Question Answering prototype when some syntactic features were included into the decision process. In order to reach this goal, a shallow approach to answer ranking based on the term density measure has been integrated into th…
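The abstract does not give the exact term density formula, so the following is only a minimal illustrative sketch of how a shallow, density-based answer-ranking score might be computed; the function name, the fixed window size, and the normalization by window length are assumptions for illustration, not the paper's actual method.

# Hypothetical sketch: rank candidate passages by the best density of
# question terms found in any fixed-size window of the passage.
def term_density(question_terms, passage_tokens, window=20):
    q = {t.lower() for t in question_terms}          # question vocabulary
    tokens = [t.lower() for t in passage_tokens]
    best = 0.0
    for start in range(max(1, len(tokens) - window + 1)):
        span = tokens[start:start + window]
        hits = sum(1 for t in span if t in q)        # question terms in window
        best = max(best, hits / len(span))           # normalize by window size
    return best

# Candidate answer passages would then be ordered by this score (descending);
# the paper's contribution adds dependency-tree features on top of such a score.
candidates = {
    "p1": "the Eiffel Tower was completed in 1889 in Paris".split(),
    "p2": "Paris hosts many landmarks and museums".split(),
}
question = "When was the Eiffel Tower completed".split()
ranked = sorted(candidates, key=lambda p: term_density(question, candidates[p]), reverse=True)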

Cited by 2 publications (1 citation statement)
References 10 publications
“…Rege et al [33] describe many forms of representing documents as graphs and Badia and Kantardzic [2] propose a methodology for construction of graphs via statistical learning. Graphs are widely used in natural language processing, for example, in question answering [26], text classification [4,10], named entity recognition [5], or information representation [3,31] (in combination with vector space techniques). In other cases the documents are represented by probabilistic functions [25] or composite functions of probabilistic functions on words [12,35].…”
Section: Statistical Computational Structures (citation type: mentioning)
Confidence: 99%