Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval 2017
DOI: 10.1145/3077136.3080705
On the Benefit of Incorporating External Features in a Neural Architecture for Answer Sentence Selection

Cited by 10 publications (9 citation statements); references 15 publications.
“…This model is simple, fast and well studied. It has also been reproduced in other work (Chen et al., 2017; Sequiera et al., 2017).…”
Section: Answer Sentence Selection Network (supporting)
confidence: 65%
“…We remove from the dev. and test sets questions without answers, and questions with […] training instances with difficult negative examples. Our system beats several others that use word alignments and attention mechanisms.…”
[Flattened results table (System, MAP, MRR) omitted; systems compared include Santos et al. (2016), Severyn and Moschitti (2016), Wang et al. (2016b), Rao et al. (2016), Shen et al. (2017), Yin et al. (2016), Chen et al. (2017), Tymoshenko et al. (2016), Guo et al. (2017), Wang et al. (2016a), and Wang and Jiang (2017).]
Section: TrecQA (mentioning)
confidence: 99%
“…That is, passages are ranked in response to a query using passage-relevance estimates. The merits of integrating the estimates using learning-to-rank (LTR) approaches were also demonstrated [5,9,10,39,56,58].…”
Section: Introduction (mentioning)
confidence: 99%
“…With the small size of the dataset, traditional features demonstrate good performance in comparison with the neural network models. This, along with more evidence on the usefulness of combining traditional features in a deep learning architecture (R.-C. Chen, Yulianti, Sanderson, & Bruce Croft, 2017; Sequiera et al., 2017), encouraged us to build a hybrid model. An essential dilemma for building the feature-assisted model is how to incorporate engineered features into sentence-embedding inputs.…”
Section: A Feature-Assisted Neural Network Architecture Model (mentioning)
confidence: 99%