Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval
DOI: 10.1145/3341981.3344249

Performance Prediction for Non-Factoid Question Answering

Cited by 22 publications (12 citation statements)
References 6 publications
“…While effective, this setting produces point estimates for each candidate in D, removing all uncertainty and confidence estimates from the predictions. At this point, areas of research such as QPP and cut-off prediction try to determine these quantities heuristically via score and document distributions [2,8,18,31,49,50,56]. This task has become increasingly challenging with the changing nature of neural retrieval models, as previously established post-retrieval QPP methods are not as effective for neural models [19].…”
Section: Problem Statement and Motivation
confidence: 99%
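The score-distribution heuristics the quoted passage alludes to can be illustrated with a minimal sketch of one classic post-retrieval predictor, NQC (Normalized Query Commitment), which scores a query by the spread of its top-k retrieval scores. The function name and inputs here are illustrative, not tied to any specific retrieval system:

```python
import statistics

def nqc_predictor(topk_scores, corpus_score):
    """NQC-style post-retrieval QPP heuristic: the standard deviation of
    the top-k retrieval scores, normalized by the score of the whole
    corpus treated as one document. Higher values predict a better-performing
    query; a flat score distribution signals ambiguity."""
    if len(topk_scores) < 2 or corpus_score == 0:
        return 0.0
    return statistics.pstdev(topk_scores) / corpus_score

# A query whose top documents have widely spread scores is predicted
# to perform better than one with a near-flat score distribution.
spread = nqc_predictor([12.0, 9.5, 7.1, 4.2, 2.0], corpus_score=5.0)
flat = nqc_predictor([6.1, 6.0, 6.0, 5.9, 5.9], corpus_score=5.0)
assert spread > flat
```

Heuristics of this kind need only the ranked scores, which is exactly why the quoted passage notes they transfer poorly to neural rankers whose score distributions behave differently.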
“…Early research in QPP utilizes linguistic information [29] and statistical features [15,23,24] in pre-retrieval methods, or analyzes clarity [15,16], robustness [7,20,48,54,55], and retrieval scores [34,41,44,47,55] for post-retrieval prediction, which has further evolved into several effective frameworks [17,20,28,38,40,45,46]. QPP techniques have also been explored and analyzed in [3,5,6,10,18,21,22,25,35,36,39,42,43,52,53,27]. With the recent development of deep learning techniques, NeuralQPP [51] achieves promising results by training a three-component deep network under weak supervision from existing methods.…”
Section: Related Work
confidence: 99%
“…With the recent development of deep learning techniques, NeuralQPP [51] achieves promising results by training a three-component deep network under weak supervision from existing methods. More recently, while NQA-QPP [22] uses BERT to generate contextualized embeddings for QPP in non-factoid question answering, BERT-QPP [4] directly applies BERT with pointwise learning to the prediction task, outperforming previous methods on the MS MARCO dev set [30] and the TREC Deep Learning track query sets [13,14]. Groupwise Ranking.…”
Section: Related Work
confidence: 99%
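The pointwise setup described above can be sketched in miniature: a regressor maps each (query, top-ranked document) pair to a single predicted effectiveness score. The real BERT-QPP method fine-tunes BERT end to end against a target metric; here `encode_pair` is an explicitly toy bag-of-words stand-in, so only the shape of the approach is shown:

```python
def encode_pair(query: str, top_doc: str) -> list[float]:
    """Toy stand-in for a BERT [CLS] embedding of the query and its
    top-ranked document: crude lexical-overlap features."""
    q, d = set(query.lower().split()), set(top_doc.lower().split())
    overlap = len(q & d) / max(len(q), 1)
    return [overlap, len(d) / 100.0]

def predict_performance(features: list[float], weights: list[float], bias: float) -> float:
    """Pointwise prediction: one scalar per query, trained (in the real
    model) to regress a retrieval metric such as MRR for that query."""
    score = bias + sum(w * f for w, f in zip(weights, features))
    return max(0.0, min(1.0, score))  # clamp to a metric-like [0, 1] range

feats = encode_pair("non-factoid question answering",
                    "answering non-factoid questions with neural models")
pred = predict_performance(feats, weights=[0.8, 0.1], bias=0.05)
assert 0.0 <= pred <= 1.0
```

The pointwise framing is what distinguishes BERT-QPP from the groupwise ranking direction the quote turns to next: each query is scored in isolation, with no cross-query comparison at prediction time.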
“…BERT [18] is a large-scale network based on the Transformer, pre-trained on a language modeling task. BERT has recently proven to be effective in a wide range of NLP and IR tasks, including question answering [18], passage re-ranking [32,34], query performance prediction [20], and conversational QA [40]. The coloring in Figure 2 shows shared parameters.…”
Section: End to End Modeling and Training
confidence: 99%