Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1465

Simple and Effective Text Matching with Richer Alignment Features

Abstract: In this paper, we present a fast and strong neural approach for general purpose text matching applications. We explore what is sufficient to build a fast and well-performed text matching model and propose to keep three key features available for inter-sequence alignment: original point-wise features, previous aligned features, and contextual features while simplifying all the remaining components. We conduct experiments on four well-studied benchmark datasets across tasks of natural language inference, paraphr…
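
To make the abstract's three kinds of alignment features concrete, below is a minimal PyTorch sketch that concatenates the original point-wise (embedding) features, the aligned features from the previous block, and the contextual encoder features before a simple dot-product inter-sequence alignment. This is an illustrative reading of the abstract, not the authors' released implementation; all function names, dimensions, and the dot-product attention variant are assumptions.

```python
# Illustrative sketch of "richer alignment features": each sequence feeds the
# concatenation of [embedding, previous aligned features, contextual features]
# into a simple dot-product alignment. Names and sizes are invented.
import torch
import torch.nn.functional as F


def align(a, b):
    """Dot-product alignment between two feature sequences.

    a: (batch, len_a, dim), b: (batch, len_b, dim)
    Returns b aligned to each position of a, and a aligned to each position of b.
    """
    scores = torch.bmm(a, b.transpose(1, 2))            # (batch, len_a, len_b)
    b_to_a = torch.bmm(F.softmax(scores, dim=2), b)     # (batch, len_a, dim)
    a_to_b = torch.bmm(F.softmax(scores, dim=1).transpose(1, 2), a)
    return b_to_a, a_to_b


def richer_alignment_input(embed, prev_aligned, contextual):
    """Concatenate the three feature types kept for inter-sequence alignment."""
    return torch.cat([embed, prev_aligned, contextual], dim=-1)


if __name__ == "__main__":
    batch, la, lb, d = 2, 5, 7, 16
    emb_a, emb_b = torch.randn(batch, la, d), torch.randn(batch, lb, d)
    ctx_a, ctx_b = torch.randn(batch, la, d), torch.randn(batch, lb, d)
    # In the first block there are no previous aligned features; use zeros.
    prev_a, prev_b = torch.zeros(batch, la, d), torch.zeros(batch, lb, d)

    feat_a = richer_alignment_input(emb_a, prev_a, ctx_a)   # (batch, la, 3*d)
    feat_b = richer_alignment_input(emb_b, prev_b, ctx_b)   # (batch, lb, 3*d)
    aligned_a, aligned_b = align(feat_a, feat_b)
    print(aligned_a.shape, aligned_b.shape)  # (2, 5, 48) and (2, 7, 48)
```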

Cited by 150 publications (95 citation statements). References 28 publications (55 reference statements).

“…The context contains not only the correct answer but also the sentences on which the correct answer is based. There are a number of datasets for span-based QA, such as SQuAD [8], TriviaQA [9], and NewsQA [10], as well as many models, such as BiDAF [11], DocQA [12], and BERT [2], that perform well. In multiple-choice QA, questions and multiple options are given and the correct answer is chosen from the given options.…”
Section: B. Question Answering (mentioning)
confidence: 99%
“…The EM is the ratio of predictions that fully match the answer. The F1 score (9) is the harmonic mean of precision, calculated by (7), and recall, calculated by (8). A True Positive (TP) is a case in which the actual class is 'yes' and the predicted class is also 'yes.'…”
Section: Metrics (mentioning)
confidence: 99%
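
A minimal sketch of the two metrics described in this statement, assuming string answers and whitespace tokenization: EM as the fraction of predictions identical to the reference, and F1 as the harmonic mean of precision and recall. The helper names and the token-level F1 variant are illustrative assumptions, not code from the cited paper, whose equations (7)-(9) are not reproduced here.

```python
# Exact match (EM) and token-level F1 for QA-style evaluation.
from collections import Counter


def exact_match(predictions, references):
    """Fraction of predictions that fully match their reference answer."""
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return matches / len(references)


def token_f1(prediction, reference):
    """Harmonic mean of token-level precision and recall."""
    pred_tokens, ref_tokens = prediction.split(), reference.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    tp = sum(common.values())            # shared tokens play the role of TP
    if tp == 0:
        return 0.0
    precision = tp / len(pred_tokens)
    recall = tp / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    preds = ["the eiffel tower", "1969"]
    refs = ["the eiffel tower", "in 1969"]
    print(exact_match(preds, refs))      # 0.5
    print(token_f1(preds[1], refs[1]))   # 0.666...
```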
“…We formulate this task as semantic matching between texts [13,21,31], since we only use textual features of items at the current stage. The main challenge in associating e-commerce concepts with related items is that the concept is so short that only limited information is available.…”
Section: Item Association (mentioning)
confidence: 99%
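
To illustrate the "semantic matching between texts" formulation in this statement, here is a small baseline that scores a short e-commerce concept against candidate item titles with TF-IDF cosine similarity and ranks the items. It is only a generic stand-in for the matching models compared in the next statement (DSSM, MatchPyramid, RE2), and the example strings are invented.

```python
# Generic text-matching baseline: rank item titles by similarity to a concept.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

concept = "outdoor camping gear"                        # short concept text
items = [
    "ultralight two-person camping tent with rainfly",
    "stainless steel kitchen knife set",
    "portable gas stove for outdoor cooking",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([concept] + items)    # row 0 is the concept
scores = cosine_similarity(matrix[0], matrix[1:])[0]    # concept vs. each item

# Rank items by matching score; a very short concept yields a sparse,
# low-signal vector, which is the challenge noted in the statement above.
for score, title in sorted(zip(scores, items), reverse=True):
    print(f"{score:.3f}  {title}")
```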
“…To further investigate how knowledge helps, we dig into cases. Using our base model without knowledge injected, the matching score of…”

Table 6: Experimental results in semantic matching between e-commerce concepts and items.
Model               AUC     F1      P@10
BM25                -       -       0.7681
DSSM [13]           0.7885  0.6937  0.7971
MatchPyramid [21]   0.8127  0.7352  0.7813
RE2 [31]            0.8664  0.7052  0.8977
Ours                0.8610  0.7532  0.9015
Ours + Knowledge    0.8713  0.7769  0.9048

Section: Concept-item Semantic Matching (mentioning)
confidence: 99%