2017
DOI: 10.48550/arxiv.1704.05426
Preprint
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference

Cited by 228 publications (316 citation statements)
References 0 publications
“…More recent efforts focus on designing generic sentence-level learning objectives or tasks. In the supervised learning regime, Conneau et al (2017) and Cer et al (2018) empirically show the effectiveness of leveraging the NLI task (Bowman et al, 2015a; Williams et al, 2017) to promote generic sentence representations. The task involves classifying each sentence pair into one of three categories: entailment, contradiction, or neutral.…”
Section: Related Work
confidence: 99%
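The three-way classification described in this excerpt can be illustrated with an off-the-shelf NLI model. Below is a minimal sketch assuming the Hugging Face transformers library and the publicly released roberta-large-mnli checkpoint (neither is named in the quoted paper); the premise/hypothesis pair is illustrative.

```python
# A minimal sketch of three-way NLI classification, assuming the
# Hugging Face `transformers` library and the public
# "roberta-large-mnli" checkpoint; the sentence pair is illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# NLI models take the premise and hypothesis as a single paired input.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label order for this checkpoint: 0=contradiction, 1=neutral, 2=entailment.
labels = ["contradiction", "neutral", "entailment"]
print(labels[logits.argmax(dim=-1).item()])  # expected: "entailment"
```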
“…We observe the same trend in the categorical-level evaluation, i.e., large inter-class distances come along with large intra-cluster distances. In contrast, the original pre-trained language models consistently show poor performance on all three downstream tasks. (Footnote 7: We take 10000 randomly sampled entailment pairs from the combination of the SNLI (Bowman et al, 2015a) and MNLI (Williams et al, 2017) datasets as p_pos, and the whole 20000 examples as p_data.)…”
Section: VaSCL Leads To More Dispersed Representation
confidence: 99%
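The sampling in footnote 7 can be approximated as follows. This is a hedged sketch assuming the Hugging Face datasets library and its snli / multi_nli distributions (not specified in the excerpt), and it reads "the whole 20000 examples" as the individual sentences of the 10000 sampled pairs, which is an interpretation rather than something the excerpt confirms.

```python
# A hedged sketch of the p_pos / p_data construction from footnote 7,
# assuming the Hugging Face `datasets` library; the dataset names, the
# entailment label id (0 in both corpora on the Hub), and the reading
# of p_data as the pairs' 20k sentences are all assumptions.
from datasets import concatenate_datasets, load_dataset

cols = ["premise", "hypothesis", "label"]
snli = load_dataset("snli", split="train").select_columns(cols)
mnli = load_dataset("multi_nli", split="train").select_columns(cols)

combined = concatenate_datasets([snli, mnli])
entailment = combined.filter(lambda ex: ex["label"] == 0)  # 0 = entailment

p_pos = entailment.shuffle(seed=0).select(range(10_000))   # 10k entailment pairs
p_data = p_pos["premise"] + p_pos["hypothesis"]            # their 20k sentences
```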
“…Language is a highly general and multi-faceted domain, and benchmarks are playing an increasingly important role in NLP: the General Language Understanding Evaluation benchmark (GLUE) has contributed to highlighting the potential of self-supervised learning for language problems [58,14]. GLUE encompasses 9 different language classification tasks, including sentiment analysis [78], sentence similarity [15,7], natural language inference [12,83], question answering [62], and coreference [37]. While state-of-the-art approaches have now achieved human-level performance on GLUE [42,38,61], Wang et al [82] proposed a more challenging successor, SuperGLUE.…”
Section: Benchmarks
confidence: 99%
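For reference, the individual GLUE tasks mentioned in this excerpt can be pulled programmatically. A minimal sketch follows, assuming the Hugging Face datasets library and its "glue" configuration names, which the quoted paper does not itself specify.

```python
# A minimal sketch of loading the nine GLUE tasks, assuming the
# Hugging Face `datasets` library and its "glue" config names.
from datasets import load_dataset

tasks = ["cola", "sst2", "mrpc", "qqp", "stsb", "mnli", "qnli", "rte", "wnli"]
for task in tasks:
    train = load_dataset("glue", task, split="train")
    print(f"{task}: {train.num_rows} training examples")
```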
“…The evolution of deep learning methodologies and the continual expansion of computing capacity have enabled many advancements in the field of language modeling. Specifically, transformer-based architectures paved the way for BERT and its many variants, which surpassed previously held records on the GLUE, SQuAD, and MultiNLI benchmarks [1][2][3][4]. BERT-based architectures have been made more lightweight and efficient (DistilBERT) and trained more effectively to become increasingly performant (RoBERTa) [5,6].…”
Section: Introduction
confidence: 99%