2020
DOI: 10.48550/arxiv.2007.10820
Preprint

IITK at SemEval-2020 Task 10: Transformers for Emphasis Selection

Cited by 2 publications (8 citation statements)
References 0 publications
“…Figure 4 presents the Emphasis Heatmap for some examples from the test set using our final model. In Table 4 we benchmark our EmpLite model against the state-of-the-art solution by IITK (Singhal et al, 2020), which utilized huge pretrained models like ELMo, BERT (Devlin et al, 2018), RoBERTa (Liu et al, 2019) and XLNet (Yang et al, 2019). These models require huge RAM/ROM for on-device inferencing, making them unsuitable for edge devices where resources are constrained.…”
Section: Experimental Settings and Results
confidence: 99%
“…Pre-trained language models have also been used to achieve emphasis selection (Huang et al, 2020). Singhal et al (Singhal et al, 2020) achieve significantly good performance with (a) a Bi-LSTM + Attention approach, and (b) a Transformers approach. To achieve their modest performances, these architectures produce huge models.…”
Section: Introduction
confidence: 99%
“…In our transformers approach, we experiment with two different transformer-based model architectures, namely RoBERTa (Liu et al 2019) and XLNet (Yang et al 2019). Our choice of transformer architectures is inspired by the best performing architectures in SemEval-2020 Task 10, Emphasis Selection for Written Text in Visual Media (Singhal et al 2020; Anand et al 2020). Both these models were pre-trained on large amounts of unannotated data in an unsupervised manner.…”
Section: Introduction
confidence: 99%
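For context, the transformer-based emphasis selection referenced in this citation can be sketched as token-level regression over contextual embeddings. The snippet below is a minimal, hypothetical sketch assuming a HuggingFace RoBERTa encoder with a linear scoring head; the head design, pooling, and hyperparameters are assumptions for illustration and not the exact architecture of the cited papers.

```python
# Hypothetical sketch: token-level emphasis scoring with a RoBERTa encoder
# and a linear regression head. Not the cited papers' exact setup.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizerFast

class EmphasisScorer(nn.Module):
    def __init__(self, model_name: str = "roberta-base"):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(model_name)
        # One scalar emphasis score per subword token
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        # Contextual embeddings for each token from the pretrained encoder
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Squash scores into [0, 1] so they read as emphasis probabilities
        return torch.sigmoid(self.head(hidden)).squeeze(-1)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = EmphasisScorer()
batch = tokenizer(["Make every word count"], return_tensors="pt")
with torch.no_grad():
    scores = model(batch["input_ids"], batch["attention_mask"])
print(scores.shape)  # (1, sequence_length): per-token emphasis scores
```

In practice, subword scores would be aggregated back to word level and trained against the annotator emphasis distributions used in the SemEval-2020 Task 10 data; that aggregation and loss are omitted here for brevity.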
“…Additionally, we concatenate some word-level features with the attention output before feeding it to the fully connected layers. A part of our modification is inspired by the team that placed 3rd on the SemEval-2020 Task 10 leaderboard (Singhal et al 2020).…”
Section: Introduction
confidence: 99%