2021
DOI: 10.3390/s21082712
Subsentence Extraction from Text Using Coverage-Based Deep Learning Language Models

Abstract: Sentiment prediction remains a challenging and unresolved task in various research fields, including psychology, neuroscience, and computer science. This stems from its high degree of subjectivity and the limited input sources that can effectively capture the actual sentiment. This can be even more challenging with only text-based input. Meanwhile, the rise of deep learning and an unprecedentedly large volume of data have paved the way for artificial intelligence to perform impressively accurate predictions or even …

Cited by 8 publications (5 citation statements)
References 30 publications
“…This version uses the same architecture as BERT but pretrains on ten times more data, including the BERT data plus 63 million news articles, a web text corpus, and stories. Because it offers the highest level of forecasting accuracy to date, we apply RoBERTa-large to compare emotion detection models [19].…”
Section: Methods
confidence: 99%
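To make the citation statement above concrete, the following is a minimal sketch of loading RoBERTa-large as a sequence classifier with the HuggingFace transformers library. The library choice, the six-class label space, and the sample sentence are assumptions for illustration; they are not specified by the citing work.

# Hedged sketch: RoBERTa-large as an emotion classifier (assumptions noted above).
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-large",
    num_labels=6,  # illustrative number of emotion classes (assumption)
)

# The classification head is freshly initialized here, so the model still needs
# fine-tuning on labelled emotion data before its predictions are meaningful.
inputs = tokenizer("I can't believe how well this turned out!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # predicted emotion class id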
“…With RoBERTa, a fine-tuning process takes place for sequence-level classification tasks, as in the BERT architecture. We follow an existing process [19], in which the emotion data sets are split into a training (80%) and a testing (20%) set for transformer model training. At the beginning of the sequence, a special class token [CLS] is added for classification tasks.…”
Section: Methods
confidence: 99%
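A minimal sketch of the 80/20 split described above, assuming scikit-learn's train_test_split; the toy texts, label ids, and random seed are illustrative only and are not taken from the citing work.

# Hedged sketch: splitting an emotion data set into 80% train / 20% test.
from sklearn.model_selection import train_test_split

texts = ["I am so happy today", "This is terrifying", "What a dull afternoon",
         "I can't stop smiling", "That news broke my heart"]
labels = [0, 1, 2, 0, 3]  # illustrative emotion label ids

train_texts, test_texts, train_labels, test_labels = train_test_split(
    texts, labels, test_size=0.2, random_state=42
)
# train_* (80%) is used to fine-tune the transformer;
# test_* (20%) is held out for evaluation.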
“…Figure 2 illustrates the overall architecture of the BERT classification network. The input text is split into a sequence of tokens (Tok) and then processed in the BERT model (22). [CLS] is a special classification token inserted at the beginning of every input sequence.…”
Section: Methods
confidence: 99%
“…Figure 2 illustrates the overall architecture of the BERT classification network. The input text is split into a sequence of tokens (Tok) and then processed in the BERT model (22).…”
Section: Few-shot Learning
confidence: 99%
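Both statements above describe the tokenization step in which a special classification token is inserted at the beginning of every input sequence. A minimal sketch, assuming the HuggingFace BERT tokenizer and an illustrative sentence:

# Hedged sketch: BERT tokenization with the [CLS] token prepended automatically.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoding = tokenizer("The movie made me very happy.")

# Converting the ids back to tokens shows [CLS] at the start of the sequence
# (and [SEP] appended at the end):
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
# ['[CLS]', 'the', 'movie', 'made', 'me', 'very', 'happy', '.', '[SEP]']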