2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8851895
Seq2Seq Deep Learning Models for Microtext Normalization

Cited by 21 publications (16 citation statements)
References 41 publications
Citation types: 2 supporting, 14 mentioning, 0 contrasting
“…Multimodal features have been proposed to recognize human emotions. Commonly used modalities are visual [16], audio [18], text [14], facial action [15], posed versus spontaneous expression, and multiple physiological parameters [13]. In multimodal emotion recognition, audio-visual content is the most studied [12], [19].…”
Section: Related Work (mentioning)
confidence: 99%
“…Only when α_i = μ_i C and α_i^* = μ_i C, the error between the estimated value f(x) and the actual value y_i is more than ε. Thus, we can obtain the bias as (14).…”
Section: ∂L/∂W (mentioning)
confidence: 99%
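Equation (14) itself is not reproduced in this excerpt. For orientation, a minimal sketch of the classical ε-SVR bias recovery from the KKT conditions, which the weighted variant above (upper bounds μ_i C in place of C) presumably generalizes; the kernel k and index conventions below are assumptions, not the citing paper's notation:

```latex
% Hedged sketch: classical epsilon-SVR bias from the KKT conditions.
% For any "free" support vector i with 0 < \alpha_i < C, the slack is
% zero and the point lies exactly on the upper edge of the
% epsilon-tube, so the bias b can be read off directly:
b = y_i - \sum_{j=1}^{n} \left( \alpha_j - \alpha_j^{*} \right) k(\mathbf{x}_j, \mathbf{x}_i) - \varepsilon
```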
“…To factor in such contextual signals, recent advances in NLP have considered the sequential nature of written language as well as the long-term dependencies present in sentences. Thus, the research community has proposed different deep-learning-based methodologies for microtext normalisation (Min and Mott, 2015; Edizel et al., 2019; Gu et al., 2019; Satapathy et al., 2019). While we address the problem of text normalisation in the NLP context, it has also been adopted as a key component of speech applications (Sproat and Jaitly, 2016; Zhang et al., 2019).…”
Section: Related Work (mentioning)
confidence: 99%
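As a concrete illustration of the seq2seq normalization models referenced above (e.g., Satapathy et al., 2019), here is a minimal character-level encoder-decoder sketch in PyTorch. The architecture, toy vocabulary, hidden size, and single training step are all illustrative assumptions, not the configuration of any cited paper:

```python
import torch
import torch.nn as nn

# Hedged sketch: a character-level GRU encoder-decoder that maps noisy
# microtext ("gr8") to its normalized form ("great"). All sizes and the
# vocabulary are assumptions for illustration only.
PAD, SOS, EOS = 0, 1, 2
CHARS = "abcdefghijklmnopqrstuvwxyz0123456789' "
stoi = {c: i + 3 for i, c in enumerate(CHARS)}
V = len(CHARS) + 3

def encode(s: str) -> torch.Tensor:
    """Map a string to a (1, len+1) tensor of character ids ending in EOS."""
    return torch.tensor([[stoi[c] for c in s] + [EOS]])

class Seq2Seq(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.emb = nn.Embedding(V, hidden, padding_idx=PAD)
        self.enc = nn.GRU(hidden, hidden, batch_first=True)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, V)

    def forward(self, src, tgt_in):
        # Encode the noisy sequence; decode with teacher forcing,
        # seeding the decoder with the encoder's final hidden state.
        _, h = self.enc(self.emb(src))
        dec_out, _ = self.dec(self.emb(tgt_in), h)
        return self.out(dec_out)

model = Seq2Seq()
src = encode("gr8")                                   # noisy microtext
tgt = encode("great")                                 # normalized form
tgt_in = torch.cat([torch.tensor([[SOS]]), tgt[:, :-1]], dim=1)
logits = model(src, tgt_in)                           # (1, len, V)
loss = nn.functional.cross_entropy(logits.view(-1, V), tgt.view(-1))
loss.backward()                                       # one training step
```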
“…'af' - 'as fuck', 'kys' - 'kill yourself', etc.). A list of microtext from Satapathy et al. (2019) was used to normalize those words. Due to the limitation of computational power, we decided not to pre-train a BERT model from scratch but to fine-tune from the BERT-Large, Uncased (Whole Word Masking) checkpoint.…”
Section: Data Preprocessing (mentioning)
confidence: 99%
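A minimal sketch of that dictionary-based preprocessing step; the tiny lookup table below is a hypothetical stand-in for the much larger microtext list of Satapathy et al. (2019):

```python
import re

# Hypothetical excerpt of a microtext lookup table: the real list from
# Satapathy et al. (2019) contains far more entries than shown here.
MICROTEXT = {
    "af": "as fuck",
    "kys": "kill yourself",
    "gr8": "great",
    "u": "you",
}

def normalize(text: str) -> str:
    """Expand known microtext tokens before fine-tuning BERT on the text."""
    tokens = re.findall(r"[a-z0-9']+|\S", text.lower())
    # Tokens absent from the table (e.g. "r" below) pass through unchanged.
    return " ".join(MICROTEXT.get(tok, tok) for tok in tokens)

print(normalize("u r gr8 af"))  # -> "you r great as fuck"
```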