2018
DOI: 10.48550/arxiv.1810.02508
Preprint

MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations

Cited by 47 publications (68 citation statements) | References 0 publications
“…Each dataset consists of reviews rated on a scale of 1 (strong negative) to 5 (strong positive). Similarly, for ERC, we collect three widely used datasets: DyDa: DailyDialog (Li et al., 2017); IEMOCAP: the interactive emotional dyadic motion capture database (Busso et al., 2008); and MELD: the Multimodal EmotionLines Dataset (Poria et al., 2018). To demonstrate our methodology, we partition the DyDa dataset into four equal chunks.…”
Section: Methods (mentioning)
confidence: 99%
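To make the chunking step in this excerpt concrete, here is a minimal Python sketch that splits a list of dialogues into four near-equal contiguous parts. The `partition` helper and the placeholder dialogue records are illustrative assumptions, not code from the cited paper.

```python
# A minimal sketch, assuming dialogues are already loaded as a Python list.
# The partition() helper is illustrative; it is not the cited paper's code.
from typing import List

def partition(dialogues: List[dict], n_chunks: int = 4) -> List[List[dict]]:
    """Split a list of dialogues into n_chunks near-equal contiguous parts."""
    chunk_size, remainder = divmod(len(dialogues), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        # Spread any remainder over the first few chunks.
        end = start + chunk_size + (1 if i < remainder else 0)
        chunks.append(dialogues[start:end])
        start = end
    return chunks

# Placeholder records standing in for DailyDialog entries.
dummy = [{"id": i} for i in range(10)]
print([len(c) for c in partition(dummy)])  # [3, 3, 2, 2]
```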
“…In addition, for the contextual information, inspired by the position embedding proposed in the Transformer [13], we propose an Identity Embedding and add it to the features of each modality; then, based on an attention mechanism and an LSTM [14], the contextual information can be modeled throughout the information flow. The effectiveness of the proposed model is demonstrated by comprehensive experiments on two large and widely used emotion datasets, i.e., IEMOCAP [15] and MELD [16]. Our contributions can be summarized as follows:…”
Section: Fusion for Prediction (mentioning)
confidence: 99%
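A rough PyTorch sketch of the idea in this excerpt follows: a learned identity embedding (analogous to a position embedding) is added to one modality's utterance features before contextual modeling with self-attention and an LSTM. All module names, dimensions, and the speaker-count assumption are illustrative; this is not the cited paper's implementation.

```python
# Hedged sketch of "identity embedding + attention + LSTM" context modeling.
# Dimensions and module structure are assumptions, not the cited architecture.
import torch
import torch.nn as nn

class IdentityContextEncoder(nn.Module):
    def __init__(self, feat_dim: int = 256, num_speakers: int = 10):
        super().__init__()
        # One learned vector per speaker identity, same width as the features.
        self.identity_emb = nn.Embedding(num_speakers, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)

    def forward(self, feats: torch.Tensor, speaker_ids: torch.Tensor) -> torch.Tensor:
        # feats: (batch, seq_len, feat_dim) utterance features of one modality
        # speaker_ids: (batch, seq_len) integer speaker identities
        x = feats + self.identity_emb(speaker_ids)  # inject speaker identity
        x, _ = self.attn(x, x, x)                   # self-attention over context
        x, _ = self.lstm(x)                         # sequential context modeling
        return x

enc = IdentityContextEncoder()
out = enc(torch.randn(2, 12, 256), torch.randint(0, 10, (2, 12)))
print(out.shape)  # torch.Size([2, 12, 256])
```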
“…This work evaluates the performance of the proposed algorithm on two different datasets that are widely used in MSA research: IEMOCAP [15] and MELD [16].…”
Section: Datasets and Metrics (mentioning)
confidence: 99%
“…Emotion recognition in conversation is a popular area in NLP. Many ERC datasets have been scripted and annotated in the past few years, such as IEMOCAP (Busso et al., 2008), MELD (Poria et al., 2018), DailyDialog (Li et al., 2017), EmotionLines (Chen et al., 2018), and EmoryNLP (Zahiri and Choi, 2018). IEMOCAP, MELD, and EmoryNLP are multimodal datasets containing acoustic, visual, and textual information, while the remaining two datasets are textual only.…”
Section: Emotion Recognition in Conversation (mentioning)
confidence: 99%
“…MELD (Poria et al., 2018) is a multimodal emotion classification dataset. It is a multi-party dialogue dataset created from the scripts of the Friends TV series.…”
Section: Datasets (mentioning)
confidence: 99%
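Since this excerpt describes MELD as a multi-party, utterance-level dataset, a short sketch of inspecting its textual annotations may help. The CSV filename and column names below follow the public MELD repository (https://github.com/declare-lab/MELD); treat them as assumptions and check the repository if they differ.

```python
# Hedged sketch: browsing MELD's textual annotations with pandas.
# Filename and columns are assumptions based on the MELD GitHub release.
import pandas as pd

df = pd.read_csv("train_sent_emo.csv")
# Each row is one utterance; Dialogue_ID groups utterances into a
# multi-party conversation and Speaker names the Friends character.
for dia_id, dialogue in df.groupby("Dialogue_ID"):
    for _, row in dialogue.iterrows():
        print(f'{row["Speaker"]}: {row["Utterance"]}  [{row["Emotion"]}]')
    break  # show only the first dialogue
```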