Proceedings of the 2021 International Conference on Multimodal Interaction
DOI: 10.1145/3462244.3479959
M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations

Cited by 10 publications (6 citation statements)
References 25 publications
“…Twenty-one (21) humor features have been carefully studied, and research evidence of their use in computational humor detection has been presented. The review of the forty-seven (47) papers selected for this systematic literature review revealed that applying different mining methods to the existing datasets yields diverse features.…”
Section: Features (mentioning)
confidence: 99%
“…A dataset is characterized as multimodal when it includes multiple such modalities. TV sitcoms are the most frequently used source from which authors derive multimodal datasets (Bertero and Fung 2016; Chauhan et al. 2021; Yang et al. 2019; Purandare and Litman 2006; Patro et al. 2021; Bedi et al. 2021). This is because the sitcom (situational comedy) genre centres on fixed characters carried over many episodes, which allows researchers to collect large-scale datasets.…”
Section: Multimodal (mentioning)
confidence: 99%
“…For Open Mic, Mittal et al. [42] collected stand-up comedy recordings and used the audience's laughter to create annotations indicating the degree of humour on a scale from zero to four. As with text-only datasets, most multimodal datasets are in English; notable exceptions are the already mentioned MUMOR-ZH [19], MaSaC [40], the Chinese dataset used in [41], and M2H2 [43], which is based on a Hindi TV show.…”
Section: Multimodal Humour Recognition (mentioning)
confidence: 99%
“…In addition, our dataset is the first to include annotations according to the HSQ. While other datasets are labelled automatically, for example by using canned laughter [18, 38], or by three human annotators [19, 43], each video in our database has been labelled by the same 9 annotators. In the comparison table, the duration column gives the overall duration of each dataset and #Spk the number of speakers in it.…”
Section: Comparison With Other Humour Datasets (mentioning)
confidence: 99%
“…With the development of deep-learning-based approaches, a number of methods have been proposed in recent work: Kumar et al. [8] propose a combination of convolutional neural networks (CNNs) and long short-term memory (LSTM), adding a highway network to improve performance; Weller et al. [9] propose using the transformer architecture to exploit sentence context; Lu Ren et al. [10] propose combining humor recognition and pun recognition, training the two tasks jointly to boost performance. Humor has also been recognized through multimodal means [11, 12]. Unlike these works, we propose a deep learning approach grounded in humor linguistics that captures inconsistency features, phonetic features, and ambiguity features, modelling the incongruity of humor caused by semantic inconsistency and lexical ambiguity.…”
Section: Introduction (mentioning)
confidence: 99%
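
To make the CNN + LSTM + highway pipeline mentioned in the statement above concrete, here is a minimal PyTorch sketch of such a humor classifier. It only illustrates the general architecture described; every layer size, the class name, and the placement of a single highway layer are assumptions for illustration, not the implementation of Kumar et al. [8].

```python
# Illustrative sketch only: a CNN + LSTM humor classifier with one highway
# layer, loosely following the architecture described above. All sizes and
# names are assumptions, not the cited authors' code.
import torch
import torch.nn as nn

class CnnLstmHighway(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, conv_channels=64,
                 lstm_hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # 1-D convolution over the token dimension extracts local n-gram features.
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        # The LSTM models longer-range context over the convolved features.
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        # Highway layer: a learned gate mixes transformed and raw features.
        self.transform = nn.Linear(lstm_hidden, lstm_hidden)
        self.gate = nn.Linear(lstm_hidden, lstm_hidden)
        self.out = nn.Linear(lstm_hidden, num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids)                      # (batch, seq_len, embed_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, channels, seq_len)
        x, _ = self.lstm(x.transpose(1, 2))            # (batch, seq_len, hidden)
        h = x[:, -1, :]                                # last time step as summary
        t = torch.sigmoid(self.gate(h))                # highway gate in [0, 1]
        h = t * torch.relu(self.transform(h)) + (1 - t) * h
        return self.out(h)                             # humor / non-humor logits

model = CnnLstmHighway(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (4, 20)))      # 4 dummy utterances
```

The highway layer's sigmoid gate lets the network interpolate between the transformed and untransformed features, which eases gradient flow compared with a plain feed-forward layer and is the usual motivation for adding it to such a stack.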