2020
DOI: 10.1109/taffc.2017.2764470
Intensional Learning to Efficiently Build Up Automatically Annotated Emotion Corpora

Abstract: Textual emotion detection has a high impact on business, society, politics, and education, with applications such as detecting depression or personality traits, suicide prevention, or identifying cases of cyber-bullying. Given this context, the objective of our research is to contribute to improving the emotion recognition task through an automatic technique focused on reducing both the time and cost needed to develop emotion corpora. Our proposal is to exploit a bootstrapping approach based on intensional le…

Cited by 21 publications (10 citation statements) | References 47 publications
“…However, they do not distinguish at all about whose emotion one is concerned with, therefore conflating different problems. Canales et al (2020) suggest a computational paradigm, intensional learning, which is a bootstrapping approach from a set of seed emotional sentences, augmented with distributional semantic methods, on top of which a supervised classifier is then trained, to efficiently build up automatically annotated corpora. In a way, the first step is similar to what we did (for the lexicon), but instead of creating a machine learning classifier we developed rules that we subsequently apply.…”
Section: Other Approaches (mentioning)
confidence: 99%
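The bootstrapping pipeline this statement summarizes (seed emotional sentences, distributional similarity to auto-annotate unlabeled text, then a supervised classifier trained on the result) can be sketched in miniature. This is only an illustrative stand-in: it uses toy bag-of-words count vectors instead of the paper's distributional semantic models, and all sentences, labels, and function names are invented for the example.

```python
from collections import Counter
import math

# Toy "embedding": a bag-of-words count vector. (The paper uses
# distributional semantic models; this is only an illustrative stand-in.)
def embed(sentence):
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 1: a handful of seed sentences per emotion label.
seeds = {
    "joy":   ["what a wonderful happy day", "i am so glad and delighted"],
    "anger": ["this makes me furious", "i am so angry about this"],
}

# One centroid vector per emotion label.
centroids = {lab: sum((embed(s) for s in sents), Counter())
             for lab, sents in seeds.items()}

# Step 2: auto-annotate unlabeled sentences by nearest seed centroid,
# yielding an automatically annotated corpus.
def annotate(sentence):
    v = embed(sentence)
    return max(centroids, key=lambda lab: cosine(v, centroids[lab]))

unlabeled = ["i feel glad and happy today", "he was furious and angry"]
corpus = [(s, annotate(s)) for s in unlabeled]
# Step 3 (not shown): train a supervised classifier on `corpus`.
```

The key design point is that only the small seed set requires human labeling; the distributional similarity step propagates those labels to the rest of the data before any classifier is trained.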
“…Word2Vec has produced good results in extracting semantics and has been used in many techniques. Word2Vec is employed in [39] for bootstrapping to generate an automatically annotated emotional corpus. Word2Vec is used for resolving word senses in word sense disambiguation in [40].…”
Section: B Semantic Extraction (mentioning)
confidence: 99%
“…[The citing context is the IEEE Transactions on Affective Computing annual author index (vol. 12, no. 1, Jan.-March 2021); the entry relevant to this paper is: Strapparava, C., see Canales, L., T-AFFC April-June 2020, 335-347.]…”
Section: Nagata T and Mori H Defining Laughter Context For Laug… (mentioning)
confidence: 99%