2021
DOI: 10.1007/978-3-030-77626-8_20

Multimodal Emotion Analysis Based on Acoustic and Linguistic Features of the Voice

Cited by 6 publications (4 citation statements)
References 19 publications
“…In addition to the model developed and presented in [3], this work will continue by exploring how the robot can hear all people in vicinity but "concentrate" on only one.…”
Section: Discussion
confidence: 99%
“…The model is implemented for use on the interactive biomimetic robotic head PLEA [2]. Based on the multimodal approach, PLEA uses visual and voice modalities fused in a separate algorithm to determine the most appropriate hypothesis about the person's emotional state at a given moment [3]. These decisions are changeable over time as the system receives the latest information.…”
Section: Introduction
confidence: 99%
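The fusion described in the quoted statement — per-modality emotion scores combined in a separate algorithm to select the currently most appropriate hypothesis — can be sketched as a small late-fusion routine. The emotion labels, weights, and score values below are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical late-fusion sketch: each modality (visual, voice) yields a
# score distribution over emotion hypotheses; a separate fusion step
# combines them and picks the most likely state at the current moment.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

def normalize(scores):
    """Rescale scores so they sum to 1."""
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

def fuse(visual, voice, w_visual=0.5, w_voice=0.5):
    """Weighted log-linear fusion of two per-modality distributions."""
    visual, voice = normalize(visual), normalize(voice)
    fused = {e: (visual[e] ** w_visual) * (voice[e] ** w_voice)
             for e in EMOTIONS}
    return normalize(fused)

# Illustrative per-modality scores for one time step.
visual = {"happy": 0.6, "sad": 0.1, "angry": 0.1, "neutral": 0.2}
voice  = {"happy": 0.3, "sad": 0.2, "angry": 0.1, "neutral": 0.4}

fused = fuse(visual, voice)
best = max(fused, key=fused.get)  # current hypothesis; revisable as new data arrives
```

Because the fused hypothesis is recomputed at each time step, a later observation can overturn the current decision, matching the statement that these decisions "are changeable over time."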
“…By using the entropy reduction method, it is possible to determine a drop in uncertainty in query nodes when evidence or proof is provided to some particular node in the network, as shown in Eq. (11).…”
Section: Figure 4 Ambiguities in BN Reasoning
confidence: 99%
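The entropy-reduction idea in the quoted statement — uncertainty in a query node dropping once evidence is entered elsewhere in the network — can be illustrated on a minimal two-node Bayesian network. The node names, states, and probability tables below are invented for illustration; the paper's Eq. (11) and actual network are not reproduced here:

```python
import math

# Hypothetical two-node network: query node E (emotion) with states
# {"happy", "sad"}, and an observable voice cue O with states
# {"high_pitch", "low_pitch"}. All probabilities are illustrative.
P_E = {"happy": 0.5, "sad": 0.5}            # prior over query node E
P_O_given_E = {                              # CPT for evidence node O
    "happy": {"high_pitch": 0.8, "low_pitch": 0.2},
    "sad":   {"high_pitch": 0.3, "low_pitch": 0.7},
}

def entropy(dist):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior_E(o):
    """P(E | O=o) by Bayes' rule over the two-node network."""
    joint = {e: P_E[e] * P_O_given_E[e][o] for e in P_E}
    z = sum(joint.values())
    return {e: p / z for e, p in joint.items()}

prior_H = entropy(P_E)                        # uncertainty before evidence
post_H = entropy(posterior_E("high_pitch"))   # uncertainty after observing O
reduction = prior_H - post_H                  # drop in query-node uncertainty
```

Entering the evidence `O = high_pitch` skews the posterior toward "happy", so the posterior entropy is lower than the prior entropy and the reduction is positive, which is the quantity the entropy-reduction method tracks.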
“…The partial information currently concluded or acquired by senses can instantly change the reasoning output. In this way, a single piece of information can result in recognition, or it can result in a change in perspective [11].…”
Section: Introduction
confidence: 99%