2015 International Conference on Affective Computing and Intelligent Interaction (ACII)
DOI: 10.1109/acii.2015.7344561
Cross-language acoustic emotion recognition: An overview and some tendencies

Cited by 52 publications (34 citation statements)
References 42 publications
“…The training emotion corpora are either (i) the other language within the same language family (diagonals), (ii) the aggregation of the corpora in the other language family (off-diagonals), or (iii) both within and across language families ('All' column). As indicated in [11], we expect higher accuracies on the diagonals (i.e., same language family) than on the off-diagonals (i.e., across language families). This holds for the Sino-Tibetan and Romance families.…”
Section: Results
confidence: 95%
“…Likewise, Scherer et al. concluded that culture- and language-specific paralinguistic patterns may influence emotion perception [10]. Furthermore, Feraru et al. investigated emotion recognition from speech across language families by including less-researched languages from completely different language families, such as Burmese, Romanian, or Turkish [11]. They found that AER for corpora of the same language has the highest accuracy, while emotion recognition across language families has the lowest.…”
Section: Introduction
confidence: 99%
“…Among them, Variety encompasses multimodality (audio, video, text) and multilinguality/multiculturalism, while Veracity emerges from the subjectivity of assessments (annotations). These aspects have been addressed (i) for the textual modality by automatic translation [17] and by defining a multilingual WordNet Grid [18], and (ii) for the audio modality by analyzing within- and between-language-family emotion recognition [19], feature transfer learning between languages [20], model transfer learning [21], language identification [22], audio denoising [23], and decision aggregation through cooperative speaker models [24]. Regarding Volume and Velocity, there is a need for fast computation.…”
Section: Emotion Analysis In Big Data and Pre-requisites
confidence: 99%
“…NIF also provides different URI schemes to identify text fragments inside a string, e.g., a scheme based on RFC5147 [91] and a custom scheme based on context. To this end, texts are converted to RDF literals and a URI is generated so that linked-data annotations can be defined for that text. The same idea can also be applied to annotate multimedia [92].…”
Section: Linked Data and Knowledge Graph
confidence: 99%
“…Speech-based affective computing is now well developed. There are nearly thirty years of research related to speech-based affective computing [1], and 66% of the world's native-language-speaking populations are represented by affective speech data sets [2]. Despite a large body of evidence linking eye- and head-based cues to the conveyance of emotion and motivational state [3]-[12], the use of these cues is underdeveloped for affective computing purposes.…”
Section: Introduction
confidence: 99%