2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2016.7472789

Cross lingual speech emotion recognition using canonical correlation analysis on principal component subspace

Cited by 38 publications (37 citation statements)
References 18 publications

“…Building upon this work, Mao et al [22] proposed to learn a shared feature representation across domains by constraining their model to share the class priors across domains. Sagha et al [23], also motivated by the work of Deng et al [11], used principal component analysis (PCA) along with kernel canonical correlation analysis (KCCA) to find views with the highest correlation between the source and target corpora. First, they used PCA to represent the feature space of the source and target data.…”
Section: Related Work (mentioning)
confidence: 99%
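For illustration, below is a minimal sketch of the PCA-then-CCA pipeline this statement describes, using linear CCA from scikit-learn as a simplified stand-in for the kernel CCA (KCCA) employed by Sagha et al. [23]; the feature arrays, component counts, and the row-wise pairing of the two corpora are hypothetical choices, not the authors' exact protocol.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
source_feats = rng.normal(size=(200, 384))  # hypothetical source-corpus acoustic functionals
target_feats = rng.normal(size=(200, 384))  # hypothetical target-corpus acoustic functionals

# Step 1: represent each corpus in its own principal-component subspace.
src_pc = PCA(n_components=50).fit_transform(source_feats)
tgt_pc = PCA(n_components=50).fit_transform(target_feats)

# Step 2: find maximally correlated projections ("views") of the two subspaces.
# Linear CCA is used here; the cited approach used the kernelised variant (KCCA).
cca = CCA(n_components=10)
src_view, tgt_view = cca.fit_transform(src_pc, tgt_pc)

# A classifier trained on src_view can then be applied to tgt_view, since both
# corpora are now expressed in a shared, maximally correlated space.
print(src_view.shape, tgt_view.shape)  # (200, 10) (200, 10)
```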
“…Interestingly, approaches in the AVEC 2018 CES did not employ approaches such as transfer learning [80,81] or domain adaptation techniques [29,54] typically seen in cross-cultural testing. In [76], the authors proposed a model based on emotional salient detection to identify emotion markers invariant to sociocultural context.…”
Section: Cross-cultural Emotion Recognition (mentioning)
confidence: 99%
“…Successful baseline experiments evaluating the automatic emotion recognition component have been performed in earlier work [3,7,18] and were herein expanded by performing novel BoAW experiments, leading to a UAR of 43.3% (4 emotion classes) on the collected data and promising a large margin for improvement considering recent machine learning techniques such as generative models and transfer learning. Furthermore, primary usability evaluations with the target group are currently ongoing, giving first insights into the promising success of the system.…”
Section: Discussion (mentioning)
confidence: 99%
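For reference, the UAR (unweighted average recall) reported above is the mean of the per-class recalls, i.e. macro-averaged recall; a minimal sketch with hypothetical labels and predictions follows (chance level for 4 balanced classes is 25%).

```python
from sklearn.metrics import recall_score

# Hypothetical gold labels and system predictions for a 4-class emotion task
# (e.g. 0=angry, 1=happy, 2=neutral, 3=sad).
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 3, 1]
y_pred = [0, 1, 1, 1, 2, 0, 3, 3, 2, 1]

# UAR = unweighted (macro) average of the per-class recalls.
uar = recall_score(y_true, y_pred, average="macro")
print(f"UAR = {uar:.1%}")  # chance level for 4 balanced classes is 25%
```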