Interspeech 2019
DOI: 10.21437/interspeech.2019-1708


Cited by 20 publications (15 citation statements)
References 13 publications
“…For SER models, only a few evaluations are available. Gorrostieta et al. [17] found a decrease in CCC of around .234 for females compared to males for arousal in MSP-Podcast (v1.3) with their convolutional model. Besides group fairness, this contribution investigates individual fairness by estimating the influence of the speaker on model performance, which is a known problem for other speaker verification models [42].…”
Mentioning, confidence: 91%
“…Although this field has seen tremendous progress in the last decades [1], three major challenges remain for real-world paralinguistics-based SER applications: a) improving on its inferior valence performance [4, 8], b) overcoming issues of generalisation and robustness [12, 13], and c) alleviating individual- and group-level fairness concerns, which is a prerequisite for ethical emotion recognition technology [14, 15]. Previous works have attempted to tackle these issues in isolation, e.g. by using cross-modal knowledge distillation to increase valence performance [16], speech enhancement or data augmentation to improve robustness [12, 13], and de-biasing techniques to mitigate unfair outcomes [17]. However, each of those approaches comes with its own knobs to twist and hyperparameters to tune, making their combination far from straightforward.…”
Section: Introduction
Mentioning, confidence: 99%
“…Issues of fairness and biases have been extensively studied in several domains involving ML. A few prominent examples include facial analysis [25, 36, 52-57], natural language understanding [58-62], affect recognition [63-65], criminal justice [44, 66-68], and health care [69, 70].…”
Section: Fairness in ASV
Mentioning, confidence: 99%
“…In particular, this lets us investigate whether the fine-tuning is necessary for adapting to acoustic mismatches between the pre-training and downstream domains, as previously shown for convolutional neural networks (CNNs) [19], or for better leveraging linguistic information. This type of behavioural testing goes beyond past work that typically investigates SER models' robustness with respect to noise and small perturbations [20, 21, 22] or fairness [23, 24], thus providing better insights into the inner workings of SER models.…”
Section: Introduction
Mentioning, confidence: 99%