2020
DOI: 10.1016/j.scitotenv.2019.135484

On the effectiveness of facial expression recognition for evaluation of urban sound perception

Abstract: Sound perception studies mostly depend on questionnaires with fixed indicators. Therefore, it is desirable to explore methods with dynamic outputs. The present study aims to explore the effects of sound perception in the urban environment on facial expressions using software named FaceReader based on facial expression recognition (FER). The experiment involved three typical urban sound recordings, namely, traffic noise, natural sound, and community sound. A questionnaire on the evaluation of sound perception w…

Cited by 28 publications (18 citation statements)
References 40 publications (40 reference statements)
“…In terms of individual differences, first, the point-biserial correlation analysis revealed that gender and facial expression are not significantly correlated in the music and birdsong interventions. This is consistent with previous research conclusions reached from evaluating acoustic environment using questionnaires (Meng et al, 2020a ). However, a significant correlation was found between facial expressions and gender at 20 s in the stream sound intervention, with valence significantly higher among women than men ( r = 0.869, p = 0.011).…”
Section: Discussion (supporting)
confidence: 93%
“…However, in the trials with the sound of a stream and with birdsong, arousal rose again after 80 s, which may be due to distraction among the participants. The result is consistent with previous research findings (Meng et al, 2020a ). Accordingly, we chose the first 80 s as the duration for analysis in our experiment.…”
Section: Methods (supporting)
confidence: 94%
“…Another study conducted by Meng et al. [32], using a method with dynamic outputs, examined the impact of sound perception in an urban environment on facial expressions, measured using software called FaceReader, based on recognizing facial expressions. It can be concluded from this study that traffic noise caused mimic changes twice as fast as natural sound.…”
Section: Noise Perception (mentioning)
confidence: 99%
“…Therefore, with the right tools, any indications preceding or following them can be subject to detection and recognition. There has been an increase in the need to detect a person's emotions in the past few years and increasing interest in human emotion recognition in various fields including, but not limited to, human-computer interfaces [1], animation [2], medicine [3,4], security [5,6], diagnostics for Autism Spectrum Disorders (ASD) in children [7], and urban sound perception [8].…”
Section: Introduction (mentioning)
confidence: 99%