2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)
DOI: 10.1109/coginfocom.2017.8268268
Classification of cognitive load using voice features: A preliminary investigation

Cited by 3 publications (9 citation statements)
References 18 publications
“…Audio features from each trial were extracted using the openSMILE Version 2.1.0 feature extraction toolkit (Eyben et al., 2013). This toolkit extracts a wide array of vocal features suitable for signal processing and machine learning analyses (Mijić et al., 2017). The toolkit was configured to use a 10 ms moving window, a time span over which vocal features can be considered stationary (Rao, 2011), and the "emo_large" feature set was selected.…”
Section: Discussion
confidence: 99%
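The short-time analysis described in the quote above can be illustrated with a minimal sketch: split a waveform into 10 ms frames and compute one per-frame feature. This is plain numpy, not openSMILE's implementation; the RMS-energy feature, non-overlapping frames, and the 16 kHz sample rate are assumptions made for the example.

```python
import numpy as np

def frame_rms(signal, sr, win_ms=10.0):
    """Split a waveform into non-overlapping 10 ms frames and return
    per-frame RMS energy -- the short-time scale over which speech is
    commonly treated as stationary."""
    frame_len = int(sr * win_ms / 1000)              # samples per frame
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt(np.mean(frames ** 2, axis=1))

# Example: 1 s of a 200 Hz tone at 16 kHz -> 100 frames of 160 samples,
# each frame covering exactly two full cycles of the tone.
sr = 16000
t = np.arange(sr) / sr
rms = frame_rms(np.sin(2 * np.pi * 200 * t), sr)
```

Toolkits like openSMILE compute hundreds of such low-level descriptors per frame and then summarize them with statistical functionals over the utterance; the sketch shows only the framing step they all share.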
“…The sequence of numbers for each digit span trial was fully randomized to avoid giving machine learning algorithms the opportunity to learn to differentiate digit sequences rather than cognitive load. The importance of this randomization is discussed in detail in Mijić et al. (2017).…”
Section: Cognitive Assessment
confidence: 99%
“…In our previous research we have focused on developing optimized stimulation paradigms for eliciting multimodal responses related to stress resilience (Dropuljić et al., 2017; Ćosić et al., 2019b) and cognitive functioning estimation (Mijić et al., 2017, 2019). The acoustic feature sets were related to: (a) fundamental frequency, an estimate of the base harmonic produced by the vibrating vocal cords during vocalization, and RMS energy, a signal-processing estimate of the energy of the sound recorded by the microphone; (b) formant frequencies (f1-f4) and mel-frequency cepstral coefficients, which describe the spectral and cepstral behavior of the recorded utterances; (c) jitter and shimmer, representing voice perturbations; and (d) the number of voiced segments per second and mean voiced/unvoiced segment lengths in seconds, which relate to speech rate.…”
Section: Acoustic Features
confidence: 99%
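Of the feature families listed above, fundamental frequency is the most commonly illustrated. A minimal sketch of one standard approach (not the cited authors' pipeline) is autocorrelation-based pitch estimation: find the lag at which a voiced frame best matches a shifted copy of itself. The 75-400 Hz search range, 40 ms frame length, and 16 kHz sample rate are assumptions for the example.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency of a voiced frame by locating
    the autocorrelation peak within the plausible pitch-period range."""
    frame = frame - frame.mean()                     # remove DC offset
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sr / fmax)                              # shortest candidate period (samples)
    hi = int(sr / fmin)                              # longest candidate period (samples)
    lag = lo + int(np.argmax(ac[lo:hi + 1]))         # best-matching lag
    return sr / lag                                  # period in samples -> Hz

# Example: a 40 ms frame of a synthetic 120 Hz "voiced" tone at 16 kHz.
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 120 * t), sr)   # close to 120 Hz
```

Jitter and shimmer, mentioned in the same quote, build on exactly this step: they measure cycle-to-cycle variation of the estimated period and amplitude, respectively, across consecutive voiced frames.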