2019
DOI: 10.1007/978-3-030-29390-1_4
Effects of Age-Related Cognitive Decline on Elderly User Interactions with Voice-Based Dialogue Systems

Abstract: Cognitive functioning that affects user behaviors is an important factor to consider when designing interactive systems for the elderly, including emerging voice-based dialogue systems such as smart speakers and voice assistants. Previous studies have investigated the interaction behaviors of dementia patients with voice-based dialogue systems, but the extent to which age-related cognitive decline in the non-demented elderly influences the user experiences of modern voice-based dialogue systems remains uninvestigated…


Cited by 36 publications (31 citation statements)
References 60 publications
“…For example, mobile applications for collecting speech responses to neuropsychological tasks such as verbal fluency, counting backward, and picture description have been developed and have shown accurate classification rates for detecting patients with AD and MCI ( 32 , 33 ). As other examples, vocal characteristics in speech data during typical tasks on smart speakers were suggested to be associated with cognitive scores on neuropsychological tests used for dementia screening ( 34 ), while linguistic features extracted from conversational data of phone calls were identified as significant indicators for differentiating AD patients from cognitively normal (CN) older adults ( 35 ). These approaches, focusing on speech data that can be collected in everyday situations, would increase opportunities for assessment and help with the early detection of AD.…”
Section: Introduction (mentioning)
confidence: 99%
“…As reported in a previous study, speech disfluency can represent an accessibility barrier to voice assistants. For instance, long hesitations or pauses can be misinterpreted by the system as a sentence delimiter [ 23 ], altering speech segmentation. Moreover, users must be able to correctly articulate words, especially multisyllable words (eg, temperature) or specific words that may require more effort to articulate [ 24 ].…”
Section: Introduction (mentioning)
confidence: 99%
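The pause-misinterpretation issue described above can be illustrated with a minimal, hypothetical endpointing sketch. All names, frame sizes, and thresholds here are illustrative assumptions, not from the cited papers: a silence-based segmenter ends an utterance once silence exceeds a threshold, so a threshold tuned for fluent speech splits a hesitant speaker's sentence in two.

```python
# Illustrative sketch of silence-based utterance segmentation (endpointing).
# A frame is a short audio slice tagged by voice-activity detection:
# True = speech, False = silence. Thresholds are hypothetical.

def segment_utterances(frames, silence_threshold_s, frame_s=0.02):
    """Split a sequence of voice-activity flags into utterances,
    ending the current utterance once accumulated silence reaches
    silence_threshold_s. Returns each utterance's length in frames."""
    utterances = []
    current_len = 0
    silence_run = 0.0
    for is_speech in frames:
        if is_speech:
            current_len += 1
            silence_run = 0.0  # speech resumes, reset the silence timer
        else:
            silence_run += frame_s
            if current_len and silence_run >= silence_threshold_s:
                utterances.append(current_len)  # silence long enough: end utterance
                current_len = 0
    if current_len:
        utterances.append(current_len)
    return utterances

# A speaker who hesitates 0.5 s mid-sentence (25 silent frames at 20 ms):
frames = [True] * 50 + [False] * 25 + [True] * 50

# A short 0.3 s threshold cuts the sentence into two utterances...
print(segment_utterances(frames, silence_threshold_s=0.3))  # [50, 50]
# ...while a longer 0.8 s threshold keeps it whole.
print(segment_utterances(frames, silence_threshold_s=0.8))  # [100]
```

The design implication matches the snippet: lengthening the silence threshold keeps hesitant utterances intact, at the cost of slower system turn-taking.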
“…A recent overview of voice user interfaces for the elderly can be found in [31]. Currently, there is relatively little research that takes age into account when building voice interfaces, as confirmed by Stigall et al. A study presented in [16] investigated how age-related cognitive decline influences the use of voice interfaces. Several implications for speech assistants that should be considered in future systems are mentioned, such as allowing longer pauses between words.…”
Section: Results (mentioning)
confidence: 99%
“…In the future, Mozilla's TTS, which uses deep learning methods, could be an appropriate choice. Together with their ASR, these components are intended to be integrated into web browsers as part of the Web Speech API.…”
(mentioning)
confidence: 99%