Going too far: complaining, escalating and disaffiliation
This report, arising from a study of affiliation and disaffiliation in interaction, addresses an apparently 'anomalous' finding in relation to complaint sequences in conversation. In some of the cases we collected in which one speaker was complaining on behalf of the other (their coparticipant), taking her side in some matter, the one on whose behalf the other was complaining did not affiliate with the complaint. Instead they resisted the complaint (again, one made on their behalf) and declined to 'go so far'. This finding is anomalous in the sense that if A is complaining on behalf of B, in respect of some harm done to B, then it might be expected that B would go along with the complaint and affiliate with A. To account for how it might come about that B demurs from 'going as far as' A, we explore how complaints are frequently introduced in conversation. We show that complaints may emerge through a progression in which 'the complainant' does not initially go on record with a complaint, but instead secures the other's participation in co-constructing the complaint. Hence the 'complaint recipient' may be the first to make the complaint explicit, in a sequence of escalating affiliation. In the 'anomalous' cases, it appears that this escalation goes too far for the putative complainant (B).
Neurodegenerative disorders, like dementia, can affect a person's speech, language and, as a consequence, conversational interaction capabilities. A recent study, aimed at improving dementia detection accuracy, investigated the use of conversation analysis (CA) of interviews between patients and neurologists as a means to differentiate between patients with progressive neurodegenerative memory disorder (ND) and those with (non-progressive) functional memory disorders (FMD). However, manual CA is expensive and difficult to scale up for routine clinical use. In this paper, we present an automatic classification approach using an intelligent virtual agent (IVA). In particular, using two parallel corpora of neurologist- and IVA-led interactions, respectively, we show that using acoustic, lexical and CA-inspired features enables ND/FMD classification rates of 90.0% for the neurologist-patient conversations, and an encouraging 90.9% for the IVA-patient conversations. Analysis of the significance of individual features shows that some differences exist between the IVA- and human-led conversations, for example in the average turn length of patients.
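The abstract above mentions CA-inspired features such as the average turn length of patients. A minimal sketch of how one such feature might be extracted from a transcript is given below; the speaker labels, transcript format, and example utterances are all invented for illustration and are not taken from the study's data.

```python
# Illustrative sketch (not the study's actual pipeline): computing one
# CA-inspired feature -- the average patient turn length in words --
# from a toy transcript represented as (speaker, utterance) pairs.

def avg_patient_turn_length(turns):
    """Mean number of words per patient turn.

    `turns` is a list of (speaker, utterance) pairs; the "patient"
    speaker label and the pair format are hypothetical conventions.
    """
    patient_turns = [utt for spk, utt in turns if spk == "patient"]
    if not patient_turns:
        return 0.0
    return sum(len(utt.split()) for utt in patient_turns) / len(patient_turns)

# Toy interview fragment (invented for illustration).
transcript = [
    ("neurologist", "Can you tell me about your memory problems?"),
    ("patient", "Well I forget names sometimes"),
    ("neurologist", "Anything else?"),
    ("patient", "And appointments"),
]

feature = avg_patient_turn_length(transcript)
```

In a full system, features like this would be combined with acoustic and lexical features and fed to a statistical classifier.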
Background: There is scope for additional research into the specific linguistic and sequential structures used in speech and language therapist-led therapeutic conversations with people with aphasia. Whilst there is some evidence that SLTs use different conversational strategies than the partners of PWA (Lindsay & Wilkinson 1999), research to date has focussed mainly on measuring the effects of conversation-based therapies, not on analysing therapeutic conversations taking place between SLTs and PWA. Aims: This paper presents an analysis of the use of oh-prefacing by some PWA during therapeutic supported conversations with SLTs. Methods & Procedures: Naturally occurring therapeutic conversations between SLTs and PWA after stroke were qualitatively analysed using Conversation Analysis (CA). Interactions with five people with aphasia were video-recorded, involving three different specialist stroke SLTs. Outcomes & Results: The analysis revealed a difference in the way some PWA use turns that display understanding (e.g., oh right) vs. those that continue the conversation, merely claiming understanding (e.g., right). This use of oh-prefacing is similar to that described in typical conversations by Heritage (1984). In our data, SLTs are shown to treat oh-prefaced turns differently from non-oh-prefaced turns, by pursuing the topic in the latter, and progressing on to a new topic in the former. Conclusions: At least some PWA use oh-prefacing in the same way as non-language-impaired adults to display understanding of information, vs. merely claiming to understand. The SLTs in our data are shown to treat non-oh-prefaced turns as mere claims of understanding by providing the PWA with additional information, using supported conversation techniques (Kagan 1998), and pursuing additional same-topic talk, whereas oh-prefaced turns are treated as displays of understanding by being confirmed, and leading to changes of topic.
This study is a first step in providing SLTs with a clearer understanding of the ways in which they are assessing the understanding of PWA, which may in turn help them better support non-therapy staff.
This pilot study provides proof-of-principle that a machine learning approach to analyzing transcripts of interactions between neurologists and patients describing memory problems can distinguish people with neurodegenerative dementia from people with FMD.
Recent approaches to word vector representations, e.g., 'word2vec' and 'GloVe', have been shown to be powerful methods for capturing the semantics and syntax of words in a text. The approaches model the co-occurrences of words, and recent successful applications to written text have shown how the vector representations and their interrelations capture the meaning or sentiment of the text. Most applications have targeted written language; in this paper, however, we investigate how these models port to the spoken-language domain, where the text is the result of (error-prone) automatic speech transcription. In particular, we are interested in the task of detecting signs of dementia in a person's spoken language. This is motivated by the fact that early signs of dementia are known to affect a person's ability to express meaning articulately, for example when they engage in a conversation, something which is known to be cognitively very demanding. We analyse conversations designed to probe people's short- and long-term memory and propose three different methods for how word vectors may be used in a classification setup. We show that it is possible to identify dementia from the output of a speech recogniser despite a high occurrence of recognition errors.
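One plausible way to use word vectors in a classification setup, in the spirit of the abstract above (the paper's three specific methods are not detailed here), is to average the vectors of all recognised words into one fixed-length feature vector per transcript. The toy three-dimensional embeddings below are invented; real word2vec or GloVe models use hundreds of dimensions trained on large corpora.

```python
# Hedged sketch: bag-of-embeddings document representation.
# Out-of-vocabulary tokens (e.g. ASR misrecognitions) are skipped,
# which gives some robustness to recognition errors.

# Toy embeddings, purely illustrative.
EMBEDDINGS = {
    "forget": [0.9, 0.1, 0.0],
    "remember": [0.1, 0.9, 0.0],
    "yesterday": [0.0, 0.2, 0.8],
}

def doc_vector(tokens, embeddings=EMBEDDINGS, dim=3):
    """Mean of the word vectors for in-vocabulary tokens."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# The unknown token "i" is skipped; the result averages the
# vectors for "forget" and "yesterday".
features = doc_vector(["i", "forget", "yesterday"])
```

The resulting fixed-length vector can then be passed to any standard classifier; the averaging step is what makes transcripts of different lengths comparable.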