This paper presents a dialogue act taxonomy designed for the development of a conversational agent for the elderly. The main goal of this conversational agent is to improve the quality of life of the user by means of coaching sessions on different topics. In contrast to other approaches, such as task-oriented dialogue systems and chit-chat implementations, the agent should display a proactive attitude, driving the conversation to reach a number of diverse coaching goals. The main characteristic of the introduced dialogue act taxonomy is therefore its capacity to support communication based on the GROW model for coaching. In addition, the taxonomy has a hierarchical structure among the tags and is multimodal. We use the taxonomy to annotate a Spanish dialogue corpus collected from a group of elderly people. We also present a preliminary examination of the annotated corpus and discuss the multiple possibilities it offers for further research.
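A hierarchical tag set of this kind can be represented as a simple tree of dialogue act labels. The sketch below is purely illustrative: the tag names are hypothetical placeholders, not the paper's actual taxonomy.

```python
# Hypothetical sketch of a hierarchical dialogue act tag set.
# Tag names below are illustrative only, not the taxonomy from the paper.
from dataclasses import dataclass, field


@dataclass
class DialogueActTag:
    name: str
    children: list = field(default_factory=list)

    def path(self, target, trail=()):
        # Return the coarse-to-fine hierarchy path leading to a tag,
        # or None if the tag is not present in this subtree.
        trail = trail + (self.name,)
        if self.name == target:
            return trail
        for child in self.children:
            found = child.path(target, trail)
            if found:
                return found
        return None


taxonomy = DialogueActTag("CoachingAct", [
    DialogueActTag("Goal", [DialogueActTag("AskGoal"),
                            DialogueActTag("ConfirmGoal")]),
    DialogueActTag("Reality", [DialogueActTag("AskCurrentState")]),
])
```

Such a path lookup lets an annotation tool back off from a fine-grained tag to its coarser parent when annotators disagree at the leaf level.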
The EMPATHIC project develops and validates new interaction paradigms for personalized virtual coaches (VC) to promote healthy and independent aging. To this end, the work presented in this paper aims to analyze the interaction between the EMPATHIC-VC and its users. One of the goals of the project is to ensure an end-user-driven design, involving senior users from the beginning and during each phase of the project. Thus, the paper focuses on sessions in which the seniors interacted with a simulated system driven by a Wizard of Oz. A coaching strategy based on the GROW model was used throughout these sessions to guide the interactions and engage the elderly with the goals of the project. In this interaction framework, both the human and the system behavior were analyzed. The way the wizard implements the GROW coaching strategy is a key aspect of the system behavior during the interaction. The language used by the virtual agent, as well as its physical appearance, are also important cues that were analyzed. Regarding user behavior, vocal communication provides information about the speaker's emotional status, which is closely related to human behavior and can be extracted through speech and language analysis. Likewise, the analysis of facial expressions, gaze, and gestures can provide information on non-verbal human communication even when the user is not talking. In addition, in order to engage senior users, their preferences and likes had to be considered. To this end, the effect of the VC on the users was gathered by means of direct questionnaires. These analyses showed a positive and calm behavior of users when interacting with the simulated virtual coach, as well as some difficulties of the system in carrying out the proposed coaching strategy.
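The GROW model structures a coaching conversation as a sequence of phases (Goal, Reality, Options, Will). A minimal sketch of how a wizard- or system-driven session might track these phases is shown below; the phase-advance logic is an assumption for illustration, not the project's actual dialogue manager.

```python
# Minimal sketch of GROW-phase tracking for a coaching session.
# The advance policy is a hypothetical simplification: real sessions may
# revisit earlier phases rather than move strictly forward.
GROW_PHASES = ["Goal", "Reality", "Options", "Will"]


class GrowSession:
    def __init__(self):
        self.phase_idx = 0  # sessions start by eliciting a goal

    @property
    def phase(self):
        return GROW_PHASES[self.phase_idx]

    def advance(self):
        # Move to the next coaching phase once the current one is resolved;
        # stay in the final phase ("Will") once it is reached.
        if self.phase_idx < len(GROW_PHASES) - 1:
            self.phase_idx += 1
        return self.phase
```

A tracker like this also makes it easy to log, per turn, which GROW phase the wizard was pursuing, which supports the kind of strategy analysis described above.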
In this paper, a task of speech-based human-machine interaction is presented. The specific task consists of the use and control of a set of home appliances through a turn-based dialogue system. This work focuses on the first part of the dialogue system, the Automatic Speech Recognition (ASR) system. Two lines of work are pursued to improve the performance of the ASR system. On the one hand, the acoustic modeling required for ASR is improved via speaker adaptation techniques. On the other hand, the language modeling in the system is improved by the use of class-based language models. The results show the good performance of both techniques, as the Word Error Rate (WER) drops from 5.81% to 0.99% using a close-talk microphone, and from 14.53% to 1.52% using a lapel microphone. An important reduction is also achieved in terms of the Category Error Rate (CER), which measures the ability of the ASR system to extract the semantic information of the uttered sentence, dropping from 6.13% and 15.32% to 1.29% and 1.32%, respectively, for the two microphones used in the experiments.
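WER, the metric quoted above, is the word-level edit distance between the reference transcript and the recognizer's hypothesis, normalized by the reference length. A minimal sketch (standard dynamic-programming formulation, not the paper's scoring tool):

```python
# Sketch of Word Error Rate (WER): minimum number of word substitutions,
# insertions, and deletions to turn the hypothesis into the reference,
# divided by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("turn on the light", "turn off the light")` is 0.25 (one substitution over four reference words). The CER reported in the abstract is computed analogously, but over the semantic category labels extracted from each utterance rather than the words themselves.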
In this work, a Spanish corpus developed within the EMPATHIC project framework is presented. It was designed for building a dialogue system capable of talking to elderly people and promoting healthy habits through a coaching model. The corpus, which comprises audio, video, and text channels, was acquired using a Wizard of Oz strategy. It was annotated with different labels according to the different models needed in a dialogue system, including an emotion-based annotation that will be used to generate empathetic system reactions. The annotation at the different levels, along with the procedure employed, is described and analysed. (Project website: http://www.empathic-project.eu/)
Developing accurate emotion recognition systems requires extracting suitable features of these emotions. In this paper, we propose an original approach to parameter extraction based on the strong theoretical and empirical correlation between emotion categories and dimensional emotion parameters. More precisely, acoustic features and dimensional emotion parameters are combined for better characterisation of speech emotion. The procedure consists of developing arousal and valence models by regression on the training data and estimating their values on the test data. Hence, when classifying an unknown sample into emotion categories, these estimates can be integrated into the feature vectors. The results using this new set of parameters show a significant improvement in speech emotion recognition performance.
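The feature-augmentation step can be sketched as follows: fit arousal and valence regressors on the training acoustic features, then append their predictions as two extra dimensions before category classification. This is a minimal sketch under assumed linear regressors, not the paper's actual models.

```python
# Sketch: augment acoustic feature vectors with predicted arousal/valence.
# Linear least-squares regressors stand in for whatever regression models
# the paper actually uses; the pipeline shape is the point here.
import numpy as np


def fit_linear(X, y):
    # Fit a least-squares linear regressor with a bias term.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w


def predict_linear(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w


def augment_features(X_train, arousal, valence, X_test):
    # Train dimensional-emotion regressors on the training set only,
    # then append their predictions to both train and test features.
    wa = fit_linear(X_train, arousal)
    wv = fit_linear(X_train, valence)

    def augment(X):
        return np.hstack([X,
                          predict_linear(X, wa)[:, None],
                          predict_linear(X, wv)[:, None]])

    return augment(X_train), augment(X_test)
```

The augmented vectors (original acoustic features plus two dimensional-emotion estimates) would then feed any standard emotion-category classifier.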