The experience of inner speech is a common one. Such a dialogue accompanies the introspection of mental life and fulfills essential roles in human behavior, such as self-restructuring, self-regulation, and the refocusing of attentional resources. Although the underpinnings of inner speech are mostly investigated in psychology and philosophy, research in robotics generally does not address this form of self-aware behavior. Existing models of inner speech inspire computational tools for providing a robot with this form of self-awareness. Here, the most widespread psychological models of inner speech are reviewed, and a cognitive architecture for a robot implementing such a capability is outlined in a simplified setup.
Motivation
The epidemic that began at the start of this year, caused by a new virus in the coronavirus family, is causing many deaths and bringing the world economy to its knees. Moreover, situations of this kind are historically cyclical. The symptoms and treatment of infected patients are, for better or worse, largely the same even for new viruses: more or less severe flu symptoms, isolation, and strict hygiene. By now humanity has learned how to manage epidemic situations, yet deaths and negative effects continue to occur. What about technology? What effect has the technological progress we have actually achieved had? In this review, we ask what role robotics plays in the fight against COVID-19. We analyze scientific articles, industrial initiatives, and project calls for applications from March onward, highlighting how ready robotics was to face this situation, what is expected from robots, and what remains to be done.
Results
The analysis focuses on what research groups offer as a means of support for therapies and prevention actions. We then offer some remarks on what we consider the state of maturity of robotics in dealing with situations like COVID-19.
A cognitive architecture for inner speech is presented. It is based on the Standard Model of Mind, integrated with modules for self-talking processes. Briefly, the working memory of the proposed architecture includes the phonological loop as a component that manages the exchange of information between the phonological store and the articulatory control system. The inner dialogue is modeled as a loop in which the phonological store hears the inner voice produced by the hidden articulatory process. A central executive module drives the whole system and contributes to the generation of conscious thoughts by retrieving information from long-term memory. The surface form of thoughts thus emerges through the phonological loop. Once a conscious thought is elicited by inner speech, the perception of the new context takes place, and the cognitive loop repeats. A preliminary formalization in event calculus of some of the described processes, and early results of their implementation on the humanoid robot Pepper by SoftBank Robotics, are discussed.
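The cycle described above can be sketched in code. The following is a minimal, hypothetical illustration, not the architecture's actual implementation: all class names, method names, and the toy associative memory are illustrative assumptions, standing in for the Standard Model of Mind components (long-term memory, central executive, phonological loop) named in the text.

```python
# Minimal sketch of the inner-speech cycle: central executive retrieves a
# thought from long-term memory, the articulatory control system voices it,
# the phonological store "hears" it, and the elicited thought becomes the
# new perceived context. All names here are illustrative assumptions.

class LongTermMemory:
    """Toy associative memory: maps a perceived cue to a related concept."""
    def __init__(self, associations):
        self.associations = associations

    def retrieve(self, cue):
        return self.associations.get(cue)


class PhonologicalLoop:
    """Couples the phonological store (inner ear) with the
    articulatory control system (inner voice)."""
    def __init__(self):
        self.store = None  # content currently "heard"

    def articulate(self, thought):
        # The hidden articulatory process produces the inner voice...
        inner_voice = f"I am thinking about {thought}"
        # ...which the phonological store then hears.
        self.store = inner_voice
        return inner_voice


def inner_speech_cycle(percept, ltm, loop, max_steps=5):
    """Central executive: retrieve from LTM, articulate, re-perceive."""
    trace = []
    for _ in range(max_steps):
        thought = ltm.retrieve(percept)
        if thought is None:
            break
        trace.append(loop.articulate(thought))
        percept = thought  # the elicited thought becomes the new context
    return trace


ltm = LongTermMemory({"ball": "play", "play": "park"})
trace = inner_speech_cycle("ball", ltm, PhonologicalLoop())
print(trace)
# → ['I am thinking about play', 'I am thinking about park']
```

The loop terminates when long-term memory yields no further association, mirroring how the cycle in the text repeats only as long as perception elicits a new conscious thought.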
This paper discusses the possible role of inner speech in influencing trust in human–automation interaction. Inner speech is an everyday covert monologue or dialogue with oneself, which is essential to human psychological life and functioning, as it is linked to self-regulation and self-awareness. Recently, in the field of machine consciousness, computational models using different forms of robot speech have been developed that make it possible to implement inner speech in robots. As discussed here, robot inner speech could be a new feature affecting human trust by increasing robot transparency and anthropomorphism.