Pain sensation is essential for survival, since it draws attention to physical threats to the body. Pain is usually assessed through self-reports. However, self-assessment is not available for noncommunicative patients, so observer reports must be relied upon. Observer reports of pain are prone to errors arising from observers' subjective biases, and continuous monitoring by humans is impractical. Automatic pain detection technology could therefore be deployed to assist human caregivers and complement their service, improving the quality of pain management, especially for noncommunicative patients. Facial expressions are a reliable indicator of pain and are used in all observer-based pain assessment tools. Following advances in automatic facial expression analysis, computer vision researchers have applied this technology to develop approaches for automatically detecting pain from facial expressions. This paper surveys the literature published in this field over the past decade, categorizes it, and identifies future research directions. The survey covers the pain datasets used in the reviewed literature, the learning tasks targeted by the approaches, the features extracted from images and image sequences to represent pain-related information, and finally, the machine learning methods used.
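As a hedged illustration of such an approach, the following minimal Python sketch pairs a hand-crafted facial feature descriptor with a standard classifier. The HOG features, SVM model, and data layout are assumptions chosen for brevity, not the method of any specific surveyed paper; sequence-level approaches would replace the per-frame descriptor with spatiotemporal features.

    # Minimal frame-level pain detection sketch: HOG descriptors computed
    # on equal-sized grayscale face crops, classified by an SVM. All
    # choices here (features, model, split) are illustrative assumptions.
    import numpy as np
    from skimage.feature import hog
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def extract_features(face_crops):
        """Compute one HOG descriptor per grayscale face crop (2-D array)."""
        return np.array([
            hog(img, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2))
            for img in face_crops
        ])

    def train_pain_classifier(face_crops, pain_labels):
        """Fit a binary pain / no-pain SVM and report held-out accuracy."""
        X = extract_features(face_crops)
        X_train, X_test, y_train, y_test = train_test_split(
            X, pain_labels, test_size=0.2, random_state=0)
        clf = SVC(kernel="rbf", C=1.0)
        clf.fit(X_train, y_train)
        print(f"Held-out accuracy: {clf.score(X_test, y_test):.3f}")
        return clf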
In this paper, we present recent results of the space project IMPERA. The goal of IMPERA is the development of a multirobot planning and plan-execution architecture, with a focus on a lunar sample collection scenario in an unknown environment. We describe the implementation and verification of the modules that are integrated into a distributed system architecture. These include a mission planning approach for a multirobot system and modules for task and skill execution within a lunar use-case scenario. The skills needed for the test scenario include cooperative exploration and mapping strategies for an unknown environment, the localization and classification of sample containers using a novel semantic perception approach, and the transport of sample containers to a collection point using a mobile manipulation robot. Additionally, we present a reliable communication framework that can deal with communication loss during the mission. The individual modules are evaluated in experiments covering planning and plan execution, communication, coordinated exploration, perception, and object transportation. Overall system integration is tested in a mission-scenario experiment using three robots.
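To illustrate one way a framework can tolerate communication loss, the sketch below implements a generic acknowledge-and-retry delivery loop in Python. The transport interface, sequence numbering, and back-off parameters are assumptions for illustration only and do not describe the actual IMPERA communication framework.

    # Generic acknowledgment-based delivery with retransmission and
    # back-off; an illustrative pattern, not the IMPERA implementation.
    import time

    class ReliableSender:
        """Resends a message until it is acknowledged or retries run out.

        `transport` is an assumed interface providing send(dict) and
        recv_ack(timeout) -> int, where recv_ack raises TimeoutError
        if no acknowledgment arrives in time.
        """

        def __init__(self, transport, max_retries=5, ack_timeout=2.0):
            self.transport = transport
            self.max_retries = max_retries
            self.ack_timeout = ack_timeout
            self.next_seq = 0

        def send(self, payload):
            seq = self.next_seq
            self.next_seq += 1
            for attempt in range(self.max_retries):
                self.transport.send({"seq": seq, "payload": payload})
                try:
                    if self.transport.recv_ack(timeout=self.ack_timeout) == seq:
                        return True                # delivery confirmed
                except TimeoutError:
                    pass                           # ACK lost or link down
                time.sleep(0.5 * (attempt + 1))    # back off before resending
            return False                           # give up; caller must replan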