Robot safety has been a prominent research topic in recent years as robots become more involved in daily tasks. It is crucial to devise the safety mechanisms that enable service robots to be aware of and react to anomalies (i.e., unexpected deviations from intended outcomes) that arise during the execution of these tasks. Detection and identification of these anomalies are essential steps toward fulfilling these requirements. Although several architectures have been proposed for anomaly detection, identification has not yet been thoroughly investigated. This task is challenging since indicators may appear long before anomalies are detected. In this paper, we propose a ConvoLUtional threE-stream Anomaly Identification (CLUE-AI) framework to address this problem. The framework fuses visual, auditory, and proprioceptive data streams to identify everyday object manipulation anomalies. A stream of 2D images gathered through an RGB-D camera placed on the head of the robot is processed within a self-attention enabled visual stage to capture visual anomaly indicators. The auditory modality provided by the microphone placed on the robot's lower torso is processed within a designed convolutional neural network (CNN) in the auditory stage. Finally, the force applied by the gripper and the gripper state are processed within a CNN to obtain proprioceptive features. These outputs are then combined with a late fusion scheme. Our novel three-stream framework design is analyzed on everyday object manipulation tasks with a Baxter humanoid robot in a semi-structured setting. The results indicate that the framework achieves an f-score of 94%, outperforming the other baselines in classifying anomalies that arise during runtime.
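One common late fusion scheme is to combine the per-stream class probabilities with a weighted average. The sketch below illustrates that idea only; the function name, the weighting, and the example probabilities are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def late_fusion(stream_probs, weights=None):
    """Fuse per-stream class probability vectors by weighted averaging
    (a simple late-fusion scheme; the paper's exact scheme may differ)."""
    probs = np.asarray(stream_probs, dtype=float)  # shape: (n_streams, n_classes)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))  # equal stream weights
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()  # renormalize to a probability distribution

# Hypothetical per-stream outputs over 3 anomaly classes
visual  = [0.7, 0.2, 0.1]   # self-attention visual stage
audio   = [0.6, 0.3, 0.1]   # auditory CNN
proprio = [0.5, 0.4, 0.1]   # gripper force/state CNN

fused = late_fusion([visual, audio, proprio])
label = int(np.argmax(fused))  # predicted anomaly class
```

Keeping the streams separate until this final step lets each modality be trained and tuned independently, which is the main appeal of late over early fusion.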
Automated action planning is crucial for efficient execution of mobile robot missions. Automated planners use complete domain descriptions to construct plans. Nevertheless, there is usually a gap between the real world and its representation. Therefore, there is another source of uncertainty for mobile robot systems: the impossibility of perfectly representing action descriptions (e.g., preconditions and effects) in all circumstances. Incomplete domain representations may lead a planner to fail to construct a valid plan when unforeseen events are encountered. We investigate these types of situations, especially the failure cases and how robots can recover from real-time execution failures. The main focus of our research is to design a dynamic planning framework that can generate alternative plans by applying generic updates to the domain representation when the execution of a plan fails. Our proposed method constructs new feasible plans by using the updated domain representations even when the outcomes of the operators are only partially known in advance or no feasible plan exists under the original representation of the domain. Besides updating the domain representation, our method guides the planner through a reasoning mechanism so that it chooses more relevant actions to recover from failures. This is achieved by considering the effects of the failed action and trying to accomplish these effects with alternative actions.
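The execute-monitor-recover loop described above can be sketched as follows. All names here (`execute_with_recovery`, the toy actions, the callbacks) are illustrative assumptions, and the recovery step stands in for the paper's domain-update and reasoning mechanism: on failure, the planner is asked for an alternative plan that still achieves the failed action's intended effects.

```python
def execute_with_recovery(plan, state, apply_action, replan, max_repairs=3):
    """Execute a plan action by action; on a failure, request an alternative
    sub-plan toward the failed action's effects and splice it in.
    apply_action(state, a) -> (ok, new_state); replan(state, a) -> plan or None.
    A minimal sketch of the dynamic-planning idea, not the authors' algorithm."""
    repairs = 0
    plan = list(plan)
    while plan:
        action = plan.pop(0)
        ok, state = apply_action(state, action)
        if not ok:
            if repairs >= max_repairs:
                return False, state          # give up after repeated failures
            repairs += 1
            recovery = replan(state, action) # alternative way to reach the effects
            if recovery is None:
                return False, state
            plan = recovery + plan           # resume the rest of the original plan
    return True, state

# Toy demonstration (illustrative actions, not the paper's domain)
def _apply(state, a):
    if a == "grasp_slippery":                # simulated execution failure
        return False, state
    return True, state | {a}

def _replan(state, failed):
    # achieve the failed action's intended effect with an alternative action
    return ["regrasp"] if failed == "grasp_slippery" else None

ok, final = execute_with_recovery(
    ["approach", "grasp_slippery", "lift"], frozenset(), _apply, _replan)
```

Splicing the recovery plan in front of the remaining actions preserves the original mission goal while only the failed step is repaired locally.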