2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2018.8594169

Failure Detection Using Proprioceptive, Auditory and Visual Modalities

Cited by 18 publications (22 citation statements) | References 23 publications
“…In Experiments 4a,b,c, for anomaly identification, a total of 124 trials were used for testing. 11 For anomaly identification, we had an average accuracy of 97.04%, an average precision of 97.02% and an average recall of 99.42% across the three sub-experiments. Very strong performance was achieved all around and charted in Fig.…”
Section: Experiments 4: Adaptation Results
confidence: 99%
“…In [10], language is used to generate motion, and a simple HMM is used to detect success or failure based on trajectory position information. In [11], visual, audio, and proprioceptive features are used through Hidden Markov Models (HMMs) and other heuristics on a tabletop task to detect failure. In [12], a hierarchical Dirichlet process (HDP) prior was used with HMMs and a Gaussian observation model and Gibbs sampling to do anomaly identification and multi-class classification.…”
Section: Anomaly Identification and Classification
confidence: 99%
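To make the HMM-based monitoring idea cited above concrete, here is a minimal sketch (not the implementation of [10]-[12]): a Gaussian HMM is fit on multimodal feature sequences from nominal executions using hmmlearn, and a new execution is flagged as anomalous when its average per-frame log-likelihood drops below a threshold estimated from the training trials. The feature layout, state count, and threshold rule are assumptions made for illustration.

```python
# Minimal sketch: HMM-based execution monitoring via log-likelihood thresholding.
# Assumes each trial is already reduced to a (T x D) array of multimodal features
# (e.g., proprioceptive, auditory, visual descriptors); not the cited papers' code.
import numpy as np
from hmmlearn import hmm

def train_nominal_model(nominal_trials, n_states=5, seed=0):
    """Fit one Gaussian HMM on concatenated nominal (successful) trials."""
    X = np.concatenate(nominal_trials)          # (sum of T, D)
    lengths = [len(t) for t in nominal_trials]  # per-trial lengths
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=100, random_state=seed)
    model.fit(X, lengths)

    # Illustrative threshold: a margin below the average per-frame
    # log-likelihood observed on the nominal training trials.
    scores = [model.score(t) / len(t) for t in nominal_trials]
    threshold = np.mean(scores) - 3.0 * np.std(scores)
    return model, threshold

def is_failure(model, threshold, trial):
    """Flag a trial as a failure if its average log-likelihood is below threshold."""
    return model.score(trial) / len(trial) < threshold

# Usage with synthetic stand-in data (10 nominal trials, 50 frames, 12-D features).
rng = np.random.default_rng(0)
nominal = [rng.normal(0.0, 1.0, size=(50, 12)) for _ in range(10)]
model, thr = train_nominal_model(nominal)
print(is_failure(model, thr, rng.normal(0.0, 1.0, size=(50, 12))))  # likely False
print(is_failure(model, thr, rng.normal(4.0, 1.0, size=(50, 12))))  # likely True
```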
“…They use force, sound and kinematic signals from a service robot to learn from nominal executions using hidden Markov models (HMMs) and Gaussian processes. Inceoglu et al [6] learn from several sensor modalities, using HMMs to classify extracted predicates from each modality into success and failure classes for different actions. They also present an end-to-end convolutional neural network [11], which classifies executions as success or failure, and identifies the failure types as well.…”
Section: A. Execution Monitoring in Robotics
confidence: 99%
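Similarly, the per-modality success/failure classification attributed to Inceoglu et al. [6] above can be sketched as one HMM per (modality, outcome) pair, with the per-modality decisions fused by majority vote. The modality names, feature dimensions, and fusion rule below are illustrative assumptions, not the paper's actual pipeline.

```python
# Rough sketch: per-modality success/failure HMMs with majority-vote fusion.
# Modality names and feature layouts are illustrative assumptions.
import numpy as np
from hmmlearn import hmm

MODALITIES = ["proprioceptive", "auditory", "visual"]
OUTCOMES = ["success", "failure"]

def fit_hmm(trials, n_states=3):
    """Fit one Gaussian HMM on a list of (T x D) feature arrays."""
    X = np.concatenate(trials)
    lengths = [len(t) for t in trials]
    m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50, random_state=0)
    m.fit(X, lengths)
    return m

def train(per_modality_trials):
    """per_modality_trials[modality][outcome] -> list of (T x D) arrays."""
    return {mod: {out: fit_hmm(per_modality_trials[mod][out]) for out in OUTCOMES}
            for mod in MODALITIES}

def classify(models, execution):
    """execution[modality] -> (T x D) array; per-modality argmax, then majority vote."""
    votes = []
    for mod in MODALITIES:
        scores = {out: models[mod][out].score(execution[mod]) for out in OUTCOMES}
        votes.append(max(scores, key=scores.get))
    return max(set(votes), key=votes.count)
```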
“…Depending on the nature of the failure, different sensors might be used for detection; for example, force-torque sensors for collisions, or vision and auditory sensors for external events such as objects falling. Some methods [4], [5], [6] only make use of visual data such as RGB frames, depth frames, or the output of object detection algorithms. In this paper, we propose a method for visual execution monitoring, using the videos from the robot's camera and the robot's kinematics.…”
Section: Introduction
confidence: 99%
“…[18] also introduces the tactile information to improve the auditory classification performance. In [19], the sound information can work together with visual information to detect failure in robotic manipulation. Additionally, a type of audiovisual embodied navigation task is proposed recently, in which the agent navigates to a sounding object by leveraging both visual and auditory data [20][21].…”
Section: Introduction
confidence: 99%