2021
DOI: 10.1109/tim.2020.3043118

An Autonomous Eye-in-Hand Robotic System for Elevator Button Operation Based on Deep Recognition Network

Cited by 8 publications (10 citation statements) · References 25 publications
“…There are three reasons: (6A1) the elevator button detection is not robust and is affected by lighting conditions, (6A2) the errors of the AMR and the camera are not perfectly compensated by the manipulator, and (6A3) jerky motions sometimes occur because of the system communication. By contrast, the method in Zhu et al. (2021) is shown to detect the elevator button robustly in various environments. Its failed executions are due to large orientation errors of the AMR, and it does not include retrieving actions or button light-up status checking.…”
Section: Results
confidence: 96%
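The statement above contrasts the two systems partly on button light-up status checking. As an illustration only, not code from either cited paper, the sketch below shows one plausible way such a check could be done: comparing the mean brightness of the button's image patch before and after the press. The function name, threshold, and patch values are assumptions.

```python
# Minimal sketch (hypothetical, not from the cited papers): a brightness-based
# check of whether a pressed elevator button has lit up, comparing the mean
# intensity of the button's image patch before and after the press.
import numpy as np

def button_lit_up(patch_before: np.ndarray, patch_after: np.ndarray,
                  gain_threshold: float = 1.3) -> bool:
    """Return True if the button patch got noticeably brighter after pressing.

    Both patches are H x W x 3 uint8 crops of the same button;
    `gain_threshold` is an assumed tuning constant.
    """
    # Compare mean brightness of the two patches.
    mean_before = patch_before.astype(np.float32).mean()
    mean_after = patch_after.astype(np.float32).mean()
    return mean_after > gain_threshold * max(mean_before, 1.0)

# Usage with synthetic patches: a dark button that brightens after the press.
dark = np.full((32, 32, 3), 60, dtype=np.uint8)
lit = np.full((32, 32, 3), 150, dtype=np.uint8)
print(button_lit_up(dark, lit))   # True
print(button_lit_up(dark, dark))  # False
```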
“…Its failed executions are due to large orientation errors of the AMR, and it does not include retrieving actions or button light-up status checking. In the future, the elevator button recognition method in Zhu et al. (2021) can serve as a reference for improving the performance of our robot system.…”
Section: Results
confidence: 99%
“…6D pose estimation of an object, that is, predicting the three-dimensional rotation and translation of the target object in the scene, is a key task for understanding the given scene and is useful in many real-world applications such as robotic grasping and manipulation [1][2][3], augmented reality [4,5], and autonomous navigation [6,7]. However, such application scenarios are complex and changeable due to illumination changes, sensor noise, occlusion, or even truncation between objects, so 6D object pose estimation in complex scenes remains a challenge.…”
Section: Introduction
confidence: 99%
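The quoted introduction defines a 6D pose as the three-dimensional rotation and translation of the target object in the scene. The sketch below is a minimal illustration of that definition, not an implementation from the cited work: it builds a rotation from an axis-angle via Rodrigues' formula and applies the rigid transform p' = R p + t to object-frame points; all names and values are illustrative.

```python
# Minimal sketch of what a "6D pose" denotes: a 3D rotation plus a 3D
# translation that map points from the object frame into the scene frame.
import numpy as np

def rotation_from_axis_angle(axis: np.ndarray, angle: float) -> np.ndarray:
    """Rodrigues' formula: 3x3 rotation matrix from a unit axis and an angle."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def apply_pose(R: np.ndarray, t: np.ndarray, points_obj: np.ndarray) -> np.ndarray:
    """Transform N x 3 object-frame points into the scene frame: p' = R p + t."""
    return points_obj @ R.T + t

# Example: rotate an object 90 degrees about the z-axis and shift it 0.5 m in x.
R = rotation_from_axis_angle(np.array([0.0, 0.0, 1.0]), np.pi / 2)
t = np.array([0.5, 0.0, 0.0])
corners = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
print(apply_pose(R, t, corners))
```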