2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9197124
DexPilot: Vision-Based Teleoperation of Dexterous Robotic Hand-Arm System

Fig. 1. DexPilot enabled teleoperation across a wide variety of tasks, e.g., rectifying a Pringles can and placing it inside the red bowl (upper left), inserting cups (upper right), concurrently picking two cubes with four fingers (lower left), and extracting money from a wallet (lower right). Videos are available at https://sites.google.com/view/dex-pilot.

Cited by 111 publications (98 citation statements)
References 37 publications
“…Although the delay of the robot causes some sense of contour in the actual running trajectory, the robot can follow the local motion well on the whole. In comparison, whole-process location mapping [38] requires a user to operate all trajectories, including movement, grasping, obstacle avoidance, and placement, which is a heavy burden on the user. In some situations, automatic operation can be achieved, such as peg-in-hole assembly [39] with complex requirements.…”
Section: Results
confidence: 99%
“…In addition, as we had the inverse kinematics for the delta robots, we were able to quickly translate a desired trajectory into commands to the linear actuators. This direct mapping allowed us to easily teleoperate the robot with a PS4 Controller to complete tasks that would typically require a motion tracking hand setup in order to give the robot demonstrations [21]. In the future, we plan to explore more delicate and dexterous tasks with added sensors to provide feedback when interacting with objects.…”
Section: Methods
confidence: 99%
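The statement above notes that closed-form inverse kinematics let the citing authors turn a desired end-effector trajectory directly into linear-actuator commands. As a minimal sketch of that idea for a linear delta robot: each carriage height is found by treating the fixed-length rod as the hypotenuse of a right triangle between the rail and the effector. All geometry here (rail positions, rod length, function name) is assumed for illustration and does not come from the cited work.

```python
import math

# Hypothetical linear-delta geometry: three vertical rails whose
# footprints sit at (x_i, y_i), each connected to the end effector
# by a rod of length L. Numbers are illustrative only.
L = 0.25                      # rod length in metres (assumed)
RAILS = [(0.10, 0.0),
         (-0.05, 0.0866),
         (-0.05, -0.0866)]    # rail footprints (assumed)

def inverse_kinematics(px, py, pz):
    """Return the three carriage heights that place the effector
    at (px, py, pz).

    For each rail, the rod is the hypotenuse of a right triangle
    whose horizontal leg is the rail-to-effector offset, so the
    carriage must sit at pz plus the vertical leg.
    """
    heights = []
    for (rx, ry) in RAILS:
        horiz_sq = (rx - px) ** 2 + (ry - py) ** 2
        if horiz_sq > L * L:
            raise ValueError("target outside reachable workspace")
        heights.append(pz + math.sqrt(L * L - horiz_sq))
    return heights
```

With such a closed-form map, each trajectory waypoint (e.g., driven by a PS4 controller stick) can be converted to actuator setpoints at teleoperation rates, without the hand-tracking rig the statement contrasts against.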
“…Recent advances in deep learning-based machine perception have substantially improved the accuracies of pose estimation from real-world sensory data [2][3][4][5][6]. The estimated 6-DoF object pose, represented by the translation and rotation in SE(3), serves as a compact and informative state representation for a variety of downstream tasks, such as robot grasping and manipulation [7], human-robot interactions [8], online camera calibration [9], and tele-presence robot control [10].…”
Section: Introduction
confidence: 99%
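The statement above describes a 6-DoF object pose as a translation and rotation in SE(3) used as a compact state representation. A minimal sketch of that representation as a 4x4 homogeneous transform (the example rotation and translation values are illustrative, not from the cited work):

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 homogeneous transform in SE(3) from a
    3x3 rotation matrix R and a 3-vector translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Example: rotate 90 degrees about the z-axis, then translate.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
T = make_pose(Rz, t=np.array([1.0, 0.0, 0.5]))

# Map a point from the object frame into the camera/world frame
# using homogeneous coordinates.
p_obj = np.array([1.0, 0.0, 0.0, 1.0])
p_world = T @ p_obj   # -> approximately [1.0, 1.0, 0.5, 1.0]
```

Because the pose is just six numbers (three for translation, three for rotation), it is a far more compact state than raw images, which is what makes it attractive for the grasping, manipulation, and tele-presence tasks the statement lists.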