2019
DOI: 10.1177/0278364919888565

Leveraging depth data in remote robot teleoperation interfaces for general object manipulation

Abstract: Robust remote teleoperation of high-degree-of-freedom manipulators is of critical importance across a wide range of robotics applications. Contemporary robot manipulation interfaces primarily utilize a free positioning pose specification approach to independently control each degree of freedom in free space. In this work, we present two novel interfaces, constrained positioning and point-and-click. Both novel approaches incorporate scene information from depth data into the grasp pose specification process, ef…
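A point-and-click grasp interface of the kind the abstract describes typically back-projects the clicked pixel through the depth image into a 3D camera-frame point via the standard pinhole model. The sketch below illustrates that computation only; the function name and intrinsic values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth (meters) into a 3D
    point in the camera frame, using pinhole intrinsics (fx, fy, cx, cy)."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with Kinect-like intrinsics: a click at the principal point
# maps straight down the optical axis.
p = backproject(320, 240, 0.8, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
# p == [0.0, 0.0, 0.8]
```

The resulting 3D point can then seed a grasp pose, with orientation derived from the local depth surface around the click.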

Cited by 27 publications (12 citation statements)
References 34 publications (52 reference statements)
“…Chernova et al have developed remote manipulation interfaces that are robust to high latency environments, researching how to leverage depth data in remote robot teleoperation interfaces for general object manipulation [43,44] and developing temporal models for robot classification of human interruptibility for modeling availability of collocated human crew members [45].…”
Section: Efficient Interaction Methods
confidence: 99%
“…If a 2D camera is used, the depth can be estimated by processing the acquired data [35]. Alternatively, this estimation is not needed if a 3D camera is used [36], e.g., the Microsoft Kinect.…”
Section: Computer Vision System
confidence: 99%
“…In this case, the robot can run parameterized subroutines while multi-person teams of highly trained operators analyze data from the robot and control it at various abstraction levels (from joint angle to locomotion goal), including situations with unstable communication channels (Johnson et al., 2015). These subroutines can be parameterized by selecting or moving virtual markers displaying the grasping pose (Kent et al., 2020), robot joint position (Nakaoka et al., 2014), or by using affordance templates (Hart et al., 2014). In a retrospective analysis, Yanco et al. (2015) highlight the training required for operating the robots during these trials, and report that researchers should explore new interaction methods that could be used by first responders without extensive training.…”
Section: Related Work
confidence: 99%