2008 · DOI: 10.1080/15599610802301599

Interaction Control of Robot Manipulators Using Force and Vision

Abstract: An approach to force and visual control of robot manipulators in contact with a partially known environment is proposed in this article. The environment is modeled as a rigid object of known geometry but of unknown and time-varying position and orientation. An algorithm for online estimation of the object pose is adopted, based on visual data provided by a camera as well as on forces measured during the interaction. This information is used by a force/position control scheme, in charge of managing the interact…
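The abstract describes a loop in which an online pose estimator, fed by camera images and measured contact forces, supplies a force/position controller. As a rough illustration only, the sketch below shows how such a loop might be organized; every name in it (estimate_pose, hybrid_force_position, the selection matrix S) is hypothetical and not taken from the paper, and the estimator is a stub rather than the authors' algorithm.

```python
# Minimal sketch of a force/vision interaction-control step, assuming a generic
# pose estimator and a hybrid force/position law. All names are hypothetical
# placeholders, not the scheme from the cited paper.
import numpy as np

def estimate_pose(prev_pose, image_features, contact_force):
    """Placeholder pose estimator fusing visual features and measured forces.
    Here it simply returns the previous estimate; a real estimator would run,
    e.g., a recursive filter over the object pose."""
    return prev_pose

def hybrid_force_position(pose_est, x_des, f_des, f_meas, S, kp=1.0, kf=0.1):
    """Hybrid law: position control along unconstrained directions (selection
    matrix S), force control along the constrained ones (I - S)."""
    x_err = x_des - pose_est[:3]        # position error (toy: translation only)
    f_err = f_des - f_meas              # force error
    I = np.eye(3)
    return S @ (kp * x_err) + (I - S) @ (kf * f_err)

# One control step with dummy data
pose = np.zeros(6)                      # [x, y, z, roll, pitch, yaw] estimate
S = np.diag([1.0, 1.0, 0.0])            # position control in x, y; force control in z
cmd = hybrid_force_position(estimate_pose(pose, None, np.zeros(3)),
                            x_des=np.array([0.1, 0.0, 0.0]),
                            f_des=np.array([0.0, 0.0, 5.0]),
                            f_meas=np.zeros(3), S=S)
print(cmd)                              # commanded Cartesian velocity (toy units)
```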

Cited by 13 publications (7 citation statements) · References 17 publications (13 reference statements)
“…To this end, many pHRI works rely on the physical contact (touch) between robot and operator [31]. More recently, to guarantee interaction even in the absence of direct contact, researchers have proposed the use of pointing gestures [32], as well as the integration of vision with force [33,34,35]. Also, in our work, interaction includes both vision and force.…”
Section: Research On Physical Human-Robot Collaboration
confidence: 99%
“…A camera frame O_c–x_c y_c z_c attached to the camera and an object frame O_o–x_o y_o z_o attached to the object are considered. Then, the transformation of the object’s feature point P from the object frame to the camera frame is defined as (Lippiello et al., 2008): …”
Section: System Model
confidence: 99%
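The transformation itself is truncated in the excerpt above. For orientation only, the standard rigid-body form of an object-to-camera coordinate change (which may differ in notation from the one used by Lippiello et al.) is:

```latex
% Generic object-to-camera transformation of a feature point P
% (standard rigid-body form; notation may differ from the cited paper).
p^{c} \;=\; o^{c}_{o} + R^{c}_{o}\, p^{o},
\qquad
\begin{bmatrix} p^{c} \\ 1 \end{bmatrix}
=
\begin{bmatrix} R^{c}_{o} & o^{c}_{o} \\ 0^{\top} & 1 \end{bmatrix}
\begin{bmatrix} p^{o} \\ 1 \end{bmatrix}
```

where p^o and p^c are the coordinates of P in the object and camera frames, R^c_o is the rotation matrix of the object frame relative to the camera frame, and o^c_o is the position of the object-frame origin expressed in the camera frame.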
“…As a robot is expected to be more autonomous and flexible to accomplish complicated tasks in unknown environments, the need to integrate a number of different sensors into the robot system becomes increasingly important. Consequently, multisensor-based control schemes, which can compensate for changes in the environment and uncertainties in the dynamic models without explicit human intervention or reprogramming, have been proposed and developed (Xiao et al., 2000; Lippiello et al., 2008; Long et al., 2014). Among the various sensors, force and vision sensors are very widely used.…”
Section: Introduction
confidence: 99%
“…In this context, the robot must infer the user intention, to interact more naturally from the human perspective [8,9,10]. To this end, both visual (e.g., based on Microsoft Kinect™ [11]) and force feedback have been used [12,13,14,15,16]. Generally, we believe that direct sensor-based methods, such as visual servoing [17], provide better solutions for intuitive HRI than planning techniques, which require a priori models of the environment and agents [18].…”
Section: Introduction
confidence: 99%
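The excerpt above mentions visual servoing as a direct sensor-based alternative to model-based planning. The sketch below shows the classical image-based visual servoing law v = -λ L⁺ (s - s*) for point features; it is a generic textbook formulation, not code from any of the cited works, and the numerical values are illustrative.

```python
# Minimal image-based visual servoing (IBVS) sketch: v = -lambda * L^+ (s - s*),
# using the standard point-feature interaction matrix. Values are illustrative.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,    -(1 + x**2),  y],
        [0.0,      -1.0 / Z, y / Z, 1 + y**2, -x * y,      -x],
    ])

def ibvs_velocity(points, points_des, depths, gain=0.5):
    """Camera twist (v, omega) driving current features toward desired ones."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(points_des)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Two tracked points, slightly off their desired positions, both at 1 m depth
v = ibvs_velocity(points=[(0.10, 0.05), (-0.12, 0.08)],
                  points_des=[(0.08, 0.04), (-0.10, 0.06)],
                  depths=[1.0, 1.0])
print(v)   # 6-vector: linear and angular camera velocity command
```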