2020
DOI: 10.1109/lra.2020.2969953
Eye-in-Hand Visual Servoing Enhanced With Sparse Strain Measurement for Soft Continuum Robots

Cited by 62 publications
(43 citation statements)
References 21 publications
“…However, for schemes such as those in the studies by Yip and Camarillo (2014) and Lee et al. (2017a), the question of how to characterize and resolve outliers with low confidence in the ground truth needs to be considered carefully. Although adding sensors excessively is not recommended, several self-contained measurement devices, such as fiber Bragg gratings (FBGs), which can readily accommodate the flexible continuum body, have the potential for sensor fusion with positional sensors (Lun et al., 2019; Wang et al., 2020; Wang et al., 2021). Meanwhile, the combination of analytical and data-driven models will be another trend in continuum robot control.…”
Section: Discussion
confidence: 99%
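The excerpt above suggests fusing FBG strain-derived estimates with positional sensors. A minimal sketch of that idea, assuming two independent noisy estimates of the same tip coordinate (all values hypothetical), is the scalar Kalman-style fusion below:

```python
import numpy as np

def fuse(x_fbg, var_fbg, x_cam, var_cam):
    """Optimal linear fusion of two independent estimates of one state.

    x_fbg / var_fbg: FBG strain-derived estimate and its variance (hypothetical).
    x_cam / var_cam: camera (positional) estimate and its variance (hypothetical).
    """
    k = var_fbg / (var_fbg + var_cam)   # gain: trust the camera more when its variance is lower
    x = x_fbg + k * (x_cam - x_fbg)     # fused estimate
    var = (1.0 - k) * var_fbg           # posterior variance, smaller than either input
    return x, var

# Example: a noisy FBG estimate corrected by a more precise camera reading.
x, var = fuse(x_fbg=10.2, var_fbg=0.04, x_cam=9.8, var_cam=0.01)
print(round(x, 3), round(var, 4))
```

Note the fused variance is below both input variances, which is the usual argument for combining self-contained strain sensing with positional measurements rather than relying on either alone.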
“…The eye-in-hand configuration has several advantages, as it enables more flexible and precise viewing of target objects in the workspace, thus augmenting the versatility and accuracy of robotic manipulation [32]. However, because an eye-in-hand camera moves with the robot, conventional frame-based cameras suffer from motion blur, which imposes constraints on ambient illumination and maximum operational speeds [20], [21]. Event-based vision has the potential to address these challenges in conventional robotic visual servoing.…”
Section: A. Related Work
confidence: 99%
“…In conventional visual servoing, frame-based cameras are mainly used to detect, track, and match visual features by processing intensity images at consecutive frames, which in practice delays visual processing and the consequent robot action. Moreover, they face issues of motion blur and often require increased ambient illumination [20], [21], which undermines their capabilities in high-speed operation and varying light conditions. These shortcomings of frame-based cameras can limit their applicability for visual servoing in both structured and unstructured environments.…”
Section: Introduction
confidence: 99%
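The excerpts above refer to feature-error-driven visual servoing. A minimal sketch of the classic image-based visual servoing (IBVS) law, v = -λ L⁺ (s - s*), is shown below; the interaction matrix L here is a hypothetical fixed toy value, whereas real systems recompute it from feature coordinates and depth:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Camera velocity command from image-feature error (IBVS sketch).

    s: current image-feature vector; s_star: desired features;
    L: interaction (image Jacobian) matrix -- assumed known here.
    """
    error = s - s_star                       # image-space feature error
    return -gain * np.linalg.pinv(L) @ error # drive the error toward zero

L = np.eye(2)                                # toy 2x2 interaction matrix
v = ibvs_velocity(np.array([0.4, -0.2]), np.zeros(2), L)
print(v)
```

The cited concern about frame-based cameras applies at the input of this loop: motion blur and frame latency corrupt or delay the feature vector s, which is why the quoted work considers event-based sensing for the same control structure.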
“…These special electronic image sensors can be used to simulate flexible visual systems [35]–[37]. Furthermore, flexible photodetectors are the basis of newer bionic vision applications. Using flexible electronic technology to mimic biological photoreceptor arrays can help realize bionic vision applications.…”
Section: Introduction
confidence: 99%