2020
DOI: 10.1109/lra.2020.2977259

PCA-Based Visual Servoing Using Optical Coherence Tomography

Abstract: This article deals with the development of a vision-based control law to achieve high-accuracy automatic six degrees of freedom (DoF) positioning tasks. The objective of this work is to be able to replace a biological sample under an optical device for a non-invasive depth examination at any given time (i.e., performing repetitive and accurate optical characterizations of the sample). The optical examination, also called optical biopsy, is performed thanks to an optical coherence tomography (OCT) system. The O…
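To make the abstract's idea more concrete, below is a minimal, hypothetical Python sketch of a PCA-based visual servoing loop: acquired images (or OCT data) are projected onto a PCA subspace to obtain a compact feature vector s, and a standard velocity control law drives s toward the desired features s*. The helper names (pca_basis, pca_features, servo_step), the random stand-in data, and the placeholder interaction-matrix pseudo-inverse are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def pca_basis(training_images, n_components=6):
    """Build a PCA subspace from a stack of training images (one image per slice)."""
    X = training_images.reshape(len(training_images), -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]            # shapes: (D,), (k, D)

def pca_features(image, mean, basis):
    """Project one image onto the PCA subspace -> low-dimensional feature vector s."""
    return basis @ (image.ravel().astype(float) - mean)

def servo_step(s, s_star, L_pinv, gain=0.5):
    """Classic visual-servoing law: v = -lambda * L^+ * (s - s*)."""
    return -gain * (L_pinv @ (s - s_star))

# Example usage with random data standing in for OCT images (illustrative only).
rng = np.random.default_rng(0)
train = rng.random((50, 64, 64))
mean, basis = pca_basis(train, n_components=6)
s_star = pca_features(train[0], mean, basis)   # desired (reference) features
s = pca_features(train[1], mean, basis)        # current features
L_pinv = np.eye(6)                             # placeholder: assumed estimated elsewhere
v = servo_step(s, s_star, L_pinv)              # 6-DoF velocity command
```

In practice the interaction matrix relating feature variations to camera/sample motion would be estimated (for example numerically, from small exploratory motions) rather than taken as the identity used here as a placeholder.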

Cited by 7 publications (1 citation statement)
References 28 publications (30 reference statements)
“…Most of the aforementioned state of the art on visual servoing is based on 2D image information and the literature using 3D data for visual servoing, e.g., depth maps or point clouds, is very much limited. Very few recent works have reported such methods [16]- [18]. A particular advantage of using 3D data over 2D images is that they are well-suited for complex environments, i.e., texture-less, varying light, unstructured etc., and avoid computation of complex pose estimations.…”
Section: Introduction
Citation type: mentioning
confidence: 99%