2008
DOI: 10.1109/tro.2008.918043
Localization and Matching Using the Planar Trifocal Tensor With Bearing-Only Data

Abstract: This paper addresses the problem of robot and landmark localization from bearing-only data in three views, simultaneously with the robust association of these data. The localization algorithm is based on the 1D trifocal tensor, which linearly relates the observed data and the robot localization parameters. The aim of this work is to bring this useful geometric construction from computer vision closer to robotic applications. One contribution is the evaluation of two linear approaches to estimating the 1D tensor: the…
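The abstract notes that the 1D trifocal tensor linearly relates the observed bearings to the localization parameters. As a hedged illustration of that linearity (not the paper's own implementation), the 2×2×2 tensor can be estimated from bearing correspondences in three views as the null vector of the stacked trilinear constraints; the function name and structure below are a hypothetical sketch of this standard linear scheme, which needs at least seven correspondences.

```python
import numpy as np

def estimate_1d_trifocal_tensor(theta1, theta2, theta3):
    """Linear estimate of the 2x2x2 1D trifocal tensor from N >= 7
    bearing correspondences (angles in radians) across three views."""
    u = np.stack([np.cos(theta1), np.sin(theta1)], axis=1)  # (N, 2) bearing directions
    v = np.stack([np.cos(theta2), np.sin(theta2)], axis=1)
    w = np.stack([np.cos(theta3), np.sin(theta3)], axis=1)
    # Each correspondence yields one homogeneous linear equation
    # sum_{ijk} T_ijk * u_i * v_j * w_k = 0 in the 8 tensor entries.
    A = np.einsum('ni,nj,nk->nijk', u, v, w).reshape(len(u), 8)
    # The tensor (up to scale) is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(2, 2, 2)
```

With exact bearings the trilinear residual of the estimate is numerically zero; with noisy data the same SVD solution minimizes the algebraic error in the least-squares sense, which is the appeal of the linear formulation the abstract refers to.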


Cited by 48 publications (29 citation statements)
References 29 publications
“…On the other hand, the second part of the table shows how the precision is preserved even if the initial orientation is fixed to φ1 = 0 in the controller for all the cases. We can compute the trifocal tensor from omnidirectional cameras [9], so we can assume no restriction on the field of view; consequently, the large rotation that the robot performs in the outer-curve motion case can be carried out. Another option for keeping the target in the field of view is to perform an initial rotation to reach the condition t_x1 = t_x2 = 0, and then execute the rectilinear motion to the target.…”
Section: Simulation Results (mentioning; confidence: 99%)
“…According to the existence conditions of sliding modes, the bounded controller (16) is able to locally stabilize the system (9). Its region of attraction grows as the control gains M and N increase.…”
Section: Stability Analysis (mentioning; confidence: 99%)
“…The robots usually start their operation at unknown poses and, before merging their maps, they must agree on a common reference frame. This common frame needs to be computed at least once, and usually only requires the robots to know the relative poses of their nearby teammates; see, e.g., [24]-[26], where different methods for computing robot-to-robot measurements are presented. There exist several distributed algorithms that combine these measurements to produce the common frame, e.g., [27]-[30] and references therein.…”
Section: A. Initial Correspondence and Data Association (mentioning; confidence: 99%)
“…SIFT [15] has become the most widely used feature-extraction approach. It has also been applied directly to omnidirectional images [10], although it was not designed for them. The SIFT approach has inspired various works that try to replicate its good results on different imaging systems, in particular wide-angle cameras.…”
Section: Introduction (mentioning; confidence: 99%)