2003
DOI: 10.1016/s0921-8890(03)00004-6
Rover navigation using stereo ego-motion




Cited by 225 publications (139 citation statements)
References 23 publications
“…Also unlike ground robots, the camera pointing angle on a helicopter is not restricted by proximity to the terrain surface - camera angles from horizontal (zero degrees down) to nadir view (90 degrees down) are possible. Our work complements the study in (Olson et al., 2003) of the optimal camera field of view for planetary rover visual odometry.…”
Section: GPS Satellites
confidence: 74%
“…Our visual odometry algorithm is based on the approach presented in (Olson et al., 2003) and originally described in (Matthies, 1989). We track point landmarks through sequential stereo image pairs, triangulating the 3D positions of the landmarks at each time step from their projections into the left and right camera images.…”
Section: Stereo Visual Odometry
confidence: 99%
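The triangulation step described in that citation statement can be sketched for a rectified pinhole stereo rig. The function name and all camera parameters below are illustrative assumptions, not values from the cited papers:

```python
def triangulate(xl, xr, y, fx, fy, cx, cy, baseline):
    """Recover a 3D point (camera frame) from its pixel projections in a
    rectified left/right stereo pair. In a rectified rig the two image rows
    align, so depth follows from the horizontal disparity d = xl - xr via
    Z = fx * baseline / d, and X, Y come from inverting the pinhole model."""
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = fx * baseline / d          # depth from disparity
    X = (xl - cx) * Z / fx         # back-project the left-image column
    Y = (y - cy) * Z / fy          # back-project the shared image row
    return (X, Y, Z)
```

For example, with a hypothetical rig (fx = fy = 500 px, principal point (320, 240), 0.1 m baseline), projections xl = 420, xr = 410, y = 290 triangulate to the point (1.0, 0.5, 5.0) m. Chaining these 3D landmark positions across time steps is what lets the visual-odometry stage estimate the inter-frame motion.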
“…Most use feature tracking, sometimes with stereo vision, in concert with a global positioning system (GPS) and inertial sensor-based data (Agrawal, Konolige, & Bolles, 2007; Alenyà, Martinez, & Torras, 2004; Olson et al., 2003). Autonomous robots are assessed in terms of their ability to navigate from some start point to a goal while avoiding obstacles.…”
Section: Discussion
confidence: 99%
“…VO (Olson et al., 2003) tracks features across pairs of images to measure changes in pose. Features are typically extracted using image processing techniques such as Scale Invariant Feature Transforms (SIFT) (Lowe, 1999; Barfoot, 2005) or Speeded Up Robust Features (SURF) (Bay et al., 2006).…”
Section: Dead-reckoning Techniques
confidence: 99%
“…Since each estimate depends on the previous one, error may continue to grow without bound. Olson et al. (2003) reduced this error growth to linear rates by including absolute measurements of orientation in the VO algorithm. In field experiments, Konolige et al. (2007) showed a similar VO algorithm to yield less than 0.1% error over a 9 km traverse in rough terrain.…”
Section: Dead-reckoning Techniques
confidence: 99%
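The error-growth point in that statement is easy to see in a small Monte Carlo sketch. All noise values and names here are hypothetical illustrations, not parameters from the cited work: chaining relative pose estimates lets heading error random-walk, while an absolute orientation measurement keeps heading error bounded at every step, so position error grows only roughly linearly.

```python
import math
import random

def simulate(steps, step_len=1.0, heading_noise=0.01,
             absolute_heading=False, seed=0):
    """Dead-reckon along a nominally straight path of `steps` unit moves.
    Without absolute heading, per-step heading noise accumulates (a random
    walk, as in pure chained VO); with it, each step's heading error is an
    independent bounded draw (as if an absolute orientation sensor is fused
    in). Returns the final position error relative to the true endpoint."""
    rng = random.Random(seed)
    x = y = heading = 0.0
    for _ in range(steps):
        if absolute_heading:
            heading = rng.gauss(0.0, heading_noise)   # bounded heading error
        else:
            heading += rng.gauss(0.0, heading_noise)  # accumulating drift
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
    return math.hypot(x - steps * step_len, y)        # true endpoint: (steps, 0)
```

Averaged over a few seeds, the absolute-heading runs end up far closer to the true endpoint than the drift runs, mirroring the unbounded-versus-linear distinction drawn in the citation statement.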