2012 IEEE International Conference on Robotics and Automation
DOI: 10.1109/icra.2012.6224654

Visual Teach and Repeat using appearance-based lidar

Cited by 35 publications (24 citation statements)
References 26 publications
“…The navigation system is based on visual teach and repeat (T&R) [18], [19], which is often employed in the space domain; a similar approach can be used, for example, in sample-and-return missions [20], [21]. T&R enables a mobile robot to autonomously follow a previously driven path with high accuracy.…”
Section: A System
confidence: 99%
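The teach-and-repeat idea summarised in this excerpt is simple enough to sketch: a teach pass records a sequence of keyframes along a manually driven path, and the repeat pass matches each live view against a small window of nearby keyframes to derive a steering correction. The following is a minimal illustrative sketch, not the cited paper's (lidar-based) implementation; the names Keyframe, match_score, estimate_lateral_offset, and the gain k_heading are all hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    pose: tuple       # (x, y, heading) recorded on the teach pass
    features: list    # appearance features seen at this pose (placeholder)

class TeachRepeat:
    """Minimal teach-and-repeat skeleton; all names are illustrative."""

    def __init__(self):
        self.path = []    # ordered keyframes from the teach pass
        self.cursor = 0   # index of the keyframe we believe is nearest

    def teach(self, pose, features):
        # Teach pass: store keyframes along the manually driven route.
        self.path.append(Keyframe(pose, features))

    def repeat_step(self, live_features, k_heading=0.5):
        # Repeat pass: match only against a small window of keyframes
        # around the current path position, then steer back toward the path.
        window = self.path[self.cursor:self.cursor + 3]
        scores = [match_score(kf.features, live_features) for kf in window]
        best = scores.index(max(scores))
        self.cursor += best
        lateral = estimate_lateral_offset(window[best].features, live_features)
        return -k_heading * lateral  # proportional steering correction

def match_score(stored, live):
    # Placeholder similarity; a real system compares feature descriptors.
    return -abs(len(stored) - len(live))

def estimate_lateral_offset(stored, live):
    # Placeholder; a real system estimates relative pose from matched features.
    return 0.0
```

Restricting the match to a window around the current path index is what lets such systems stay fast and avoid global localisation during the repeat pass.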
“…Figure 6 provides an overview of our field robot hardware configuration. Details of the hardware configuration and the original field experiment can be found in [18]. Figure 7 shows that, without calibration, the rover pose estimate suffered from a slow drift in pitch.…”
Section: Driving Results
confidence: 99%
“…Once we have associated these scene signatures, we can perform local, metric pose estimation. This approach of predicting where the nearest topological node is and then localising against the map is similar to teach-and-repeat systems such as McManus et al. [23] and Furgale and Barfoot [1], except that our map keyframes are separated by larger distances. Figure 6 presents the localisation results for the 5 live runs against our map, which contains a bank of trained classifiers per place.…”
Section: Experiments and Results
confidence: 99%
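The two-stage scheme this excerpt describes (predict the nearest topological node, then run local metric pose estimation against that node's keyframe) can be sketched as below. This is a sketch under stated assumptions, not the cited system: the node layout and the align callback standing in for scan/feature registration are hypothetical.

```python
import math

def nearest_node(nodes, odom_pose):
    """Predict the closest topological node from the odometry estimate.
    nodes: list of (node_id, (x, y)) map keyframe positions (assumed layout)."""
    return min(nodes, key=lambda n: math.dist(n[1], odom_pose[:2]))

def localise(nodes, odom_pose, live_scan, align):
    # Stage 1: topological prediction narrows the search to one keyframe.
    node_id, node_xy = nearest_node(nodes, odom_pose)
    # Stage 2: local, metric pose estimation against that keyframe only.
    # `align` is a placeholder for registration of the live data against the
    # stored keyframe, returning a relative offset (dx, dy, dtheta).
    rel = align(node_id, live_scan)
    return node_id, rel

# Usage with a trivial aligner that reports zero offset:
nodes = [(0, (0.0, 0.0)), (1, (10.0, 0.0)), (2, (20.0, 0.0))]
node, rel = localise(nodes, odom_pose=(9.2, 0.4, 0.0), live_scan=None,
                     align=lambda nid, scan: (0.0, 0.0, 0.0))
print(node, rel)  # -> 1 (0.0, 0.0, 0.0)
```

The design point the excerpt makes is that with keyframes spaced farther apart, the topological prediction step carries more of the burden before metric localisation takes over.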