2022
DOI: 10.1109/lra.2022.3166457
Cross-Modal Knowledge Distillation for Depth Privileged Monocular Visual Odometry

Cited by 6 publications (2 citation statements)
References 40 publications
“…The authors in [24] further proposed to utilize autoencoder networks to learn better representations for their optical flow maps. To eliminate noise from input observations, a number of researchers have also investigated attention-based VO methods [21,26,27,28,32,36,37,38] and uncertainty-based VO methods [22,41,44,45] to mitigate the impact of moving objects in their input observations.…”
Section: Related Work
confidence: 99%
“…[32,33,34,35]. Attention mechanisms seek clues about possible candidates from intermediate representations [21,26,27,28,29,32,36,37,38,39]. Uncertainty estimation, on the other hand, implicitly captures noise and enables the models to respond to the measured stochasticity inherent in the observations [11,22,40,41,42,43,44,45].…”
Section: Introduction
confidence: 99%