2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2018.8594358

VLASE: Vehicle Localization by Aggregating Semantic Edges

Abstract: In this paper, we propose VLASE, a framework to use semantic edge features from images to achieve on-road localization. Semantic edge features denote edge contours that separate pairs of distinct objects such as building-sky, road-sidewalk, and building-ground. While prior work has shown promising results by utilizing the boundary between prominent classes such as sky and building using skylines, we generalize this approach to consider semantic edge features that arise from 19 different classes. Our localizatio…
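The semantic edges the abstract describes are boundaries between pairs of distinct classes in a per-pixel segmentation map. A minimal sketch of extracting such class-pair boundaries with numpy is shown below; this is an illustrative toy, not the VLASE implementation, and the class numbering is an assumption for the example.

```python
import numpy as np

def semantic_edges(labels: np.ndarray):
    """Return a dict mapping sorted class pairs -> list of (row, col)
    pixel coordinates lying on the boundary between those classes."""
    edges = {}
    h, w = labels.shape
    # Compare each pixel with its right and bottom neighbour; a label
    # change between neighbours marks a semantic edge pixel.
    for dr, dc in ((0, 1), (1, 0)):
        a = labels[: h - dr, : w - dc]
        b = labels[dr:, dc:]
        rows, cols = np.nonzero(a != b)
        for r, c in zip(rows, cols):
            pair = tuple(sorted((int(a[r, c]), int(b[r, c]))))
            edges.setdefault(pair, []).append((int(r), int(c)))
    return edges

# Toy label map (hypothetical class ids): 0 = sky, 1 = building, 2 = road
lab = np.array([
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 1],
    [2, 2, 2, 2],
])
e = semantic_edges(lab)
print(sorted(e))  # class pairs that share a boundary
```

In the paper these boundary pixels (from 19 classes) are aggregated into a localization descriptor; here we only show the edge-pixel extraction step.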

Cited by 43 publications (29 citation statements) · References 55 publications
“…Visual localization is the problem of estimating the camera pose of an image [8,34], typically from a set of 2D-3D matches between image pixels and 3D scene points. In the context of long-term visual localization [58,63-65,74], semantics are proven useful to be able to handle variations in scene geometry and appearance, e.g., due to seasonal changes. These methods are based on the idea that the semantic meaning of a scene part is invariant to such changes.…”
Section: Introduction
confidence: 99%
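The statement above describes pose estimation from 2D-3D correspondences. One classical way to recover a camera from such matches is the Direct Linear Transform (DLT); the sketch below is a minimal self-contained illustration of that idea, not the method of any cited paper, and the synthetic camera setup is an assumption for the example.

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Direct Linear Transform: recover the 3x4 projection matrix P
    (up to scale) from >= 6 2D-3D correspondences by solving A p = 0."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = [X, Y, Z, 1.0]
        # Each correspondence contributes two linear equations in P.
        A.append([*Xh, 0, 0, 0, 0, *(-u * x for x in Xh)])
        A.append([0, 0, 0, 0, *Xh, *(-v * x for x in Xh)])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # null-space vector, reshaped to 3x4

def project(P, pts3d):
    """Project 3D points with P and dehomogenize to pixel coordinates."""
    Xh = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

rng = np.random.default_rng(0)
# Hypothetical ground-truth camera: identity rotation, shifted along z.
P_true = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
pts3d = rng.uniform(-1, 1, size=(8, 3))
pts2d = project(P_true, pts3d)
P_est = dlt_projection_matrix(pts3d, pts2d)
err = np.abs(project(P_est, pts3d) - pts2d).max()
print(f"max reprojection error: {err:.2e}")
```

In practice robust solvers (e.g., RANSAC around a minimal PnP solver) are used instead of a plain least-squares DLT, since real 2D-3D matches contain outliers.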
“…Besides using learned features that are more robust to changes in viewing conditions, long-term localization approaches also use semantic image segmentation (Budvytis et al 2019; Garg et al 2019; Larsson et al 2019; Stenborg et al 2018; Schönberger et al 2018; Seymour et al 2019; Shi et al 2019; Taira et al 2019; Toft et al 2017, 2018; Wang et al 2019; Yu et al 2018). These methods are based on the observation that the semantic meaning of scene elements, in contrast to their appearance, is invariant to changes.…”
Section: Semantic Visual Localization
confidence: 99%
“…These methods are based on the observation that the semantic meaning of scene elements, in contrast to their appearance, is invariant to changes. Semantic image segmentations are thus used as an invariant representation for image retrieval (Arandjelović and Zisserman 2014; Toft et al 2017; Yu et al 2018), to verify 2D-3D matches (Budvytis et al 2019; Larsson et al 2019; Stenborg et al 2018; Toft et al 2018) and camera pose estimates (Shi et al 2019; Stenborg et al 2018; Taira et al 2019; Toft et al 2018), for learning local features (Garg et al 2019; Schönberger et al 2018), and as an additional input to learning-based localization approaches (Budvytis et al 2019; Seymour et al 2019; Wang et al 2019).…”
Section: Semantic Visual Localization
confidence: 99%
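The statement above mentions using semantic segmentations as an appearance-invariant representation for image retrieval. A minimal sketch of that idea, assuming we simply compare L1-normalized class-frequency histograms (a deliberately simplified stand-in for the descriptors used in the cited works, with 19 classes as in VLASE):

```python
import numpy as np

def class_histogram(labels: np.ndarray, num_classes: int) -> np.ndarray:
    """L1-normalised histogram of semantic class frequencies."""
    h = np.bincount(labels.ravel(), minlength=num_classes).astype(float)
    return h / h.sum()

def retrieve(query, database, num_classes=19):
    """Return database indices sorted by semantic-layout similarity
    (smaller L1 histogram distance = more similar)."""
    q = class_histogram(query, num_classes)
    dists = [np.abs(q - class_histogram(img, num_classes)).sum()
             for img in database]
    return np.argsort(dists)

# Toy example: the query shares its class statistics with db[1], not db[0].
query = np.array([[0, 0], [1, 2]])
db = [np.array([[3, 3], [3, 3]]), np.array([[0, 0], [2, 1]])]
print(retrieve(query, db)[0])  # index of the best-matching database image
```

A bag-of-classes histogram discards spatial layout; the cited methods retain more structure (e.g., where boundaries fall in the image), but the invariance argument is the same: class statistics change far less across seasons than raw appearance does.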
“…[3] uses object detection and random finite set representations to localize against a map of semantic objects. Semantic segmentations are for example used in [69] to evaluate the consistency of feature matches for localization, in [63] for geo-location including discovery of commonly occurring scene layouts, and in [75] to obtain semantic edges that can be used for localization. [17] extract semantic graphs from semantic segmentation and use them to compute random walk descriptors for fast matching against a database graph.…”
Section: B. Exploiting Semantics for Localization
confidence: 99%