2010
DOI: 10.1177/0278364910373409

Classification and Semantic Mapping of Urban Environments

Abstract: In this paper we address the problem of classifying objects in urban environments based on laser and vision data. We propose a framework based on Conditional Random Fields (CRFs), a flexible modeling tool allowing spatial and temporal correlations between laser returns to be represented. Visual features extracted from color imagery as well as shape features extracted from 2D laser scans are integrated in the estimation process. The paper contains the following novel developments: (1) a probabilistic formulatio…
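As a rough illustration of the kind of model the abstract describes, the sketch below shows a pairwise CRF over laser returns, with unary scores computed from concatenated laser-shape and image-color features and a Potts smoothing term between neighboring returns. This is not the authors' formulation: the feature dimensions, the linear unary model, the toy weights, and the use of ICM for approximate inference are all assumptions made only for this example.

```python
# Minimal, illustrative sketch (not the paper's implementation) of a pairwise
# CRF labeling laser returns: unary potentials come from concatenated
# laser-shape and image-color features, and a Potts pairwise term encodes
# spatial smoothing between neighboring returns. Feature dimensions, the
# linear unary model, and the ICM inference routine are assumptions.
import numpy as np

def unary_scores(features, weights):
    """Per-node class scores: features is (N, D), weights is (C, D)."""
    return features @ weights.T                  # shape (N, C)

def icm_decode(features, edges, weights, pairwise_strength=1.0, iters=10):
    """Approximate MAP labeling by Iterated Conditional Modes (ICM)."""
    scores = unary_scores(features, weights)     # (N, C)
    n_nodes, _ = scores.shape
    labels = scores.argmax(axis=1)               # independent initialization

    # adjacency lists from the undirected edge set
    neighbors = [[] for _ in range(n_nodes)]
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)

    for _ in range(iters):
        changed = False
        for i in range(n_nodes):
            local = scores[i].copy()
            # Potts term: reward agreeing with each neighbor's current label
            for j in neighbors[i]:
                local[labels[j]] += pairwise_strength
            best = local.argmax()
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:
            break
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_returns, n_shape, n_color, n_classes = 20, 4, 3, 3
    feats = rng.normal(size=(n_returns, n_shape + n_color))    # laser + image features
    W = rng.normal(size=(n_classes, n_shape + n_color))        # toy unary weights
    chain = [(i, i + 1) for i in range(n_returns - 1)]         # neighboring returns
    print(icm_decode(feats, chain, W))
```

In the paper itself, learned parameters and temporal correlations between scans would take the place of the fixed Potts strength and random weights used here.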

Cited by 58 publications (33 citation statements) · References 50 publications
“…Thus far, most robotic scene understanding work has focused on outdoor scenes captured with laser scanners, where technologies have matured and enabled high-impact applications such as mapping and autonomous driving [2], [25], [21], [7], [30]. Indoor scenes, in comparison, prove more challenging and cover a wider range of objects, scene layouts, and scales…”
Section: Introduction
confidence: 99%
“…One way to label a multiframe scene would be to directly work with the merged point cloud. Point cloud labeling has worked very well for outdoor scenes [21], [7], [30], and the recent work of [1] illustrates how labeling can be done on indoor 3D scenes, where local features are computed from segments of point clouds and integrated in a Markov Random Field (MRF)…”
Section: Introduction
confidence: 99%
“…In these methods the graph structure is typically induced by a partitioning of 3D point clouds. The authors in [5] consider 2D semantic mapping of street scenes with laser and image data, providing a computationally expensive solution with a graph induced by Delaunay triangulation. Both laser and image measurements are used in [20], where an efficient solution is provided that considers only the vehicle object class…”
Section: Related Work
confidence: 99%
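For concreteness, the snippet below sketches the kind of graph construction the statement above mentions: the neighborhood edges of a CRF/MRF can be taken from a Delaunay triangulation of the 2D points. The point set is synthetic and only the edge extraction is shown; this is an assumption-level illustration, not the cited paper's pipeline.

```python
# Sketch of inducing a neighborhood graph from a Delaunay triangulation of
# 2D points, as in the graph construction mentioned in the statement above.
# The point cloud is synthetic; only the edge-extraction step is illustrated.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(points):
    """Return the unique undirected edges of the Delaunay triangulation."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:          # each 2D simplex is a triangle (i, j, k)
        for a in range(3):
            i, j = int(simplex[a]), int(simplex[(a + 1) % 3])
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)

if __name__ == "__main__":
    pts = np.random.default_rng(1).uniform(size=(30, 2))   # toy 2D point positions
    print(len(delaunay_edges(pts)), "edges induced by the triangulation")
```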
“…The conventional way to approach this problem is to constrain the representation to only one of the modalities while integrating information from the other, discarded domain as features. That is, the approach can be 2-D driven [5, 8-12, 1], in that reasoning is done in the image while integrating 3-D features, or the approach can be 3-D driven [7, 13-15], in that the predictions are made on the 3-D data while integrating 2-D features. These approaches are typically only applicable when the two modalities are in correspondence…”
Section: Motivation and Related Work
confidence: 99%
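The "in correspondence" requirement in the last statement can be made concrete with a small sketch: attaching per-pixel (2-D) features to 3-D points by projecting the points through a calibrated pinhole camera. The intrinsic matrix, the placeholder image, and the nearest-pixel color lookup below are assumptions for illustration only, not taken from any of the cited papers.

```python
# Sketch of the modality correspondence the quote refers to: projecting 3-D
# points into a calibrated image so per-pixel (2-D) features can be attached
# to the 3-D data. The intrinsic matrix K and the image are placeholders.
import numpy as np

def project_points(points_cam, K):
    """Project Nx3 points (camera frame, z > 0) to pixel coordinates."""
    uv_hom = (K @ points_cam.T).T               # (N, 3) homogeneous pixels
    return uv_hom[:, :2] / uv_hom[:, 2:3]       # divide by depth

def attach_colors(points_cam, K, image):
    """Look up an RGB value for each 3-D point that lands inside the image."""
    uv = np.round(project_points(points_cam, K)).astype(int)
    h, w = image.shape[:2]
    valid = (points_cam[:, 2] > 0) & \
            (uv[:, 0] >= 0) & (uv[:, 0] < w) & \
            (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(points_cam), 3), dtype=image.dtype)
    colors[valid] = image[uv[valid, 1], uv[valid, 0]]   # row = v, col = u
    return colors, valid

if __name__ == "__main__":
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # toy intrinsics
    img = np.zeros((480, 640, 3), dtype=np.uint8)
    pts = np.array([[0.1, 0.0, 2.0], [5.0, 5.0, 1.0]])           # second point falls outside
    print(attach_colors(pts, K, img)[1])                          # -> [ True False]
```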