2014
DOI: 10.1007/978-3-319-10590-1_44

Geometry Driven Semantic Labeling of Indoor Scenes

Cited by 51 publications (45 citation statements)
References 31 publications
“…However, they do not go beyond the depth-based features or priors. In this paper, we show how to incorporate depth information into the various components of a random field model and then evaluate the contribution made by each component in enhancing semantic labeling performance (Khan et al 2014b). Our framework is particularly inspired by the works on semantic labeling of RGBD data (Silberman and Fergus 2011; Silberman et al 2012), considering long-range interactions, parametric learning (Szummer et al 2008; Tsochantaridis et al 2004) and geometric reconstruction (Rabbani et al 2006).…”
Section: Related Work (mentioning)
confidence: 99%
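The passage above refers to incorporating depth into "the various components of a random field model". As a generic illustration only (a standard pairwise CRF sketch, not the specific formulation of the cited paper), the energy of such a model over segment labels y, given appearance features x and depth cues d, can be written as

E(\mathbf{y} \mid \mathbf{x}, \mathbf{d}) = \sum_{i \in \mathcal{V}} \psi_u(y_i \mid x_i, d_i) + \sum_{(i,j) \in \mathcal{E}} \psi_p(y_i, y_j \mid x_i, x_j, d_i, d_j)

where the unary potentials \psi_u score each label from appearance and depth-derived features (e.g. height above ground, surface normals), and the pairwise potentials \psi_p enforce label smoothness, for instance via a contrast-sensitive Potts term whose weight can also decay with the depth difference |d_i - d_j|. Depth can thus enter the unary terms, the pairwise terms, or both, which is the sense in which the model's "components" are depth-augmented.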
“…[table residue, accuracy (%): Couprie et al. [29] 64.5 / 63.5; Khan et al. [30] 69.2 / 65.6; Stückler et al. [25] 70.9 / 67.0; Müller and Behnke [19] 72.3 / 71.9; Wolf et al. [18] 72. (truncated)] …networks, the latter model required training in two steps. Additionally, the feature learning at multiple scales approach in [5] seems to be beneficial when compared to our single scale model.…”
Section: Accuracy (%) (mentioning)
confidence: 99%
“…The dataset contains 795 training and 654 testing samples. The training set is used to learn the network parameters as explained in Section III-B and [footnote 3: Available at http://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html] [table residue, accuracy (%): Couprie et al. [29] 64.5 / 63.5; Khan et al. [30] 69.2 / 65.6; Stückler et al. [31] 70.9 / 67.0; Müller and Behnke [32] 72.3 / 71.9; Wolf et al. [33] 72.6 / 74.1; Eigen and Fergus [17] 79.1 / 80.6; Husain et al. [4] …] …”
Section: A. Semantic Segmentation (mentioning)
confidence: 99%
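The 795/654 split mentioned in the quote is the standard labeled train/test split of the NYU Depth v2 dataset. As a minimal, purely illustrative sketch (not code from either cited work), the split could be loaded in Python roughly as follows, assuming the nyu_depth_v2_labeled.mat and splits.mat files from the page linked above; the key names follow the official distribution but should be verified against the downloaded files.

# Illustrative sketch: load the NYU Depth v2 labeled frames and the official
# 795/654 train/test split. Assumes nyu_depth_v2_labeled.mat (MATLAB v7.3 /
# HDF5) and splits.mat are present in the working directory.
import h5py
import numpy as np
from scipy.io import loadmat

def load_nyu_v2(labeled_mat="nyu_depth_v2_labeled.mat", splits_mat="splits.mat"):
    # When read through h5py, the frame index is the first array axis.
    with h5py.File(labeled_mat, "r") as f:
        images = np.array(f["images"])  # RGB images for the 1449 labeled frames
        depths = np.array(f["depths"])  # per-pixel depth maps (meters)
        labels = np.array(f["labels"])  # per-pixel semantic label ids

    splits = loadmat(splits_mat)
    # MATLAB indices are 1-based; shift to 0-based for NumPy indexing.
    train_idx = splits["trainNdxs"].ravel() - 1  # 795 training frames
    test_idx = splits["testNdxs"].ravel() - 1    # 654 testing frames

    train = (images[train_idx], depths[train_idx], labels[train_idx])
    test = (images[test_idx], depths[test_idx], labels[test_idx])
    return train, test

if __name__ == "__main__":
    (tr_img, _, _), (te_img, _, _) = load_nyu_v2()
    print(len(tr_img), "training frames,", len(te_img), "testing frames")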