2018
DOI: 10.1007/s10514-018-9734-5
Street-view change detection with deconvolutional networks

Cited by 255 publications (219 citation statements)
References 50 publications
“…If training data are available, supervised networks can be used to learn change detection features directly from the training data [14][15][16]. However, none of them use pretrained weights to initialize their networks, even though it has a proven value for classification and object detection tasks, especially when only a limited amount of annotated data are available [17].…”
Section: Related Work
confidence: 99%
“…Change detection has also been broadly studied for outdoor environments [13], [14], [15]. Structural change detection from street view images is performed in [13]. Multisensor fusion SLAM, deep deconvolution networks and fast 3D reconstruction are used to determine the changing regions between pairs of images.…”
Section: Related Work
confidence: 99%
“…Unless otherwise stated, we train all the networks using the Adam stochastic gradient descent method with a weight-decay factor of 5 × 10⁻⁵ and parameters β₁ = 0.9, β₂ = 0.999, and we use DaConv blocks with d = 3 dilation factors, namely d_j = 2^(j−1) for j ∈ {1, 2, 3}. In the following sections we use the notation f_i : ℤ² → ℝ to indicate the output of the i-th channel of any network under consideration. All our networks are implemented using the Caffe framework and trained on a single Nvidia K40 GPU.…”
Section: Depth-Aware CNN Architectures for Robotic Perception Tasks
confidence: 99%
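The dilation schedule quoted above, d_j = 2^(j−1) for j ∈ {1, 2, 3}, can be sketched in plain Python. This is an illustration only, not code from the cited paper: the internals of its DaConv blocks are not specified in the excerpt, and the receptive-field formula below assumes standard stride-1 dilated 3×3 convolutions.

```python
# Sketch of the exponential dilation schedule d_j = 2**(j-1) described in
# the excerpt. Assumption: plain stacked dilated 3x3 convolutions; the
# cited paper's actual DaConv block may differ.

def dilation_factors(d=3):
    """Return the dilation factors d_j = 2**(j-1) for j = 1..d."""
    return [2 ** (j - 1) for j in range(1, d + 1)]

def receptive_field(dilations, kernel=3):
    """Receptive field of stacked stride-1 dilated convolutions.

    Each layer widens the field by (kernel - 1) * dilation pixels.
    """
    rf = 1
    for dil in dilations:
        rf += (kernel - 1) * dil
    return rf

print(dilation_factors())          # [1, 2, 4]
print(receptive_field([1, 2, 4]))  # 15
```

With d = 3 the three layers see dilations 1, 2 and 4, so a stack of 3×3 kernels covers a 15-pixel-wide context while keeping the parameter count of three ordinary 3×3 convolutions, which is the usual motivation for an exponential dilation schedule.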
“…Deep models have been applied to a number of robotics tasks involving RGB inputs, e.g. monocular depth prediction [1], 3D scene layout understanding [2], change detection in large 3D maps [3] and camera relocalization [4], [5], [6].…”
Section: Introduction
confidence: 99%