2018
DOI: 10.48550/arxiv.1810.03570
Preprint

Bootstrapped CNNs for Building Segmentation on RGB-D Aerial Imagery

Cited by 1 publication (2 citation statements)
References 0 publications

“…This process is critical for characterizing many medical conditions, where a diagnostic workup may depend on information from patient history, physical examination, organ-level medical imaging, histological analysis, and laboratory studies. Even in a single medical imaging study, physicians commonly rely on multimodal contrast to determine an optimal diagnosis, for instance, considering both T1- and T2-weighted MRI images. The state-of-the-art methods involve multi-layer fusion [9,13] or concatenation [29,33]. We propose using a DenseNet-style architecture and fusing the multimodal information in the last eight layers of the first dense block, hence harnessing the benefits of both multi-layer fusion and densely connected networks.…”
Section: Introduction (mentioning)
confidence: 99%
“…A comparison of commonly used multimodal deep learning architectures with the proposed architecture. The state-of-the-art methods involve multi-layer fusion [9,13] or concatenation [29,33]. We propose using a DenseNet-style architecture and fusing the multimodal information in the last eight layers of the first dense block, hence harnessing the benefits of both multi-layer fusion and densely connected networks.…”
(mentioning)
confidence: 99%
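
The fusion scheme described in the citation statements above, a DenseNet-style network in which the second modality is merged into only the last eight layers of the first dense block, can be illustrated with a rough sketch. This is a minimal, hypothetical example, not the citing authors' released code: the names (FusedDenseBlock, fuse_layers), the growth rate, the layer count, and the RGB/depth channel sizes are all assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One BN -> ReLU -> 3x3 conv layer producing `growth_rate` new feature maps."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.bn(x)))

class FusedDenseBlock(nn.Module):
    """Dense block that concatenates a second modality's features into the input
    of only the last `fuse_layers` layers (here 8 of 12), so early layers stay
    modality-specific while later layers see both modalities."""
    def __init__(self, in_channels, aux_channels, growth_rate=16, num_layers=12, fuse_layers=8):
        super().__init__()
        self.fuse_start = num_layers - fuse_layers  # first layer index that receives the fused input
        layers = []
        channels = in_channels
        for i in range(num_layers):
            extra = aux_channels if i >= self.fuse_start else 0
            layers.append(DenseLayer(channels + extra, growth_rate))
            channels += growth_rate
        self.layers = nn.ModuleList(layers)

    def forward(self, x, aux):
        # x: primary-modality features; aux: second-modality features at the same spatial size
        features = [x]
        for i, layer in enumerate(self.layers):
            inp = torch.cat(features, dim=1)       # standard dense connectivity
            if i >= self.fuse_start:
                inp = torch.cat([inp, aux], dim=1)  # fuse the second modality only in the last layers
            features.append(layer(inp))
        return torch.cat(features, dim=1)

# Example with hypothetical shapes: a 64-channel RGB feature map fused with a
# 32-channel depth feature map, both 32x32.
rgb = torch.randn(1, 64, 32, 32)
depth = torch.randn(1, 32, 32, 32)
out = FusedDenseBlock(in_channels=64, aux_channels=32)(rgb, depth)
print(out.shape)  # torch.Size([1, 256, 32, 32]) -> 64 + 12 * 16 channels
```

Keeping the early layers modality-specific while concatenating the auxiliary features only in the later layers is one way to combine multi-layer fusion with dense connectivity, which is the benefit the quoted statement claims for the proposed design.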