2020
DOI: 10.1007/978-3-030-58542-6_20

Deep Hough-Transform Line Priors

Cited by 56 publications (42 citation statements)
References 43 publications
“…The holistically-attracted wireframe parser (HAWP) [11] was built on L-CNN and introduced a novel line segment reparameterization using a holistic attraction field map that assigns each pixel to its closest line segment. Lin et al. [12] proposed the deep Hough transform line priors method, which combines line priors with deep learning by incorporating a trainable Hough transform block into a deep network and performing filtering in the Hough domain with local convolutions. For line detection on the Wireframe datasets, they used L-CNN [10] and HAWP [11] as backbones and replaced the hourglass blocks with their Hough transform blocks.…”
Section: Segmentation and Detection of Lines (mentioning)
confidence: 99%
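The statement above describes filtering the Hough histogram with local convolutions. The sketch below illustrates that idea only: in the paper the filter is a learned convolution inside the network, whereas here `hough_filter` is a hypothetical name and the kernel is a fixed array supplied by the caller, not the authors' implementation.

```python
import numpy as np

def hough_filter(hough_hist, kernel):
    """Local filtering in the Hough domain.

    Slides a small kernel over the [N_rho x N_theta] histogram
    (cross-correlation with zero padding, 'same' output size)."""
    kr, kt = kernel.shape
    pr, pt = kr // 2, kt // 2
    padded = np.pad(hough_hist, ((pr, pr), (pt, pt)))  # zero padding
    out = np.zeros_like(hough_hist, dtype=float)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # weighted sum over the local [kr x kt] neighbourhood
            out[i, j] = np.sum(padded[i:i + kr, j:j + kt] * kernel)
    return out
```

With an identity kernel (a 1 in the centre, 0 elsewhere) the histogram passes through unchanged; a smoothing kernel would pool evidence from nearby offset/angle bins.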
“…During inference, the UNet was executed patchwise, and two postprocessing methods were applied to the reassembled segmentation output [6]; we refer to these as PatchUNet-RANSAC and PatchUNet-Hough. Second, we trained the deep Hough transform line priors method [12] for our line detection task, which we abbreviate as PatchDeepHough. The method was originally developed for wireframe detection, so some modifications were necessary to make it applicable to our task.…”
Section: Comparison to the State of the Art (mentioning)
confidence: 99%
“…We encode an input image into a semantic feature representation F, which is mapped to the Hough space through a trainable Hough transform module [9]. The Hough transform HT maps a feature map F of size [H × W] to an [Nρ × Nθ] Hough histogram, where Nρ and Nθ are the numbers of discrete offsets and angles.…”
Section: Hough Transform Line Priors (mentioning)
confidence: 99%
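The mapping described above, from an [H × W] feature map to an [Nρ × Nθ] Hough histogram, can be sketched in a few lines of NumPy. This is a minimal illustration of a discrete Hough transform, not the trainable module from [9]; the function name, bin counts, and centred coordinate convention are assumptions made for the example.

```python
import numpy as np

def hough_transform(F, n_rho=64, n_theta=60):
    """Map an [H x W] feature map to an [n_rho x n_theta] Hough histogram.

    Each activation F[y, x] votes, for every discretised angle theta,
    into the bin of its offset rho = x*cos(theta) + y*sin(theta),
    with coordinates measured from the image centre."""
    H, W = F.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = 0.5 * np.hypot(H, W)          # largest possible |rho|
    ys, xs = np.mgrid[0:H, 0:W]
    ys = ys - (H - 1) / 2.0                 # centre the coordinates
    xs = xs - (W - 1) / 2.0
    hist = np.zeros((n_rho, n_theta))
    for j, theta in enumerate(thetas):
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        # quantise rho from [-rho_max, rho_max] into n_rho bins
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        np.add.at(hist[:, j], idx.ravel(), F.ravel())
    return hist
```

A feature map containing a single straight line produces a histogram whose mass concentrates in one (offset, angle) bin, which is what makes filtering in this space an effective line prior.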
“…To make effective use of additional unlabelled data, we propose a semi-supervised Hough transform-based loss that exploits geometric prior knowledge of lanes in the Hough space [8,9]. Lanes are lines; we therefore propose a semi-supervised Hough transform loss that parameterizes lines in Hough space by mapping them to individual bins, each represented by an offset and an angle.…”
Section: Introduction (mentioning)
confidence: 99%
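The parameterization in the quoted passage, a line mapped to a single (offset, angle) bin, can be sketched as below. This is an illustrative assumption, not the loss from [8,9]: the function name, the bin counts, and the fixed offset range `rho_max` are all hypothetical choices for the example.

```python
import numpy as np

def line_to_hough_bin(p0, p1, n_rho=64, n_theta=60, rho_max=100.0):
    """Map the line through points p0 and p1 to its (rho, theta) bin indices.

    theta is the angle of the line's normal in [0, pi); rho is the signed
    distance of the line from the origin along that normal."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    theta = np.arctan2(-dx, dy) % np.pi      # normal direction of the line
    rho = x0 * np.cos(theta) + y0 * np.sin(theta)
    t_idx = int(theta / np.pi * n_theta) % n_theta
    r_idx = int(round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)))
    return r_idx, t_idx
```

A loss of this kind can then compare the bins activated by predicted lanes against the expected bin pattern, which is how geometric prior knowledge enters without per-pixel labels.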