2012 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2012.6248022
D-Nets: Beyond patch-based image descriptors

Abstract: Despite much research on patch-based descriptors, SIFT remains the gold standard for finding correspondences across images and recent descriptors focus primarily on improving speed rather than accuracy. In this paper we propose Descriptor-Nets (D-Nets), a computationally efficient method that significantly improves the accuracy of image matching by going beyond patch-based approaches. D-Nets constructs a network in which nodes correspond to traditional sparsely or densely sampled keypoints, and where image con…

Cited by 30 publications (20 citation statements); references 21 publications.
“…We believe that A-SIFT encapsulates SIFT by definition, and therefore we do not compare with the standard SIFT. For D-Nets [33], we use the implementation provided on their website in a straightforward manner, employing the FAST keypoint detector. We measure whether each of these methods finds the correct homography, finds a shifted version of the correct one, finds a correct but different plane, or completely fails.…”
Section: Experimental Setup and Results
confidence: 99%
“…This leads to a heavy dependence on the matching and robust estimation approach, because random sampling cannot be prevented from mixing different affine transformations in a local region. D-Nets [33] take a different approach in finding correspondences. Their method generates lines between keypoints or grid points and calculates descriptors for each line.…”
Section: Correspondence Matching
confidence: 99%
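The statement above describes the core D-Nets mechanism: instead of describing a patch around one keypoint, a descriptor is computed along the line connecting a pair of keypoints. The sketch below illustrates that line-sampling idea; the sample count, interior-only sampling range, and mean/norm normalization are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np

def line_descriptor(image, p0, p1, n_samples=13):
    """Sample pixel intensities along the directed line from keypoint p0 to p1.

    A minimal sketch of the line-based sampling idea; p0 and p1 are (x, y)
    keypoint coordinates. The normalization gives some robustness to affine
    brightness changes (an assumption about the intended invariance).
    """
    # Sample at evenly spaced interior points, avoiding the endpoints
    # themselves (keypoint neighborhoods are often less stable).
    ts = np.linspace(0.1, 0.9, n_samples)
    xs = p0[0] + ts * (p1[0] - p0[0])
    ys = p0[1] + ts * (p1[1] - p0[1])
    strip = image[np.round(ys).astype(int), np.round(xs).astype(int)].astype(float)
    # Normalize: subtract the mean and scale to unit length.
    strip -= strip.mean()
    norm = np.linalg.norm(strip)
    return strip / norm if norm > 0 else strip

# Toy usage: a synthetic horizontal-gradient image and two keypoints.
img = np.tile(np.arange(100), (100, 1))
d = line_descriptor(img, (10, 20), (80, 60))
```

Because every ordered keypoint pair yields one such strip, matching can vote over many lines rather than relying on any single local patch.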
“…Although the pixel histogram descriptor is effective in food recognition, it is not necessarily suitable for finding correspondences between omnidirectional images. Using the keypoint pair, we present a new descriptor based on Descriptor-Nets (D-Nets) [15] for matching omnidirectional images. The keypoint pair is sampled from the image curve in the image plane, so the descriptor captures the nonlinear distortion of the image and the matching is robust.…”
Section: Determination of the Image
confidence: 99%
“…Our scene-signature detection block runs in a separate thread at approximately 2-5 Hz. Our main thread runs Visual Odometry at approximately 15-20 Hz to predict poses in between localisations. As the localisation updates occur at a slower rate, we perform pose-graph relaxation over a sliding window to obtain our final estimate (see Figure 10).…”
Section: Systems Work
confidence: 99%
“…However, as this is still based on low-level structure, data association remains hard under extreme appearance change. Hundelshausen et al [16] present a noteworthy descriptor that goes beyond point features and instead constructs a network of nodes and directed edges, where each edge is a descriptor in the network, referred to as a "d-token". However, because these descriptors directly sample pixel intensities, this would not be suitable for the types of extreme appearance changes we are considering.…”
Section: Introduction
confidence: 99%
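The last statement refers to each sampled edge being turned into a discrete "d-token" so that matching reduces to indexing rather than pairwise comparison. The following is a hypothetical sketch of that quantization step; the uniform per-sample binning and bit-packing scheme here are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def d_token(strip, n_bits_per_sample=2):
    """Quantize a sampled intensity strip into one integer token.

    Each sample is range-normalized to [0, 1] and quantized to
    2**n_bits_per_sample levels; the per-sample codes are then packed
    into a single integer that can serve as a hash-table key.
    """
    levels = 2 ** n_bits_per_sample
    lo, hi = strip.min(), strip.max()
    # Guard against a constant strip (no contrast along the line).
    scaled = np.zeros_like(strip) if hi == lo else (strip - lo) / (hi - lo)
    q = np.minimum((scaled * levels).astype(int), levels - 1)
    # Pack the per-sample codes, most significant sample first.
    token = 0
    for code in q:
        token = (token << n_bits_per_sample) | int(code)
    return token

# Toy usage: a 4-sample strip quantized to 2 bits per sample.
tok = d_token(np.array([0.1, 0.9, 0.5, 0.2]))  # codes [0, 3, 2, 0] -> 56
```

As the citing authors note, tokens built directly from raw pixel intensities are discriminative but fragile under extreme appearance change, since the quantized codes shift with the underlying intensities.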