2018
DOI: 10.1007/978-3-030-01240-3_18
Repeatability Is Not Enough: Learning Affine Regions via Discriminability

Cited by 187 publications
(186 citation statements)
References 48 publications
“…keypoint detection). AffNet [27] applied a similar strategy to learn affine-covariant regions. Polar Transformer Networks [12] were used by [11] to build scale-invariant descriptors by transforming the input patch into log-polar space.…”
Section: Related Work
confidence: 99%
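The log-polar idea mentioned in the excerpt can be illustrated with a minimal resampling sketch: in log-polar coordinates, scalings and rotations of the patch become translations along the radius and angle axes. This is only an illustrative nearest-neighbor sampler, not the learned Polar Transformer Network sampler of [12].

```python
import numpy as np

def log_polar_patch(patch, out_size=32):
    """Resample a square patch onto a log-polar grid (illustrative sketch).

    Rows index log-radius (rho), columns index angle (theta), so a
    scaling of the input shifts the output along rho and a rotation
    shifts it along theta -- the property exploited for
    scale-invariant descriptors.
    """
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    out = np.zeros((out_size, out_size), dtype=patch.dtype)
    for i in range(out_size):                  # rho axis
        r = max_r ** (i / (out_size - 1))      # log-spaced radii in [1, max_r]
        for j in range(out_size):              # theta axis
            t = 2.0 * np.pi * j / out_size
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = patch[y, x]        # nearest-neighbor sampling
    return out
```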
“…This is quite reasonable, since being able to tolerate more patch transformations unavoidably decreases the discrimination power. Concerning OriNet [25], the default patch orientation system of the deep network detailed in [40] seems to be slightly better, possibly due to the difference between the keypoint detector employed during training and that used to generate input patches. RsGLOH2 achieves the best results among the handcrafted descriptors, while RalNet Shuffle and BisGLOH2⋆ are comparable with average baseline descriptors.…”
Section: Evaluation Results
confidence: 99%
“…These include SOS-Net [35], still unpublished at contest time, the recent HardNetA [29], obtained by training HardNet [24] on AMOS [29] and other datasets, RalNet Shuffle using the RalNet architecture [38] and additionally cropping and shuffling patches at training time, and RsGLOH2, “square rooting” sGLOH2 [4] according to RootSIFT [1]. Two variants of HardNetA, exploiting the deep networks described in [25] either for custom orientation assignment or to accommodate patches before extracting the descriptor, were also submitted as OriNet+HardNetA and AffNet+HardNetA, respectively. The contest also included a variant of BisGLOH2 [4], named BisGLOH2⋆, using more rotations at matching time than the default ones.…”
Section: Local Image Descriptors Under Evaluation
confidence: 99%
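The “square rooting” recipe the excerpt attributes to RootSIFT [1] can be sketched in a few lines: L1-normalize the descriptor and take the element-wise square root, so that Euclidean distance between mapped vectors corresponds to the Hellinger kernel on the originals. This is the general recipe only; the exact normalization used for RsGLOH2 may differ, and non-negative histogram entries are assumed.

```python
import numpy as np

def square_root_descriptor(d, eps=1e-10):
    """RootSIFT-style mapping: L1-normalize, then element-wise sqrt.

    Assumes d has non-negative entries (true for histogram-based
    descriptors such as SIFT or sGLOH2). The mapped vector has unit
    L2 norm, and Euclidean distance between mapped vectors equals
    the Hellinger distance between the L1-normalized originals.
    """
    d = np.asarray(d, dtype=np.float64)
    d = d / (d.sum() + eps)          # L1 normalization
    return np.sqrt(np.maximum(d, 0))  # element-wise square root
```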
“…As Eq. 6 indicates, the performance of GMS is related to feature quality, so we experiment with recently proposed deep-learning-based features, including the HardNet (Mishchuk et al 2017) descriptor and the HessAff (Mishkin et al 2018) detector. Specifically, we use DoG (Lowe 2004) and HessAff for interest point detection, respectively.…”
Section: Pairing with Deep Features
confidence: 99%
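Pipelines like the one above combine a detector (DoG or HessAff) with a descriptor (HardNet), then establish tentative correspondences before verification such as GMS. A generic mutual-nearest-neighbor matching stage can be sketched as follows; this is a stand-in for the matching step only, not GMS grid verification itself, and the function name is illustrative.

```python
import numpy as np

def mutual_nn_matches(desc1, desc2):
    """Mutual nearest-neighbor matching of two descriptor sets.

    desc1, desc2: arrays of shape (n1, d) and (n2, d), one descriptor
    per row. Returns index pairs (i, j) where row i of desc1 and row j
    of desc2 are each other's nearest neighbor in Euclidean distance.
    """
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (np.sum(desc1 ** 2, axis=1)[:, None]
          + np.sum(desc2 ** 2, axis=1)[None, :]
          - 2.0 * desc1 @ desc2.T)
    nn12 = d2.argmin(axis=1)  # best desc2 match for each desc1 row
    nn21 = d2.argmin(axis=0)  # best desc1 match for each desc2 row
    return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]
```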