2013
DOI: 10.1007/978-3-642-42051-1_77

Convolutional Neural Networks Learn Compact Local Image Descriptors

Abstract: We investigate whether a deep Convolutional Neural Network can learn representations of local image patches that are usable in the important task of keypoint matching. We examine several possible loss functions for this correspondence task and show empirically that a newly suggested loss formulation allows a Convolutional Neural Network to find compact local image descriptors that perform comparably to state-of-the-art approaches.
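The abstract does not reproduce the loss formulation itself, but a common way to frame such correspondence losses is as a margin-based pairwise objective on descriptor distances: matching patch pairs are pulled together, non-matching pairs are pushed beyond a margin. The NumPy sketch below only illustrates that general idea; the function name, the margin value, and the 32-dimensional toy descriptors are assumptions for illustration, not the formulation proposed in the paper.

```python
import numpy as np

def pairwise_hinge_loss(desc_a, desc_b, match, margin=1.0):
    """Illustrative margin-based pairwise loss on patch descriptors.

    desc_a, desc_b : (N, D) arrays of descriptors, one per patch in each pair.
    match          : (N,) array, 1 if the two patches show the same keypoint, else 0.
    margin         : separation enforced between non-matching pairs
                     (hypothetical value, not from the paper).
    """
    d = np.linalg.norm(desc_a - desc_b, axis=1)           # Euclidean distance per pair
    pos = match * d**2                                     # pull matching pairs together
    neg = (1 - match) * np.maximum(margin - d, 0.0)**2     # push non-matching pairs apart
    return np.mean(pos + neg)

# Toy usage with random 32-dimensional descriptors for 8 patch pairs.
rng = np.random.default_rng(0)
a, b = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
labels = rng.integers(0, 2, size=8)
print(pairwise_hinge_loss(a, b, labels))
```

In practice the descriptors would be the outputs of the CNN's final layer, and the loss would be minimized over the network weights by gradient descent.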

Cited by 16 publications (16 citation statements); references 11 publications.
“…Descriptor learning using CNNs was addressed early in [11,19], but the experimental results in these works left open questions regarding several practical aspects, such as the most appropriate network architectures and application-dependent training schemes. More recently, the use of Siamese networks for descriptor learning was exploited by concurrent works on joint descriptor and metric learning [10,33,34].…”
Section: Related Work
confidence: 99%
“…Our method exceeds the best descriptor variant in (Trzcinski et al., 2015), namely FPBoost512-{64}, in terms of error rate at 95% recall on all training and test data combinations, achieving a performance improvement of nearly 7.1%. To the best of our knowledge, Osendorfer et al. (2013) published the best results so far for descriptor learning based on a Siamese CNN architecture without a classifier; it is the method in our comparison most similar to ours. Compared to this method, we achieved a performance improvement of 3.5%.…”
Section: Results and Evaluation
confidence: 62%
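"Error rate at 95% recall" in the excerpt above is the standard evaluation measure on local patch-matching benchmarks: the false positive rate at the distance threshold that recovers 95% of the ground-truth matching pairs. A minimal NumPy sketch of that computation (the helper name and toy data are illustrative, not taken from either paper):

```python
import numpy as np

def error_rate_at_95_recall(distances, is_match):
    """Error rate at 95% recall (FPR@95).

    distances : (N,) descriptor distances for labelled patch pairs (smaller = more similar).
    is_match  : (N,) boolean ground-truth labels.
    Returns the fraction of non-matching pairs accepted at the distance
    threshold that recovers 95% of the matching pairs.
    """
    distances = np.asarray(distances, dtype=float)
    is_match = np.asarray(is_match, dtype=bool)
    # Threshold at the 95th percentile of matching-pair distances.
    threshold = np.percentile(distances[is_match], 95)
    false_positives = np.sum((distances <= threshold) & ~is_match)
    return false_positives / np.sum(~is_match)

# Toy usage: matching pairs drawn with smaller distances than non-matching ones.
rng = np.random.default_rng(1)
d = np.concatenate([rng.normal(1.0, 0.3, 500), rng.normal(2.0, 0.3, 500)])
y = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])
print(error_rate_at_95_recall(d, y))
```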
“…For method TRC (Trzcinski et al., 2015), we chose their best-performing descriptor variant for our comparison, which is the floating-point version with 64 bits. In method OS (Osendorfer et al., 2013), a descriptor learning architecture based on a Siamese CNN similar to our work was used, but the authors concentrated more on the comparison of different forms of loss functions, and their model was trained by standard gradient descent. Finally, SIFT (Lowe, 2004) is used as a general baseline for descriptor matching, because it is widely acknowledged as a good hand-engineered descriptor.…”
Section: Results and Evaluation
confidence: 99%
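For context, the excerpt above describes a Siamese descriptor network without a classifier: one CNN is applied with shared weights to both patches of a pair, and matching is decided from the distance between the two embeddings alone. The PyTorch sketch below shows that structure only in outline; the layer sizes, patch size, and 64-dimensional output are placeholders, not the architecture of Osendorfer et al. (2013).

```python
import torch
import torch.nn as nn

class PatchDescriptorNet(nn.Module):
    """Toy descriptor CNN for 32x32 grayscale patches (layer sizes are placeholders)."""
    def __init__(self, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 14x14
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 5x5
        )
        self.embed = nn.Linear(64 * 5 * 5, dim)  # compact descriptor

    def forward(self, x):
        return self.embed(self.features(x).flatten(1))

net = PatchDescriptorNet()
patch_a = torch.randn(8, 1, 32, 32)
patch_b = torch.randn(8, 1, 32, 32)
# Siamese use: the *same* network (shared weights) embeds both patches,
# and matching is decided by the distance between the embeddings alone.
dist = torch.norm(net(patch_a) - net(patch_b), dim=1)
print(dist.shape)  # torch.Size([8])
```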