2020
DOI: 10.1016/j.imavis.2019.10.006

Fine-Grained Image Retrieval via Piecewise Cross Entropy loss

Cited by 34 publications (29 citation statements) · References 5 publications
“…The centralized ranking loss [2] proposed by Zheng et al. in 2018 and the decorrelated centralized loss [33] proposed in 2019 combine the loss with weakly supervised feature aggregation to achieve better retrieval accuracy. Much of the subsequent fine-grained image retrieval research has focused on deep metric learning [35,36,40]. This method trains deep convolutional neural networks with a specific loss function to produce a better embedding space.…”
Section: Fine-Grained Image Retrieval (FGIR)
confidence: 99%
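The excerpt above describes deep metric learning as training a CNN with a loss function that shapes the embedding space. As a minimal illustration of this family of losses (the standard triplet loss, not the specific losses cited in [2,33,35,36,40]), a sketch in PyTorch might look like:

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull same-class embeddings together and
    push different-class embeddings at least `margin` apart.
    Illustrative of deep metric learning in general, not the exact
    losses discussed in the citing papers."""
    d_pos = F.pairwise_distance(anchor, positive)  # anchor-positive distance
    d_neg = F.pairwise_distance(anchor, negative)  # anchor-negative distance
    return F.relu(d_pos - d_neg + margin).mean()
```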
“…Proxy-based losses include the proxy-NCA loss [45], center loss [23], decorrelated centralized loss [33], and piecewise cross-entropy loss [36]. Proxy-based losses introduce the concept of a global proxy so that all classes in the training set can be considered while reducing the amount of computation, instead of being limited to the samples in a mini-batch.…”
Section: Scientific Programming
confidence: 99%
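The piecewise cross-entropy loss [36] named here is the subject of the indexed paper, and its exact formulation is given there. As a hypothetical sketch of the general idea only (damping the loss once the true-class probability is already high, assumed here as a thresholded scaling; the `threshold` and `damping` values are illustrative), one might write:

```python
import torch
import torch.nn.functional as F

def piecewise_cross_entropy(logits, targets, threshold=0.9, damping=0.1):
    """Hypothetical sketch: identical to standard cross entropy while the
    true-class probability is below `threshold`, then scaled by `damping`
    once the model is already confident. The piecewise behaviour is an
    assumption for illustration; see [36] for the authors' definition."""
    probs = F.softmax(logits, dim=1)
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # prob. of true class
    ce = -torch.log(p_true.clamp_min(1e-12))                   # per-sample cross entropy
    scale = torch.where(p_true < threshold,
                        torch.ones_like(ce),
                        torch.full_like(ce, damping))
    return (scale * ce).mean()
```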
“…The cross-entropy loss function is often applied to classification problems, especially classification in neural networks [30], where cross entropy is frequently used as the loss function. In addition, since cross entropy involves calculating the probability of each category, it almost always appears together with the sigmoid (or softmax) function [31]. The expression of the sigmoid function is as follows:…”
Section: Input Layer, Convolutional Layer
confidence: 99%
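The sigmoid expression is truncated in the excerpt; the standard definition is sigma(x) = 1 / (1 + exp(-x)). A minimal sketch of the sigmoid and the cross entropy that typically accompanies it (standard textbook formulas, not code from the citing paper):

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross entropy over sigmoid outputs, as described in the excerpt."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```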
“…Deep neural networks (DNNs) based on fully convolutional architectures have shown great improvements over systems relying on hand-crafted features [1][2][3] on benchmark tasks. Rapid progress in DNN research in recent years has dramatically facilitated the development of computer vision, including object detection [4][5][6], image retrieval [7][8][9], scene recognition [10,11], semantic segmentation [12][13][14], and image classification and inpainting [15,16]. In particular, state-of-the-art work in object detection continues to grow, including face recognition [17][18][19], pedestrian detection [20][21][22], vehicle detection [23,24], etc.…”
Section: Introduction
confidence: 99%