2021
DOI: 10.3390/s21051573
Experiments of Image Classification Using Dissimilarity Spaces Built with Siamese Networks

Abstract: Traditionally, classifiers are trained to predict patterns within a feature space. The image classification system presented here trains classifiers to predict patterns within a vector space by combining the dissimilarity spaces generated by a large set of Siamese Neural Networks (SNNs). A set of centroids from the patterns in the training data sets is calculated with supervised k-means clustering. The centroids are used to generate the dissimilarity space via the Siamese networks. The vector space descriptors…
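
A minimal sketch of the pipeline the abstract describes, assuming a trained Siamese distance function (the hypothetical `snn_distance` below) and using scikit-learn's KMeans for the supervised clustering step; all names are illustrative, not the paper's code:

```python
# Sketch of the dissimilarity-space pipeline: per-class k-means centroids,
# then projection of each pattern to its SNN dissimilarities to the centroids.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def class_centroids(X, y, k_per_class=5):
    """Supervised k-means: cluster each class separately and pool the centroids."""
    centroids = []
    for label in np.unique(y):
        km = KMeans(n_clusters=k_per_class, n_init=10).fit(X[y == label])
        centroids.append(km.cluster_centers_)
    return np.vstack(centroids)

def to_dissimilarity_space(X, centroids, snn_distance):
    """Map each pattern to a vector of SNN dissimilarities to all centroids."""
    return np.array([[snn_distance(x, c) for c in centroids] for x in X])

# Usage (snn_distance is a trained SNN returning a scalar dissimilarity):
# centroids = class_centroids(X_train, y_train)
# D_train = to_dissimilarity_space(X_train, centroids, snn_distance)
# clf = SVC().fit(D_train, y_train)
# y_pred = clf.predict(to_dissimilarity_space(X_test, centroids, snn_distance))
```

The final classifier here is an SVM, following the testing-stage description quoted from the citing papers below.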

Cited by 10 publications (38 citation statements); references 51 publications.
“…The basic system can be described as follows. The inputs into the system, as in [12][13][14], are the original images and HASC descriptors [19], which are extracted to produce a new processed image. If the original image is in color, HASC is applied separately to each band; if it is grey level, the HASC image is replicated three times to build an image with three bands.…”
Section: Proposed Approach (mentioning, confidence: 99%)
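
A sketch of the color/grayscale band handling described in that statement. The HASC descriptor itself is not reimplemented; `hasc_descriptor` is a hypothetical per-band function standing in for the extractor of [19]:

```python
# Apply HASC per band for color input; replicate the single HASC image
# three times for grayscale input, yielding a three-band image either way.
import numpy as np

def hasc_image(image, hasc_descriptor):
    if image.ndim == 3 and image.shape[2] == 3:            # color: H x W x 3
        bands = [hasc_descriptor(image[:, :, b]) for b in range(3)]
    else:                                                  # grayscale: H x W
        bands = [hasc_descriptor(image)] * 3
    return np.stack(bands, axis=-1)                        # three-band output
```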
“…In the testing stage, an unknown pattern is projected onto the dissimilarity space learned by the SNN, which generates the feature vector that is then fed into the trained SVM for a decision. The SNN, as illustrated in Figure 2, combines two identical deep learners whose outputs are subtracted; the result (the absolute value of the difference) is passed to a sigmoid and a loss function, as in [12][13][14]. Unlike [12-14], which used binary cross entropy, two different loss functions are tested here (binary cross entropy and triplet loss), and the CNN subnets are optimized with Adam and several Adam variants.…”
Section: Proposed Approach (mentioning, confidence: 99%)
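
A minimal PyTorch sketch of the Siamese head that statement describes: two weight-shared subnets, the absolute difference of their embeddings, then a sigmoid trained with binary cross entropy. The backbone is a toy placeholder, not one of the paper's CNN subnets:

```python
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    def __init__(self, backbone, embed_dim=128):
        super().__init__()
        self.backbone = backbone             # shared weights: applied to both inputs
        self.head = nn.Linear(embed_dim, 1)  # maps |f(a) - f(b)| to a logit

    def forward(self, a, b):
        diff = torch.abs(self.backbone(a) - self.backbone(b))
        return self.head(diff)               # sigmoid is folded into the loss below

# backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128))  # toy stand-in
# model = SiameseNet(backbone)
# loss_fn = nn.BCEWithLogitsLoss()           # sigmoid + binary cross entropy
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# The triplet-loss alternative tested in the paper would instead train the
# backbone embeddings directly, e.g. with torch.nn.TripletMarginLoss.
```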