2014 22nd International Conference on Pattern Recognition 2014
DOI: 10.1109/icpr.2014.607

Discriminative Autoencoders for Small Targets Detection

Cited by 24 publications (9 citation statements)
References 36 publications
“…In similar work by [80], discriminative autoencoders aim at learning low-dimensional discriminative representations for positive (X + ) and negative (X − ) classes of data. The discriminative autoencoders build a latent space representation under the constraint that the positive data should be better reconstructed than the negative data.…”
Section: Controlling Reconstruction For Anomaly Detection
confidence: 99%
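The constraint described in the excerpt above, positives should reconstruct better than negatives, can be sketched as a simple objective. The linear tied-weight autoencoder and the margin-based hinge term below are my assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruct(X, W):
    """Linear tied-weight autoencoder: encode with W, decode with W.T."""
    return X @ W @ W.T

def disc_ae_loss(X_pos, X_neg, W, margin=1.0):
    """Reconstruction error is minimized on positives and pushed above a
    margin on negatives, so positives reconstruct better than negatives."""
    err_pos = np.sum((X_pos - reconstruct(X_pos, W)) ** 2, axis=1)
    err_neg = np.sum((X_neg - reconstruct(X_neg, W)) ** 2, axis=1)
    return err_pos.mean() + np.maximum(0.0, margin - err_neg).mean()

X_pos = rng.normal(size=(32, 8))         # positive class samples
X_neg = rng.normal(size=(32, 8))         # negative class samples
W = rng.normal(scale=0.1, size=(8, 3))   # 8-D input, 3-D latent code
loss = disc_ae_loss(X_pos, X_neg, W)
```

Gradient descent on `W` under this loss would drive the latent space toward the discriminative low-dimensional representation the excerpt describes.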
“…Prior to the development of deep learning, sliding window detectors [8] were widely used in object detection. Sliding window methods combine hand-crafted feature representations such as HOG with classifiers such as a support vector machine (SVM) to independently classify every sub-window of an image as object or background [15, 16]. Even though these methods achieved some improvements, hand-crafted features are insufficient to separate vehicles from complex backgrounds.…”
Section: Related Work
confidence: 99%
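The window enumeration and per-window classification described in this excerpt can be sketched as follows; the window size, stride, and the toy scoring function standing in for a HOG + linear SVM pipeline are my assumptions:

```python
import numpy as np

def sliding_windows(image, win=16, stride=8):
    """Yield (row, col, patch) for every sub-window of a 2-D image."""
    H, W = image.shape
    for r in range(0, H - win + 1, stride):
        for c in range(0, W - win + 1, stride):
            yield r, c, image[r:r + win, c:c + win]

def detect(image, score_fn, threshold=0.0, win=16, stride=8):
    """Independently classify each window; keep those scoring above
    threshold. score_fn stands in for HOG features + an SVM decision."""
    return [(r, c) for r, c, patch in sliding_windows(image, win, stride)
            if score_fn(patch) > threshold]

# Toy example: a bright square on a dark image, scored by mean intensity
# in place of an SVM decision value.
img = np.zeros((64, 64))
img[20:36, 20:36] = 1.0
hits = detect(img, score_fn=lambda p: p.mean() - 0.5)
```

In a real HOG + SVM detector, `score_fn` would extract a gradient-orientation histogram from each patch and evaluate the learned linear decision function on it.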
“…There are few prior works that propose modifications to the standard AE design to improve inter-class discrimination among features. The authors of [25] introduced a discriminative AE by combining HOG (Histogram of Oriented Gradients) based feature selection with manifold learning. Their discriminative AE structure works on top of a linear SVM classifier built using HOG features, thus requiring as many discriminative AEs as there are SVM classifiers in stage 1.…”
Section: Related Work
confidence: 99%
“…Their discriminative AE structure works on top of a linear SVM classifier built using HOG features, thus requiring as many discriminative AEs as there are SVM classifiers in stage 1. In addition to being a complex design, the use of HOG in [25] requires heuristic parameter selection (such as window size) and detailed insight into the image. Our model, being fully automated and based on representation learning, requires no knowledge of image structure.…”
Section: Related Work
confidence: 99%