2020
DOI: 10.1109/tit.2019.2962681
SqueezeFit: Label-Aware Dimensionality Reduction by Semidefinite Programming

Abstract: Given labeled points in a high-dimensional vector space, we seek a low-dimensional subspace such that projecting onto this subspace maintains some prescribed distance between points of differing labels. Intended applications include compressive classification. Taking inspiration from large margin nearest neighbor classification, this paper introduces a semidefinite relaxation of this problem. Unlike its predecessors, this relaxation is amenable to theoretical analysis, allowing us to provably recover a planted…

Cited by 7 publications (13 citation statements)
References 36 publications
“…The drop in performance might reflect the fact that MCA is oblivious to the data labels, suggesting a label-aware alternative (cf. PCA vs. SqueezeFit [15]). The performance drop might also reflect our choice of affine linear maps and Euclidean distances, suggesting alternatives involving non-linear maps and other distances.…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
“…Setup. We model the marker selection problem as a label-aware dimension reduction method inspired by compressive classification and large margin nearest neighbor algorithms [McWhirter et al., 2018]. One such method, SqueezeFit, aims to find a projection onto the lowest-dimensional subspace for which samples with different labels remain farther apart than samples with the same label.…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
“…This parameter reflects a fundamental tension in compressive classification: ∆ should be large enough to ensure sufficient separation between samples with different labels in the low-dimensional space, while simultaneously the projection Π should have low rank so that it effectively reduces the dimension of the samples. To address the intractability of the optimization in (1), a convex relaxation technique is used [McWhirter et al., 2018]:…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
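The quote ends at a colon where the relaxed program evidently appeared but did not survive extraction. The following LaTeX is a hedged reconstruction in the quote's notation (Π the projection, ∆ the separation parameter), consistent with the SqueezeFit abstract: the orthogonal projection Π is replaced by a matrix M with 0 ⪯ M ⪯ I, and rank by trace.

```latex
% Original (intractable) program over orthogonal projections \Pi:
\min_{\Pi}\ \operatorname{rank}(\Pi)
\quad\text{s.t.}\quad \|\Pi(x_i - x_j)\|_2 \ge \Delta
\quad\text{whenever } y_i \ne y_j \tag{1}

% Semidefinite relaxation: \Pi \mapsto M, \operatorname{rank} \mapsto \operatorname{tr}:
\min_{M}\ \operatorname{tr}(M)
\quad\text{s.t.}\quad (x_i - x_j)^\top M (x_i - x_j) \ge \Delta^2
\quad\text{for } y_i \ne y_j,\qquad 0 \preceq M \preceq I
```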