2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS)
DOI: 10.1109/focs46700.2020.00058
Sparse PCA: Algorithms, Adversarial Perturbations and Certificates

Cited by 6 publications (16 citation statements); references 17 publications.
“…In addition, we show that our techniques work with sparse PCA with adversarial perturbations studied in [dKNS20]. This model generalizes not only sparse PCA, but also other problems studied in prior works, including the sparse planted vector problem.…”
mentioning; confidence: 72%
See 1 more Smart Citation
“…In addition, we show that our techniques work with sparse PCA with adversarial perturbations studied in [dKNS20]. This model generalizes not only sparse PCA, but also other problems studied in prior works, including the sparse planted vector problem.…”
mentioning
confidence: 72%
“…[JL09] proposed a polynomial-time algorithm (called Diagonal Thresholding) that finds an estimator close to v or −v as long as λ ≳ k·√(log d / n), which is better than the top eigenvector of Y⊤Y if k ≪ √d, but is worse than the information-theoretically optimal estimator by a factor of √k. Later, many computational lower bounds of different kinds appeared: reductions from the planted clique problem [BR13a, BR13b, WBS16, GMZ17, BBH18, BB19], low-degree polynomial lower bounds [DKWB19, dKNS20], statistical query lower bounds [BBH+21], SDP and sum-of-squares lower bounds [KNV15b, MW15, PR22], and lower bounds for Markov chain Monte Carlo methods [AWZ20]. These lower bounds suggest that the algorithms described above should have optimal guarantees in the regimes k ≪ √d (Diagonal Thresholding) and k ≫ √d (the top eigenvector), so it is unlikely that there exist efficient algorithms with significantly better guarantees if k ≪ √d or k ≫ √d.…”
mentioning; confidence: 99%
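The Diagonal Thresholding algorithm this excerpt refers to is simple enough to sketch directly: pick the coordinates with the largest empirical variances, then take the top eigenvector of the covariance restricted to those coordinates. The following is a minimal illustration; the variable names and the spiked-covariance test model are assumptions of this sketch, not taken from [JL09].

```python
import numpy as np

def diagonal_thresholding(Y, k):
    """Sketch of Diagonal Thresholding for sparse PCA.

    Y: (n, d) data matrix; k: target sparsity (assumed known).
    1. Select the k coordinates with largest empirical variance
       (the diagonal of the empirical covariance).
    2. Return the top eigenvector of the empirical covariance
       restricted to those coordinates, zero-padded to dimension d.
    """
    n, d = Y.shape
    variances = (Y ** 2).mean(axis=0)            # diagonal of the empirical covariance
    support = np.argsort(variances)[-k:]         # k coordinates with largest variance
    cov = Y[:, support].T @ Y[:, support] / n    # restricted k x k empirical covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending eigenvalues
    v_hat = np.zeros(d)
    v_hat[support] = eigvecs[:, -1]              # top eigenvector of the restricted block
    return v_hat
```

In a spiked model with a strong enough signal, the selected support coincides with the support of the planted sparse vector and the returned estimator aligns with it up to sign.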
“…Existing methods to defend against adversarial attacks often involve some combination of (1) including adversarial samples during network training [7], (2) transforming input images into a lower-dimensional space before feeding them to the neural network [10], and (3) filtering adversarial samples through a custom neural network before they reach the main network [11].…”
Section: Adversarial Attacks
mentioning; confidence: 99%
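Defense (1), adversarial training, requires generating adversarial samples to fold into the training set. A common way to do this is a gradient-sign perturbation of the input; the sketch below applies it to a plain logistic model rather than a neural network to stay self-contained, and all names and the model choice are assumptions of this sketch, not from the cited works.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Gradient-sign perturbation of input x for a logistic model.

    Moves x by eps in the direction of the sign of the loss gradient
    with respect to the input, which increases the cross-entropy loss
    of the true label y. This is the kind of perturbed sample one
    would add to the training set in adversarial training.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted probability of class 1
    grad_x = (p - y) * w                     # d(cross-entropy)/dx for logistic loss
    return x + eps * np.sign(grad_x)
```

The perturbed input scores lower on the true class than the original, even though it differs by only eps per coordinate.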
“…Zhang et al propose weakly supervised, context-based techniques [18], [19] that gather context for the scene (e.g., optical flow and prior knowledge) to provide additional information to an object detector; this information could stabilize the detector and improve consistency over time-series data from videos. Other techniques project neural network features into low-dimensional representations during training [10]; this could improve consistency on images of similar objects taken from different angles, lighting, etc. … features along the object's edges.…”
Section: Improving Consistency Via Training
mentioning; confidence: 99%
“…Spectral approaches based on iterative methods such as the power method have been extensively explored [21, 3, 22, 23], including an SPCA algorithm with early stopping for the power method, based on the target sparsity [23]. Another line of work focused on using semidefinite programming (SDP) relaxations of SPCA [2, 24, 25, 26]. Despite the variety of heuristic-based SPCA approaches, very few theoretical guarantees have been provided; this is partially explained by a line of hardness-of-approximation results.…”
Section: Introduction
mentioning; confidence: 99%
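The power-method-based SPCA approaches this excerpt surveys share a common core: alternate a power step with a hard-thresholding step that enforces the target sparsity. A minimal sketch in that spirit follows (the initialization from the largest diagonal entry and all variable names are assumptions of this sketch, not taken from the cited papers):

```python
import numpy as np

def truncated_power_method(A, k, iters=100):
    """Power iteration with hard thresholding to sparsity k.

    Each step multiplies by the (symmetric) matrix A, keeps the k
    largest-magnitude coordinates, and renormalizes, so every iterate
    is k-sparse with unit norm. Initialization at the coordinate with
    the largest diagonal entry of A is a simple deterministic choice
    assumed for this sketch.
    """
    d = A.shape[0]
    x = np.zeros(d)
    x[np.argmax(np.diag(A))] = 1.0            # start at the largest-variance coordinate
    for _ in range(iters):
        y = A @ x                             # power step
        support = np.argsort(np.abs(y))[-k:]  # keep the k largest-magnitude coordinates
        x = np.zeros(d)
        x[support] = y[support]
        x /= np.linalg.norm(x)                # renormalize
    return x
```

On a matrix with a planted k-sparse top eigenvector and small noise, the iterates lock onto the planted support and converge to the planted direction up to sign.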