2013
DOI: 10.1137/120867287

Faster Subset Selection for Matrices and Applications

Abstract. We study the following problem of subset selection for matrices: given a matrix X ∈ R^{n×m} (m > n) and a sampling parameter k (n ≤ k ≤ m), select a subset of k columns from X such that the pseudo-inverse of the sampled matrix has the smallest possible norm. In this work, we focus on the Frobenius and the spectral matrix norms. We describe several novel (deterministic and randomized) approximation algorithms for this problem with approximation bounds that are optimal up to constant factors. Additiona…
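To make the objective concrete, here is a minimal NumPy sketch (illustrative only, not code from the paper): it evaluates ‖(X_S)^+‖ in either norm for a chosen column subset S, and finds the best subset by brute force on a tiny instance. The function names (`pinv_norm`, `brute_force_subset`) are hypothetical; the paper's algorithms avoid this exhaustive search.

```python
import itertools
import numpy as np

def pinv_norm(X_S, ord="fro"):
    """Norm of the pseudo-inverse of the sampled columns X_S (n x k, k >= n).

    ord="fro" gives the Frobenius norm, ord=2 the spectral norm.  Both can be
    read off the nonzero singular values of X_S, since the singular values of
    the pseudo-inverse are their reciprocals.
    """
    sigma = np.linalg.svd(X_S, compute_uv=False)
    sigma = sigma[sigma > 1e-12]          # keep nonzero singular values
    if ord == "fro":
        return np.sqrt(np.sum(1.0 / sigma**2))
    return 1.0 / sigma.min()              # spectral norm of the pseudo-inverse

def brute_force_subset(X, k, ord="fro"):
    """Exhaustively pick the k columns minimizing ||pinv(X_S)|| (tiny m only)."""
    n, m = X.shape
    best = None
    for S in itertools.combinations(range(m), k):
        val = pinv_norm(X[:, list(S)], ord=ord)
        if best is None or val < best[1]:
            best = (S, val)
    return best

# Tiny example: 3 x 8 matrix, choose k = 4 columns.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 8))
print(brute_force_subset(X, k=4, ord="fro"))
```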


Cited by 77 publications (129 citation statements)
References 51 publications
“…We restrict our attention to approximation algorithms for these problems and refer the reader to Pukelsheim (2006) for a broad survey on experimental design. Avron and Boutsidis (2013) studied the A- and E-optimal design problems, analyzed various combinatorial algorithms and algorithms based on volume sampling, and achieved an approximation ratio of (n − d + 1)/(k − d + 1). Wang et al. (2016) found connections between optimal design and matrix sparsification, and used these connections to obtain a (1 + ε)-approximation when k ≥ d²/ε, and also approximation algorithms under certain technical assumptions.…”
Section: Related Work
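For reference, the A- and E-optimality criteria these ratios refer to can be written, for selected design vectors forming the columns of X_S ∈ R^{d×k}, as tr((X_S X_Sᵀ)⁻¹) and ‖(X_S X_Sᵀ)⁻¹‖₂; these equal the squared Frobenius and squared spectral norms of the pseudo-inverse of X_S, which is how they connect to the subset selection problem above. A short illustrative sketch (not from either paper; function names are hypothetical):

```python
import numpy as np

def a_optimality(X_S):
    """A-optimality value tr((X_S X_S^T)^{-1}) of a selected d x k design X_S.

    Equals the squared Frobenius norm of pinv(X_S) when X_S has full row rank.
    """
    M = X_S @ X_S.T
    return np.trace(np.linalg.inv(M))

def e_optimality(X_S):
    """E-optimality value ||(X_S X_S^T)^{-1}||_2, i.e. 1 / lambda_min(X_S X_S^T).

    Equals the squared spectral norm of pinv(X_S) when X_S has full row rank.
    """
    M = X_S @ X_S.T
    return 1.0 / np.linalg.eigvalsh(M)[0]   # eigvalsh returns ascending eigenvalues

# Example: d = 3 design dimensions, k = 5 selected experiment vectors (columns).
rng = np.random.default_rng(1)
X_S = rng.standard_normal((3, 5))
print(a_optimality(X_S), e_optimality(X_S))
```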
“…[1] The ratios are tight, with a matching integrality gap for the convex relaxation (1)–(3). [The techniques relate to graph sparsification] (Spielman and Srivastava (2011)) and column subset selection (Avron and Boutsidis (2013)) in numerical linear algebra. In this work, we consider the optimization problem of choosing the representative subset that aims to optimize the A-optimality criterion in experimental design.…”
Section: Introduction
“…Finally, recent work [2] describes a deterministic T_{V_k} + O(nk(n − r)) time algorithm that guarantees an approximation error…”
Section: Running Times
“…[7,18,20,29] Greedy subset selection algorithms have also been proposed based on the right singular vectors of the matrix [30,31]. Note that these methods may be expensive when the rank k is not small, since they require the top-k singular vectors. An alternative solution to the CSSP is to use a coarsened graph consisting of the columns obtained by graph coarsening.…”
Section: Column Subset Selection
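As an illustration of the SVD-based greedy idea in this excerpt (a generic sketch for a dense matrix, not the exact method of the cited references), one can compute the top-k right singular vectors, the step whose cost the excerpt warns about, and then select columns by column-pivoted QR on V_kᵀ, a standard deterministic column-selection heuristic:

```python
import numpy as np
from scipy.linalg import qr
from scipy.sparse.linalg import svds

def greedy_columns_from_Vk(X, k):
    """Pick k columns of X using its top-k right singular vectors.

    Computes V_k (the potentially expensive step for large k) and then runs
    column-pivoted QR on V_k^T; the first k pivots are the chosen columns.
    """
    # Top-k right singular vectors of X; svds requires k < min(X.shape).
    _, _, Vt = svds(np.asarray(X, dtype=float), k=k)
    # Column-pivoted QR on the k x m matrix Vt greedily orders columns by
    # how much new "direction" each adds within the top-k subspace.
    _, _, piv = qr(Vt, pivoting=True, mode="economic")
    return np.sort(piv[:k])

# Example: select 4 of 50 columns from a 10 x 50 matrix.
rng = np.random.default_rng(2)
X = rng.standard_normal((10, 50))
print(greedy_columns_from_Vk(X, k=4))
```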