2019 · Preprint
DOI: 10.48550/arXiv.1906.04133

Bayesian experimental design using regularized determinantal point processes

Abstract: In experimental design, we are given n vectors in d dimensions, and our goal is to select k ≪ n of them to perform expensive measurements, e.g., to obtain labels/responses, for a linear regression task. Many statistical criteria have been proposed for choosing the optimal design, with popular choices including A- and D-optimality. If prior knowledge is given, typically in the form of a d × d precision matrix A, then all of the criteria can be extended to incorporate that information via a Bayesian framework. In t…
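To make the criteria concrete, here is a minimal sketch (not code from the paper) that evaluates the Bayesian A- and D-optimality objectives for a candidate subset S, assuming a design matrix X with n rows in d dimensions and a d × d prior precision matrix A as described in the abstract; the function name and test values are illustrative.

```python
import numpy as np

def bayesian_criteria(X, A, S):
    """Return (A-optimality value, D-optimality value) for subset S.

    Both criteria are over the Bayesian information matrix X_S^T X_S + A;
    smaller is better for each.
    """
    XS = X[S]                          # k x d rows selected for measurement
    M = XS.T @ XS + A                  # Bayesian information matrix
    a_opt = np.trace(np.linalg.inv(M))     # A-optimality: tr((X_S^T X_S + A)^{-1})
    _, logdet = np.linalg.slogdet(M)
    d_opt = -logdet                        # D-optimality: log det((X_S^T X_S + A)^{-1})
    return a_opt, d_opt

# Illustrative usage with random data and an identity prior precision.
rng = np.random.default_rng(0)
n, d, k = 50, 5, 10
X = rng.standard_normal((n, d))
A = np.eye(d)                          # hypothetical prior precision matrix
S = rng.choice(n, size=k, replace=False)
print(bayesian_criteria(X, A, S))
```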

Cited by 3 publications (3 citation statements) · References 9 publications
“…A number of optimality criteria have been considered for selecting the best subsets in experimental design. DPP subset selection has been shown to provide useful guarantees for some of the most popular criteria (such as for A-optimality and D-optimality), leading to new approximation algorithms [65,61,24]. Stochastic optimization.…”
Section: Discussion (mentioning; confidence 99%)
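As a hedged illustration of the kind of guarantee referenced in this statement, the sketch below compares the expected A-optimality score of a size-k subset drawn with probability proportional to det(L_S) (a k-DPP with kernel L = XXᵀ, i.e., volume sampling) against a uniformly random subset, on a ground set small enough to enumerate exactly. All names and values are illustrative, not from the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 10, 3, 3                    # k = d so size-k minors of L are nonzero
X = rng.standard_normal((n, d))
L = X @ X.T                           # DPP kernel (volume sampling when k = d)

# Enumerate the exact k-DPP distribution: Pr(S) proportional to det(L_S).
subsets = list(itertools.combinations(range(n), k))
weights = np.array([np.linalg.det(L[np.ix_(list(s), list(s))]) for s in subsets])
probs = weights / weights.sum()

def a_opt(s):
    """A-optimality value tr((X_S^T X_S)^{-1}); tiny ridge as a numerical safeguard."""
    XS = X[list(s)]
    return np.trace(np.linalg.inv(XS.T @ XS + 1e-9 * np.eye(d)))

scores = np.array([a_opt(s) for s in subsets])
print("E[A-opt] under k-DPP  :", float(probs @ scores))
print("E[A-opt] under uniform:", float(scores.mean()))
```

The determinant weighting suppresses near-singular subsets, which is why the k-DPP expectation is typically far smaller than the uniform one here.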
“…The term "cardinality constrained DPP" (also known as a "k-DPP" or "volume sampling") was introduced by Kulesza & Taskar (2011) to differentiate from standard DPPs, which have random cardinality. Our proofs rely in part on converting DPP bounds to k-DPP bounds via a refinement of the concentration of measure argument used by Dereziński et al. (2019a).…”
Section: Related Work (mentioning; confidence 99%)
“…Given k and L, we define S ∼ k-DPP(L) as a distribution over all (n choose k) index subsets S ⊆ [n] of size k, such that Pr(S) ∝ det(L_S) is proportional to the determinant of the sub-matrix L_S induced by the subset. DPPs have found numerous applications in machine learning, not only for summarization [31,22,20,7] and recommendation [18,8], but also in experimental design [14,33], stochastic optimization [38,34], Gaussian Process optimization [25], low-rank approximation [17,23,16], and more (recent surveys include [28,4,11]). Note that early work on DPPs focused on a random-size variant, which we denote S ∼ DPP(L), where the subset size is allowed to take any value between 0 and n, and the role of the parameter k is replaced by the expected size E[|S|] = d_eff(L) := tr L(L + I)⁻¹.…”
Section: Introduction (mentioning; confidence 99%)
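The identity quoted above, E[|S|] = d_eff(L) = tr L(L + I)⁻¹ for the random-size DPP(L), can be checked numerically. The following brute-force sketch (tiny n, illustrative values only, not from the paper) enumerates every subset with its exact probability Pr(S) = det(L_S) / det(L + I) and compares the resulting expected size to d_eff.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, 3))
L = B @ B.T                            # PSD kernel over n items

# For a random-size DPP(L): Pr(S) = det(L_S) / det(L + I),
# where the determinant of the empty 0x0 submatrix is 1.
Z = np.linalg.det(L + np.eye(n))
expected_size = 0.0
for k in range(n + 1):
    for S in itertools.combinations(range(n), k):
        idx = list(S)
        det_S = np.linalg.det(L[np.ix_(idx, idx)]) if k > 0 else 1.0
        expected_size += k * det_S / Z

d_eff = np.trace(L @ np.linalg.inv(L + np.eye(n)))
print(expected_size, d_eff)            # the two should agree up to numerical error
```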