2019 · Preprint
DOI: 10.48550/arxiv.1905.09919
Submodular Observation Selection and Information Gathering for Quadratic Models

Abstract: We study the problem of selecting the most informative subset of a large observation set to enable accurate estimation of unknown parameters. This problem arises in a variety of settings in machine learning and signal processing, including feature selection, phase retrieval, and target localization. Since for quadratic measurement models the moment matrix of the optimal estimator is generally unknown, the majority of prior work resorts to approximation techniques such as linearization of the observation model to optimi…
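The abstract describes greedily selecting observations to make parameter estimation accurate. As an illustrative baseline only (the paper's quadratic-model criteria are more involved), here is a minimal sketch of greedy D-optimal selection for a *linearized* model, where each candidate observation contributes a rank-one term to the information matrix; the function name and the regularization constant are my own assumptions, not from the paper:

```python
import numpy as np

def greedy_d_optimal(A, k, eps=1e-6):
    """Greedily pick k rows of A (candidate observation vectors) to
    maximize log det of the information matrix sum_i a_i a_i^T.
    Illustrative D-optimality baseline, not the paper's criterion."""
    n, d = A.shape
    selected = []
    M = eps * np.eye(d)  # small regularizer keeps log det finite at the start
    remaining = set(range(n))
    for _ in range(k):
        Minv = np.linalg.inv(M)
        # Marginal gain of row i, via the matrix determinant lemma:
        # log det(M + a a^T) - log det(M) = log(1 + a^T M^{-1} a)
        gains = {i: np.log1p(A[i] @ Minv @ A[i]) for i in remaining}
        best = max(gains, key=gains.get)
        selected.append(best)
        remaining.remove(best)
        M = M + np.outer(A[best], A[best])
    return selected
```

Because the log-det objective is monotone submodular, this greedy loop inherits the classic (1 - 1/e) approximation guarantee; the quadratic-model setting studied in the paper requires the weaker-submodularity bounds discussed in the citation statements below.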

Cited by 5 publications (16 citation statements)
References 18 publications
“…Moreover, unlike us, these setups do not consider any validation constraint. Our work is also connected with batch active learning methods [42,15,29,39], that aim to select examples from training data in order to minimize the labeling cost. In contrast, our setup has access to all the labels and it aims to select data to improve efficiency.…”
Section: Related Work
confidence: 99%
“…Subset selection for linear regression problems has been widely studied in the literature [15,10]. Except for [10], these approaches optimize measures associated with the covariance matrix, rather than explicitly minimizing the training loss subject to the validation set error bound.…”
Section: α-Submodularity of F(S)
confidence: 99%
“…To solve this optimization problem, we present a greedy approach which, like an ordinary greedy submodular maximization algorithm, enjoys approximation bounds. However, since some of the optimization objectives we consider are only weakly submodular, they admit some special approximation bounds which have been proposed recently in [33].…”
Section: Proposed Approach
confidence: 99%
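The statement above refers to a greedy approach whose guarantees extend to weakly submodular objectives. A generic cardinality-constrained greedy loop of the kind described can be sketched as follows; the function names are hypothetical, and for a monotone γ-weakly submodular f this plain greedy achieves a (1 - e^{-γ})-style approximation of the sort the citation mentions:

```python
def greedy_maximize(f, ground_set, k):
    """Plain greedy maximization of a set function f under |S| <= k.
    f takes a list of elements and returns a number; each round adds
    the element with the largest marginal gain f(S + [e]) - f(S)."""
    S = []
    fS = f(S)
    for _ in range(k):
        best_e, best_gain = None, float("-inf")
        for e in ground_set:
            if e in S:
                continue
            gain = f(S + [e]) - fS
            if gain > best_gain:
                best_e, best_gain = e, gain
        S.append(best_e)
        fS += best_gain
    return S
```

For a modular f (e.g. a sum of per-element weights) this simply returns the k heaviest elements; the interesting cases are the information-theoretic objectives the paper studies, which are only weakly submodular.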
“…Finally, we conclude this section by noting that our work closely resembles SLANT [18] and is built upon the modeling framework proposed by SLANT, but the major difference is that SLANT assumes the entire event stream is endogenous, whereas our work is motivated towards exploring various techniques for systematically demarcating the externalities from the heterogeneous event stream. Similarly, our proposed algorithms are closely influenced by recent progress in the subset selection literature [33], where the authors design new alphabetical optimality criteria for quadratic models. However, our work is motivated towards investigating the subset selection problem for linear models in a temporal setting.…”
Section: Related Work
confidence: 99%