2019
DOI: 10.1016/j.ecosta.2019.02.001
Oracle inequalities for sign constrained generalized linear models

Abstract: High-dimensional data have recently been analyzed owing to advances in data collection technology. Although many methods have been developed for sparse recovery over the past two decades, most of them require the selection of tuning parameters, so the results they produce depend heavily on the tuning. In this paper we study the theoretical properties of sign-constrained generalized linear models with convex loss function, which is one of the sparse regression methods…

Cited by 6 publications (4 citation statements) · References 32 publications (77 reference statements)
“…To begin with, we first consider linear equality constrained model for a multiple linear model and then extend the framework to generalized linear model (GLM). Later, we show through numerical illustrations that the prior on the constrained space acts as a 'natural penalty' for the high-dimensional case (the so-called p ≥ n case) for multiple linear model and this feature is similar in spirit to the work by Koike and Tanoue (2019) within the frequentist framework which uses only non-negativity constraints to produce naturally sparse estimators.…”
Section: Bayesian Models for Constrained Parameters
confidence: 84%
“…In recent years, in the era of high-dimensional models, sign-constrained least square estimators are shown to provide sparse recovery without any regularization (e.g., see Meinshausen (2013) and Slawski and Hein (2013)) and such non-negatively constrained models have also been developed for generalized linear model (see Koike and Tanoue (2019)). This has been a remarkable development in the era of big data which allows for the incorporation of background information to obtain sparse solution instead of generic penalty functions.…”
Section: Introduction
confidence: 99%
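The "sparse recovery without any regularization" behaviour described in the statement above can be seen in a small simulation. The following is a minimal sketch using non-negative least squares on synthetic data; the design matrix, dimensions, noise level, and true coefficients are illustrative assumptions, not the setup of any of the cited papers:

```python
# Sketch: sign (non-negativity) constraints alone can produce a sparse
# estimate, with no penalty term. Synthetic Gaussian design, assumed setup.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, 1.5, 1.0]          # sparse, non-negative truth
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Solve  min_b ||X b - y||^2  subject to  b >= 0  (no penalty term).
beta_hat, _ = nnls(X, y)

support = np.sum(beta_hat > 1e-6)
print("nonzero coefficients:", support)  # typically far fewer than p
```

The sparsity arises because, at the constrained optimum, many coordinates sit exactly on the boundary b = 0 (active constraints), which is the mechanism Meinshausen (2013) and Slawski and Hein (2013) analyze for least squares and Koike and Tanoue (2019) extend to convex GLM losses.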
“…Meinshausen (2013) and Slawski and Hein (2013) show that, as a regularizer, sign constraints can provide the same convergence rate as the lasso, provided that the design matrix satisfies the positive eigenvalue condition and the signs of the coefficients are known from prior knowledge. Koike and Tanoue (2019) extend the result to general convex loss functions and nonlinear response variables, including logistic regression.…”
Section: Related Work
confidence: 86%
“…Linear inequality restrictions in linear regression models have been widely investigated to find the least squares estimator and its properties, such as the bias, the mean square error, and the efficiency over inequality restrictions (see Judge and Takayama (1966), Liew (1976), Lovell and Prescott (1970), Skarpness (1987, 1986), and Ohtani (1987)). Recently, in the era of big data, it has been proved that the incorporation of non-negativity restrictions provides sparsity without using any regularization in linear regression models (Meinshausen 2013, Slawski and Hein 2013) and also in generalized linear models (Koike and Tanoue 2019). Obviously, ignoring this type of information can impact the model estimation procedure and decrease the accuracy of predictions.…”
Section: Introduction
confidence: 99%