2010
DOI: 10.1093/biomet/asq033

Penalized Bregman divergence for large-dimensional regression and classification

Abstract: Regularization methods are characterized by loss functions measuring data fits and penalty terms constraining model parameters. The commonly used quadratic loss is not suitable for classification with binary responses, whereas the loglikelihood function is not readily applicable to models where the exact distribution of observations is unknown or not fully specified. We introduce the penalized Bregman divergence by replacing the negative loglikelihood in the conventional penalized likelihood with Bregman divergence …
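In symbols, the substitution described in the abstract can be sketched as follows (standard notation, not quoted from the paper): the negative loglikelihood term of a penalized likelihood,

$$ -\frac{1}{n}\sum_{i=1}^{n} \log f(Y_i \mid X_i; \beta) + \sum_{j=1}^{p_n} P_\lambda(|\beta_j|), $$

is replaced by a Bregman divergence Q between the response and the fitted mean,

$$ \frac{1}{n}\sum_{i=1}^{n} Q\{Y_i, m(X_i; \beta)\} + \sum_{j=1}^{p_n} P_\lambda(|\beta_j|), $$

so that the same penalization machinery applies even when only the mean structure, and not the full distribution, of the observations is specified.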

Cited by 30 publications (26 citation statements) · References 30 publications · Citing publications span 2010–2023
“…However, they are particularly interested in quantile regression and Gaussian graphical modeling, respectively. Recently, to generalize the conventional penalized likelihood, Zhang et al (2010) introduced the penalized Bregman divergence. It uses the concept of Bregman divergence, which unifies nearly all of the commonly used loss functions in regression analysis and classification (Zhang et al, 2009).…”
Section: Introduction
confidence: 99%
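For context, a minimal statement of the divergence being cited (the standard generating-function form used in this literature, with q a concave generating function):

$$ Q_q(y, \mu) = -q(y) + q(\mu) + (y - \mu)\, q'(\mu). $$

Taking q(μ) = −μ² recovers the quadratic loss (y − μ)², while q(μ) = −2{μ log μ + (1 − μ) log(1 − μ)} recovers the Bernoulli deviance used for binary classification, which is the sense in which Bregman divergence unifies common regression and classification losses.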
“…For instance, an important application of Bregman divergence is the quasi-likelihood model (Wedderburn, 1974), which is popular when the underlying distribution of the observations is not fully specified. Zhang et al (2010) studied the statistical properties of the penalized Bregman divergence estimator in conjunction with either nonconvex or convex penalties. The dimension p_n in that work has either a smaller or nearly the same order as the sample size n, depending on the choice of penalties.…”
Section: Introduction
confidence: 99%
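As a concrete illustration of penalized-BD fitting with a convex penalty in the regime where p_n is comparable to n, here is a minimal Python sketch (not the authors' implementation; all names and tuning constants are illustrative assumptions). It minimizes the Bernoulli negative loglikelihood, itself a Bregman divergence, plus an L1 penalty by proximal gradient descent.

# Minimal sketch: L1-penalized Bregman-divergence (Bernoulli deviance)
# estimation by proximal gradient descent (ISTA). Illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nll_grad(X, y, beta):
    # Gradient of the average Bernoulli negative loglikelihood
    # (half the deviance) at beta.
    mu = sigmoid(X @ beta)
    return X.T @ (mu - y) / len(y)

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def penalized_bd_lasso(X, y, lam=0.05, step=0.1, n_iter=1000):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta = soft_threshold(beta - step * nll_grad(X, y, beta), step * lam)
    return beta

# Toy example: sparse logistic model with p comparable to n,
# echoing the large-dimensional regime discussed above.
rng = np.random.default_rng(0)
n, p = 100, 80
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = rng.binomial(1, sigmoid(X @ beta_true))
beta_hat = penalized_bd_lasso(X, y)
print("estimated support:", np.flatnonzero(np.abs(beta_hat) > 1e-3))

A nonconvex penalty such as SCAD would replace the soft-thresholding step with the corresponding proximal map; the loss term stays the same.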
“…Areas for future research include (I) more efficient methods for estimating the large error covariance matrix, (II) more rigorous investigation of the sampling properties of penalized estimators under the convolution model, (III) confidence intervals for h_n, similar in spirit to those of Sara et al (2004), and hypothesis testing of h_n = 0 for detecting activation via the penalized estimators, and (IV) related multiple comparison procedures and activation maps. It is worth mentioning that Zhang, Jiang and Chai (2008) developed statistical inference tools for testing the significance of large-dimensional parameters via penalized estimators when the data are independent and identically distributed. More refined work is needed to generalize those inference methods to fMRI time series, which are serially correlated.…”
Section: Discussion
confidence: 99%
“…To quantify the error measures for different types of response variables, Zhang, Jiang and Chai (2008) considered a broad class of loss functions Q(·, ·), called Bregman divergence (BD). The penalized-BD estimator (β̂_{n,0}, β̂_n) is defined as the minimizer of the following criterion function,…”
Section: Regularized Estimation With Independent Data
confidence: 99%
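In the standard penalized-BD formulation (a sketch of the usual form, not a verbatim reconstruction of the truncated display), the criterion is

$$ \frac{1}{n}\sum_{i=1}^{n} Q\{Y_i, F^{-1}(\beta_0 + X_i^{\mathrm T}\beta)\} + \sum_{j=1}^{p_n} P_\lambda(|\beta_j|), $$

where F is the link function and P_λ(·) the penalty (for example, the L1 or SCAD penalty).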