2018
DOI: 10.1080/03610926.2018.1489056
Bayesian generalized fused lasso modeling via NEG distribution

Abstract: The fused lasso penalizes a loss function by the L1 norm of both the regression coefficients and their successive differences, encouraging sparsity in both. In this paper, we propose Bayesian generalized fused lasso modeling based on a normal-exponential-gamma (NEG) prior distribution. The NEG prior is placed on the differences of successive regression coefficients. The proposed method enables us to construct a more versatile sparse model than the ordinary fused lasso by using a flexible regularization t…


Cited by 16 publications (25 citation statements); references 20 publications.
“…Hence, it does not encourage the posterior of θ_i to be grouped with either θ_{i−1} or θ_{i+1}. It is worth mentioning that the NEG-fusion prior [40] also shows a similar pattern in the plot of its conditional prior density function. A critical difference between the t prior and the NEG prior, however, is that the density of the NEG prior is non-differentiable at 0.…”
Section: Bayesian Modeling
confidence: 67%
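The NEG prior discussed in this excerpt is a scale mixture of normals. A minimal sketch of sampling from the hierarchy, assuming the common normal-exponential-gamma layering (the shape/scale conventions and parameter names here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def sample_neg(n, lam, gamma2, rng):
    """Draw n samples from a normal-exponential-gamma (NEG) scale mixture.

    Assumed hierarchy (conventions are illustrative):
      psi    ~ Gamma(shape=lam, scale=gamma2)
      sigma2 | psi ~ Exponential(rate=psi)
      theta  | sigma2 ~ N(0, sigma2)
    """
    psi = rng.gamma(shape=lam, scale=gamma2, size=n)
    sigma2 = rng.exponential(scale=1.0 / psi)  # rate psi == scale 1/psi
    return rng.normal(0.0, np.sqrt(sigma2))

rng = np.random.default_rng(1)
draws = sample_neg(100_000, lam=2.0, gamma2=1.0, rng=rng)
# The mixture is symmetric about zero with a sharp spike at the origin
# and heavier-than-normal tails, which is what makes it sparsity-inducing.
assert abs(np.mean(draws)) < 0.05
```

Marginalizing the two latent scales yields the NEG density whose non-differentiability at 0 is noted in the quoted passage.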
“…[29,45]. The penalty term $\lambda \sum_{i=2}^{n} |\theta_i - \theta_{i-1}|$ can be interpreted as the negative logarithm of the prior density used for Bayesian inference; therefore, a natural Bayesian extension of (2) is Laplace (double-exponential) prior modeling [24,40]. To account for the unknown variance parameter σ² and θ_1, a convenient prior specification could be…”
Section: Bayesian Modeling
confidence: 99%
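The penalty-as-negative-log-prior correspondence described in the excerpt above can be checked numerically. A small sketch assuming i.i.d. Laplace priors on the successive differences (function names are illustrative):

```python
import numpy as np

def fused_penalty(theta, lam):
    # lam * sum_{i=2}^n |theta_i - theta_{i-1}|
    return lam * np.sum(np.abs(np.diff(theta)))

def log_laplace_prior(theta, lam):
    # log of prod_{i=2}^n (lam/2) * exp(-lam * |theta_i - theta_{i-1}|)
    d = np.diff(theta)
    return np.sum(np.log(lam / 2.0) - lam * np.abs(d))

theta = np.array([0.0, 0.0, 1.5, 1.5, 2.0])
lam = 0.7
n_diffs = len(theta) - 1
# The negative log prior equals the fused-lasso penalty plus a constant
# that does not depend on theta, so the penalized estimate is exactly a
# posterior mode under this prior.
const = -n_diffs * np.log(lam / 2.0)
assert np.isclose(-log_laplace_prior(theta, lam),
                  fused_penalty(theta, lam) + const)
```

The NEG-fusion prior of the paper replaces the Laplace density on the differences with the heavier-tailed NEG density, keeping this same penalty-to-prior correspondence.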
“…Specifically, we specify the (i, j)th element of $\Sigma_{\theta_m}^{-1}$ as follows:

$$[\Sigma_{\theta_m}^{-1}]_{ij} = \begin{cases} \dfrac{1}{\tau_{mi}^2} + k_i \dfrac{1}{\omega_m^2}, & i = j,\\[4pt] -\dfrac{1}{\omega_m^2}, & i \neq j,\ \text{genes } i \text{ and } j \text{ are positively associated},\\[4pt] \dfrac{1}{\omega_m^2}, & i \neq j,\ \text{genes } i \text{ and } j \text{ are negatively associated},\\[4pt] 0, & i \neq j,\ \text{genes } i \text{ and } j \text{ are not directly associated}, \end{cases}$$

where $k_i$ is the number of nonzero elements in the $i$th row or column of $\Sigma_{\theta_m}^{-1}$, i.e., $k_i - 1$ is the number of genes that interact with gene $i$ within the pathway. Note that $\Sigma_{\theta_m}^{-1}$ gives the general form of the Bayesian generalized fused lasso formulation by allowing negative and no associations between two off-diagonal elements in addition to positive associations. Thus, we model the off-diagonal elements of $\Sigma_{\theta_m}^{-1}$ to introduce correlations between interacting genes given prior biological information.…”
Section: Methods
confidence: 99%
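A small sketch of the precision-matrix construction described above, assuming the usual Gaussian Markov random field sign convention (a negative off-diagonal entry for positively associated pairs; the off-diagonal signs were lost in the extracted formula, so this convention is an assumption):

```python
import numpy as np

def build_precision(tau2, omega2, signed_adj):
    """Sketch of [Sigma_theta_m^{-1}]_{ij} from prior pathway structure.

    tau2       : length-p array of gene-specific variances tau_{mi}^2
    omega2     : scalar smoothing variance omega_m^2
    signed_adj : p x p symmetric matrix with +1 for positively associated
                 pairs, -1 for negatively associated pairs, 0 otherwise
    """
    p = len(tau2)
    # Assumed sign convention: positive association -> negative off-diagonal
    # precision entry (which induces positive conditional correlation).
    prec = -signed_adj / omega2
    for i in range(p):
        # k_i counts nonzero entries in row i including the diagonal,
        # so k_i - 1 genes interact with gene i.
        k_i = np.count_nonzero(signed_adj[i]) + 1
        prec[i, i] = 1.0 / tau2[i] + k_i / omega2
    return prec

adj = np.array([[0, 1, 0],
                [1, 0, -1],
                [0, -1, 0]])
P = build_precision(np.array([1.0, 1.0, 1.0]), 2.0, adj)
# Diagonal dominance keeps the precision matrix positive definite.
assert np.all(np.linalg.eigvalsh(P) > 0)
```

With this convention, a pair penalized toward equality contributes the cross term of $(\theta_i - \theta_j)^2/\omega_m^2$, while a negatively associated pair contributes that of $(\theta_i + \theta_j)^2/\omega_m^2$.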
“…where $k_i$ is the number of nonzero elements in the $i$th row or column of $\Sigma_{\theta_m}^{-1}$, i.e., $k_i - 1$ is the number of genes that interact with gene $i$ within the pathway. Note that $\Sigma_{\theta_m}^{-1}$ gives the general form of the Bayesian generalized fused lasso formulation [20] by allowing negative and no associations between two off-diagonal elements in addition to positive associations. Thus, we model the off-diagonal elements of $\Sigma_{\theta_m}^{-1}$ to introduce correlations between interacting genes given prior biological information.…”
Section: Generalized Fused Hierarchical Structured Variable Selection
confidence: 99%
“…Among the most intuitive approaches to solving a sparse linear regression model are the lasso [46] and its extensions, such as SCAD [47], the elastic net [48], the fused lasso [49], the adaptive lasso [50], and Bayesian-type lassos [51]–[53]. In the Bayesian framework, lasso estimates can be interpreted as posterior mode estimates under independent and identically distributed Laplace priors on the coefficients [46], [51].…”
Section: The Bayesian Lasso
confidence: 99%
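The lasso-as-posterior-mode interpretation in the excerpt above can be illustrated numerically. A sketch assuming unit error variance, under which the lasso objective and the negative log posterior coincide exactly (function names are illustrative):

```python
import numpy as np

def lasso_objective(beta, X, y, lam):
    # Standard lasso: half the residual sum of squares plus an L1 penalty.
    return 0.5 * np.sum((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

def neg_log_posterior(beta, X, y, lam, sigma2=1.0):
    # Gaussian likelihood plus i.i.d. Laplace priors on beta, up to
    # additive constants; with sigma2 = 1 this matches lasso_objective.
    loglik = -0.5 * np.sum((y - X @ beta) ** 2) / sigma2
    logprior = -(lam / sigma2) * np.sum(np.abs(beta))
    return -(loglik + logprior)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
beta_true = np.array([1.0, 0.0, -2.0])
y = X @ beta_true + rng.normal(size=20)
beta = np.array([0.5, 0.1, -1.5])
# The two objectives agree at every beta, so they share the same
# minimizer: the lasso estimate is the posterior mode.
assert np.isclose(lasso_objective(beta, X, y, 0.3),
                  neg_log_posterior(beta, X, y, 0.3))
```

The fused-lasso variants cited in this excerpt follow the same pattern, with the Laplace (or NEG) prior placed on successive coefficient differences rather than on the coefficients themselves.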