2017
DOI: 10.32614/RJ-2017-013

milr: Multiple-Instance Logistic Regression with Lasso Penalty

Abstract: In this work, we consider a manufacturing process that can be described by a multiple-instance logistic regression model. To compute the maximum likelihood estimates of the unknown coefficients, an expectation-maximization algorithm is proposed, and the modeling approach can be extended to identify important covariates by adding a coefficient penalty term to the likelihood function. In addition to essential technical details, we demonstrate the usefulness of the proposed method by simu…
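The model in the abstract treats a bag of instances as positive when at least one of its instances is positive. The following R sketch shows the lasso-penalized bag-level likelihood this implies; the toy data, variable names, and the direct penalized objective are illustrative assumptions, not the package's internal code.

```r
## A minimal sketch of the bag-level MILR likelihood described in the abstract.
## All names and the toy data are illustrative assumptions, not package code.
set.seed(1)
n_bag <- 30; m <- 5; p <- 4                 # bags, instances per bag, covariates
bag <- rep(seq_len(n_bag), each = m)        # bag membership of each instance
x   <- matrix(rnorm(n_bag * m * p), ncol = p)
beta_true <- c(2, -2, 0, 0)
p_inst <- as.vector(plogis(x %*% beta_true))       # instance-level P(y_ij = 1)
## A bag is positive iff at least one of its instances is positive:
## P(Y_i = 1) = 1 - prod_j (1 - p_ij)
y <- as.integer(tapply(rbinom(length(p_inst), 1, p_inst), bag, max))

## Lasso-penalized negative log-likelihood at the bag level
neg_loglik <- function(beta, lambda) {
  p_ij  <- as.vector(plogis(x %*% beta))
  p_bag <- 1 - tapply(1 - p_ij, bag, prod)         # bag-level probability
  p_bag <- pmin(pmax(p_bag, 1e-10), 1 - 1e-10)     # guard the logs
  -sum(y * log(p_bag) + (1 - y) * log(1 - p_bag)) + lambda * sum(abs(beta))
}
neg_loglik(beta_true, lambda = 0.1)
```

Assuming the CRAN milr package's documented interface, a call along the lines of milr::milr(y, x, bag, lambda = ...) would fit this model via the proposed EM algorithm with the lasso penalty; the exact signature should be checked against the package manual.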

Cited by 11 publications (4 citation statements) · References 13 publications
“…In particular, the parameter set (one bias term β₀ ∈ ℝ and β ∈ ℝ^{2MN_f}) is computed within the following quadratic optimization framework (Chen et al., 2017):…”
Section: Optimization of T-F Atoms for MIL Representation — mentioning, confidence: 99%
“…A new value of λ is then selected from a set S_λ and Algorithm 2 runs again with hot-started vectors β and v. S_λ is constructed as a grid from the initial value of λ = 0.5 up to λ_max, a value of λ that would already yield a trivial (all-zero) solution of (3) [25], [45]. The hot-starting approach allows re-using the already calculated result of the LBI in a consecutive run with a higher λ, so that a significantly smaller number of iterations may still lead to convergence.…”
Section: B. Using Multiple λ Values — mentioning, confidence: 99%
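As an illustration of the hot-starting scheme in the quote above, the R sketch below warm-starts a generic proximal-gradient (ISTA) lasso solver across an increasing λ grid. ISTA is a stand-in for the cited LBI algorithm, and the grid and step size are assumptions, not the settings of [25], [45].

```r
## Warm-started lasso path: each lambda re-uses the solution from the previous
## one, so later runs typically need far fewer iterations to converge.
soft <- function(z, t) sign(z) * pmax(abs(z) - t, 0)   # soft-thresholding prox

lasso_path <- function(X, y, lambdas, maxit = 200) {
  L    <- max(eigen(crossprod(X), only.values = TRUE)$values)  # step-size bound
  beta <- rep(0, ncol(X))                 # cold start for the first lambda only
  lapply(sort(lambdas), function(lam) {   # increasing grid, as in the quote
    for (i in seq_len(maxit))             # ISTA, warm-started from last lambda
      beta <<- soft(beta - crossprod(X, X %*% beta - y) / L, lam / L)
    beta
  })
}

set.seed(1)
X <- matrix(rnorm(200), 50, 4); y <- rnorm(50)
path <- lasso_path(X, y, lambdas = seq(0.5, 5, length.out = 10))
```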
“…For λ_max one can typically calculate upper bounds (e.g., as described in [25], [45]). However, as evaluated in this section, successful sparse estimation of trend breaks in OTDR profiles can be achieved with values close to 0, giving LB-based algorithms an additional advantage in terms of computational complexity over the Adaptive ℓ1 Filter.…”
Section: Complexity — mentioning, confidence: 99%
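For the standard lasso, the upper bound mentioned here has a closed form. The R one-liner below computes the textbook bound for the objective (1/(2n))‖y − Xβ‖² + λ‖β‖₁ with an unpenalized intercept; this generic bound may differ from the exact ones in [25], [45].

```r
## lambda_max bound: the all-zero solution is optimal once
## lambda >= max_j |x_j'(y - ybar)| / n.
lambda_max <- function(X, y) max(abs(crossprod(X, y - mean(y)))) / length(y)

set.seed(2)
X <- matrix(rnorm(200), 50, 4); y <- rnorm(50)
lambda_max(X, y)   # any larger lambda yields the trivial all-zero fit
```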
“…Furthermore, Raykar et al. [30] introduce a prior on the model parameters and develop a Bayesian MILR algorithm. Chen et al. [31] incorporate the lasso approach into the proposed MILR and provide an efficient computational algorithm for variable selection and estimation.…”
Section: Logistic Regression in the Multiple-Instance Setting — mentioning, confidence: 99%