2022
DOI: 10.1609/aaai.v36i7.20704
Chunk Dynamic Updating for Group Lasso with ODEs

Abstract: Group Lasso is an important sparse regression method in machine learning that encourages the selection of key explanatory factors in a grouped manner through its use of the L2,1 norm. In real-world learning tasks, chunks of data are added to or removed from the training set in sequence as new data arrive or historical data become obsolete, a setting commonly called the dynamic or lifelong learning scenario. However, most existing group Lasso algorithms are limited to offline updating, and only one is …
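Not part of the source abstract — a minimal sketch, assuming NumPy, of why the L2,1 norm produces grouped sparsity: its proximal operator is block soft-thresholding, which shrinks each group's coefficients jointly and zeros out whole groups at once. The function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def l21_norm(w, groups):
    """L2,1 penalty: sum over groups of the Euclidean norm of each group's coefficients."""
    return sum(np.linalg.norm(w[g]) for g in groups)

def prox_group_lasso(w, groups, lam):
    """Block soft-thresholding, the proximal operator of lam * L2,1.

    Each group is shrunk toward zero by lam in Euclidean norm; groups whose
    norm falls below lam are zeroed entirely, giving grouped sparsity.
    """
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * w[g]
    return out

w = np.array([3.0, 4.0, 0.1, 0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
print(prox_group_lasso(w, groups, 1.0))  # the weak second group is zeroed as a whole
```

A proximal-gradient solver for group Lasso simply alternates a gradient step on the smooth loss with this operator; the paper's contribution concerns updating such solutions dynamically as data chunks arrive or leave, rather than re-solving offline.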

Cited by 1 publication (1 citation statement), published 2023. References 21 publications (23 reference statements).
“…Comparison with Homotopy Method: While there are likely some mathematical similarities, our DFLS differs from the family of homotopy algorithms (Watson 1986; Nocedal and Wright 1999; Watson 2001). As mentioned previously, the work of Garrigues and Ghaoui (2008) (for Lasso) and of Hofleitner et al. (2013, 2014) and Li and Gu (2022) (for generalized Lasso) also incorporates instrumental variable(s) to realize online learning; however, those authors achieve this via a series of linear systems rather than by specifying an ODE-based structure as in our approach. More to the point, they all focus on sample-wise online learning and are limited to the square loss.…”
Section: Related Work
confidence: 98%