DOI: 10.32657/10356/164300

Lean evolutionary machine learning by multitasking simpler and hard tasks

Abstract: I would also like to thank my Thesis Advisory Committee (TAC) for their valuable insights and suggestions for my research. Many thanks to Ray Lim, Chen Zhe, and other researchers whom I have met during my Ph.D. journey. They have always kept my morale high and played a significant role in my personal development. Special thanks to Ms. Lim Siew Hoon and Mr. Suvindaren Subramaniam for all the opportunities given to me in my professional career. Finally, words cannot express my gratitude to my partner, May, my cat Bo…

Cited by 1 publication (7 citation statements)
References 120 publications (177 reference statements)
“…Constructing versatile representations Reusing or transferring features across related tasks has been commonplace for more than a decade (Collobert et al., 2011; Bottou, 2011; Sharif Razavian et al., 2014) and plays a fundamental role in the appeal of foundational models (Bommasani et al., 2021a). However, once the optimization process has identified a set of features that is sufficient to achieve near-optimal performance on the training set, additional features are often discarded because they do not bring an incremental benefit to the training error, despite the fact that they may independently carry useful information (Zhang & Bottou, 2023).…”
Section: Related Work
confidence: 99%
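The point this citing work makes about reusing features across related tasks can be illustrated concretely. The sketch below is not taken from the thesis or the citing paper; it is a minimal PyTorch example with hypothetical data and layer sizes, showing a feature extractor trained on one synthetic task and then reused, frozen, by a new head on a related task.

```python
# Minimal sketch (hypothetical data and network sizes): learn features on one
# task, then reuse the frozen feature extractor on a related task.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Shared feature extractor and two task-specific heads.
backbone = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
head_a = nn.Linear(32, 2)  # head for the source task
head_b = nn.Linear(32, 2)  # head for the related target task

# Synthetic source task: the label is the sign of a linear projection.
w = torch.randn(20)
x_a = torch.randn(512, 20)
y_a = (x_a @ w > 0).long()

opt_a = torch.optim.Adam(list(backbone.parameters()) + list(head_a.parameters()), lr=1e-2)
for _ in range(200):
    loss = nn.functional.cross_entropy(head_a(backbone(x_a)), y_a)
    opt_a.zero_grad(); loss.backward(); opt_a.step()

# Transfer: freeze the backbone, train only the new head on the target task.
for p in backbone.parameters():
    p.requires_grad_(False)

x_b = torch.randn(512, 20)
y_b = (x_b @ w + 0.5 * torch.randn(512) > 0).long()  # related but noisier task

opt_b = torch.optim.Adam(head_b.parameters(), lr=1e-2)
for _ in range(200):
    loss = nn.functional.cross_entropy(head_b(backbone(x_b)), y_b)
    opt_b.zero_grad(); loss.backward(); opt_b.step()

acc = (head_b(backbone(x_b)).argmax(dim=1) == y_b).float().mean()
print(f"target-task accuracy with reused features: {acc:.2f}")
```

Only the small task-specific head is optimized in the second stage, which is the sense in which features learned on one task are "reused" on another.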
“…assumption, favoring solutions with sparse representations has well-known benefits on the generalization performance. Yet, several authors (Zhang et al., 2022; Zhang & Bottou, 2023; Chen et al., 2023) make the point that scenarios involving multiple distributions are best served by "richer representations", even richer than those constructed without sparsity regularization using the usual stochastic gradient learning procedure.…”
Section: Introduction
confidence: 99%
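To make the sparsity trade-off in the quoted passage concrete, here is a minimal sketch that is not from either paper: it fits synthetic data containing redundant features with scikit-learn's L1-regularized Lasso and with an unregularized LinearRegression, then counts the surviving coefficients. The data, dimensions, and penalty strength are assumptions chosen purely for illustration.

```python
# Minimal sketch (synthetic data, hypothetical dimensions): an L1 penalty keeps
# only the features needed to fit the training data, while the unregularized
# fit retains weight on redundant features that carry overlapping information.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.normal(size=(n, d))
X[:, 25:] = X[:, :25] + 0.01 * rng.normal(size=(n, 25))  # near-duplicate features
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=n)

sparse = Lasso(alpha=0.05).fit(X, y)
dense = LinearRegression().fit(X, y)

print("nonzero coefficients, L1-regularized:", int(np.sum(np.abs(sparse.coef_) > 1e-6)))
print("nonzero coefficients, unregularized :", int(np.sum(np.abs(dense.coef_) > 1e-6)))
```

The sparse fit discards the redundant copies because they do not reduce the training error further, which is exactly the behavior the citing authors argue against when data come from multiple distributions.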