The ChaLearn AutoML Challenge (NIPS 2015 - ICML 2016) consisted of six rounds of a machine learning competition of progressive difficulty, subject to limited computational resources. (Authors are listed in alphabetical order of last name, except the first author, who did most of the writing, and the second author, who produced most of the numerical analyses and plots.) It was followed by
Research progress in AutoML has led to state-of-the-art solutions that cope quite well with supervised learning tasks, e.g., classification with Auto-Sklearn. However, these systems do not yet take into account the changing nature of data that evolve over time (i.e., they still assume i.i.d. data), even though such domains are increasingly common in real applications (e.g., spam filtering, user preferences, etc.). We describe a first attempt to develop an AutoML solution for scenarios in which the data distribution changes relatively slowly over time and the problem is approached in a lifelong learning setting. We extend Auto-Sklearn with sound and intuitive mechanisms that allow it to cope with this sort of problem. The extended Auto-Sklearn is combined with concept drift detection techniques that automatically determine when the initial models must be adapted. We report experimental results on benchmark data from AutoML competitions that adhere to this scenario; the results demonstrate the effectiveness of the proposed methodology.
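The abstract does not specify which drift detection technique is combined with Auto-Sklearn. As a hedged illustration only, a minimal sketch of one widely used option, a DDM-style error-rate monitor (Gama et al., 2004), might look like the following; the class name, thresholds, and the retrain-on-drift policy in the comments are illustrative assumptions, not the authors' actual mechanism.

```python
class DDMDetector:
    """Minimal DDM-style concept drift detector: monitors the running
    error rate of a deployed model and signals drift when it rises
    well above its historical minimum."""

    def __init__(self, warn_level=2.0, drift_level=3.0, min_samples=30):
        self.warn_level = warn_level      # std-devs above minimum -> warning
        self.drift_level = drift_level    # std-devs above minimum -> drift
        self.min_samples = min_samples    # burn-in before testing
        self.reset()

    def reset(self):
        self.n = 0
        self.p = 1.0                      # running error-rate estimate
        self.s = 0.0                      # its standard deviation
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        """error: 1 if the model misclassified the instance, else 0.
        Returns 'drift', 'warning', or 'stable'."""
        self.n += 1
        # incremental estimate of the Bernoulli error rate and its std-dev
        self.p += (error - self.p) / self.n
        self.s = (self.p * (1.0 - self.p) / self.n) ** 0.5
        if self.n < self.min_samples:
            return "stable"
        if self.p + self.s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, self.s
        if self.p + self.s > self.p_min + self.drift_level * self.s_min:
            self.reset()                  # drift confirmed: trigger adaptation
            return "drift"
        if self.p + self.s > self.p_min + self.warn_level * self.s_min:
            return "warning"
        return "stable"
```

In an AutoML pipeline of the kind described, each incoming prediction's correctness would be fed to `update()`, and a `"drift"` signal would trigger re-running the model search (or refitting) on recent data.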
Meta-learning has been widely studied and implemented in many Automated Machine Learning systems to improve the process of selecting and training machine learning models for new tasks, by leveraging expertise acquired on previously observed tasks. We design a novel meta-learning challenge aimed at learning to learn from one of the most essential forms of model evaluation data: the learning curve, i.e., the sequence of model evaluations collected during training. A meta-learner is expected to apply a learned policy to the learning curves of partially trained models on the task at hand, in order to rapidly find the best solution without training all candidate models to convergence; this implies learning the exploration-exploitation trade-off. The challenge is split into two phases: a development phase and a final test phase. In each phase, a meta-learner is meta-trained and then meta-tested on validation learning curves (development phase) or test learning curves (final test phase). During meta-training, the meta-learner may learn from the provided learning curves in any way it chooses. During meta-testing, we adopt the standard reinforcement learning setting in which an agent (the meta-learner) learns by interacting with an environment that stores pre-computed learning curves. The meta-learner must pay a cost (corresponding to the actual training and testing time) to reveal learning-curve information progressively, and it is evaluated and ranked by the average area under its learning curves. This challenge was accepted as part of the official selection of WCCI 2022 competitions.
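The meta-testing protocol described above can be sketched as a toy interaction loop. Everything below is an illustrative assumption, not the challenge's actual API: the environment stores pre-computed learning curves, charges a fixed cost per revealed point, and scores the agent by the average best-so-far performance over the spent budget (a discrete stand-in for the area under the any-time learning curve).

```python
class LearningCurveEnv:
    """Toy environment mimicking the meta-testing protocol: it stores
    pre-computed learning curves (one per candidate algorithm) and
    reveals the next point of a chosen curve against a time cost.
    Curve values and costs are synthetic placeholders."""

    def __init__(self, curves, step_cost=1.0, budget=20.0):
        self.curves = curves                 # {algo: [score after step 1, 2, ...]}
        self.step_cost = step_cost
        self.budget = budget
        self.revealed = {a: 0 for a in curves}
        self.spent = 0.0
        self.best_so_far = 0.0
        self.trajectory = []                 # (time_spent, best_score) pairs

    def done(self):
        return self.spent >= self.budget

    def step(self, algo):
        """Reveal the next learning-curve point of `algo`, paying its cost."""
        i = self.revealed[algo]
        self.spent += self.step_cost
        if i < len(self.curves[algo]):
            self.revealed[algo] += 1
            self.best_so_far = max(self.best_so_far, self.curves[algo][i])
        self.trajectory.append((self.spent, self.best_so_far))
        return self.best_so_far

    def area_under_curve(self):
        """Average best-so-far score over the revealed points: a discrete
        proxy for the area under the agent's any-time learning curve."""
        if not self.trajectory:
            return 0.0
        return sum(s for _, s in self.trajectory) / len(self.trajectory)


def greedy_agent(env):
    """Deliberately simple baseline policy: probe each algorithm once,
    then always continue the one whose last revealed score is highest
    (pure exploitation, ignoring the exploration side of the trade-off)."""
    for algo in env.curves:                  # one probe per candidate
        if env.done():
            break
        env.step(algo)
    while not env.done():
        best = max(env.curves,
                   key=lambda a: env.curves[a][max(env.revealed[a] - 1, 0)])
        env.step(best)
    return env.area_under_curve()
```

A learned meta-policy would replace `greedy_agent`, trading off probing unexplored curves against extending promising ones so as to maximize the area-under-curve score within the budget.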