arXiv:1911.04706

FLAML: A Fast and Lightweight AutoML Library

Published on Nov 12, 2019
Authors: Chi Wang, Qingyun Wu, Markus Weimer, Erkang Zhu

Abstract

We study the problem of using low computational cost to automate the choices of learners and hyperparameters for an ad-hoc training dataset and error metric, by conducting trials of different configurations on the given training data. We investigate the joint impact of multiple factors on both trial cost and model error, and propose several design guidelines. Following them, we build a fast and lightweight library FLAML which optimizes for low computational resource in finding accurate models. FLAML integrates several simple but effective search strategies into an adaptive system. It significantly outperforms top-ranked AutoML libraries on a large open source AutoML benchmark under equal, or sometimes orders of magnitude smaller budget constraints.

Community

Proposes design guidelines for automating the choice of learners and hyperparameters, and builds FLAML to find accurate models under low computational resource. Key properties: no computational overhead beyond the trial cost itself; the search moves from cheap-and-inaccurate toward expensive-and-accurate models; it does not rely on meta-learning or ensembling; the approach intersects with neural architecture search.

Problem setup: from a space of L learners, each learner has a hyperparameter search space that is sampled in each trial, along with a sample size of the training data; validation uses k-fold cross-validation or holdout; each configuration takes CPU time to train and validate; the goal is to reach the lowest validation error with the least total CPU time.

Charts the relations among these variables: increasing the sample size lowers test error (and the generalization gap); sample size and hyperparameter-induced complexity determine whether a model underfits or overfits; trial cost grows with the sample size and with a subset of the hyperparameters (model complexity). The resulting guidelines: use a sample size proportional to model complexity; prefer cross-validation over holdout when the data or budget is small; give every learner a fair chance (explore enough hyperparameters, since the best learner is unknown in advance); and spend a near-optimal number of trials on each learner (only as many as needed to reach its best/lowest error).

The system has two layers: the ML layer holds the learners (XGBoost, LightGBM, i.e. gradient boosting machines, random forest, etc.), which take samples and return validation error and cost; the AutoML layer has a learner proposer, a resampling-strategy proposer, a hyperparameter-and-sample-size proposer, and a controller. Table 2 compares the search variables and search strategies of different frameworks; FLAML selects learners via the proposed estimated cost for improvement (ECI) and uses randomized direct search with an ECI-based choice of sample size for hyperparameters and sample size. The resampling-strategy proposer thresholds the ratio of data size to budget, selecting cross-validation when the ratio is low and holdout otherwise. Each learner is chosen with probability inversely proportional to its dynamically updated ECI (the fair-chance guideline); see the sketch below.

Benchmarked against other AutoML libraries (auto-sklearn, H2O AutoML, TPOT, a cloud AutoML service, HpBandSter) on the AutoML Benchmark (classification) and PMLB (regression): comparable to the saturated baselines on binary classification, and best on multi-class classification and regression. Ablations on the learner-selection strategy (round-robin) and on the data handling (full data and CV only) show FLAML reaching lower error at the same wall-clock time (averaged over 10 runs). The appendix has the default search-space details, dataset details, and further comparisons across time budgets. From Microsoft.
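A rough, illustrative sketch of the AutoML-layer control loop summarized above: the threshold constant, the ECI update rule, and every function name here are assumptions for exposition only; the paper defines ECI precisely, and this is not FLAML's actual implementation.

```python
import random

CV_THRESHOLD = 0.1  # hypothetical cutoff for the data-size-to-budget ratio


def choose_resampling(n_rows: int, budget_s: float) -> str:
    """Thresholded ratio of data size to budget: cross-validation when the
    ratio is low (small data or generous budget), holdout otherwise."""
    return "cv" if n_rows / budget_s < CV_THRESHOLD else "holdout"


def choose_learner(eci: dict) -> str:
    """Sample a learner with probability inversely proportional to its
    dynamically updated estimated cost for improvement (ECI)."""
    weights = {name: 1.0 / cost for name, cost in eci.items()}
    total = sum(weights.values())
    r, acc = random.uniform(0.0, total), 0.0
    for name, w in weights.items():
        acc += w
        if r <= acc:
            return name
    return name  # floating-point edge case: fall back to the last learner


def run_trial(learner: str, strategy: str):
    """Hypothetical stand-in for one trial: randomized direct search would
    propose the next config plus a sample size growing with model complexity,
    then train and validate. Returns (validation_error, cpu_seconds)."""
    return random.uniform(0.1, 0.5), random.uniform(0.5, 2.0)


def automl_loop(learners, n_rows, budget_s):
    strategy = choose_resampling(n_rows, budget_s)
    eci = {name: 1.0 for name in learners}  # optimistic initial estimates
    best_err, best_learner, spent = float("inf"), None, 0.0
    while spent < budget_s:
        name = choose_learner(eci)
        err, cost = run_trial(name, strategy)
        spent += cost
        if err < best_err:            # improvement: this learner stays cheap
            best_err, best_learner = err, name
            eci[name] = cost
        else:                         # no improvement: raise its ECI, shifting
            eci[name] += cost         # selection probability to other learners
    return best_learner, best_err


if __name__ == "__main__":
    print(automl_loop(["lgbm", "xgboost", "rf"], n_rows=10_000, budget_s=20.0))
```

The inverse-ECI sampling is what reconciles two of the guidelines: every learner keeps a nonzero chance of being tried (fair chance), while learners that look expensive to improve are tried less often (near-optimal trials per learner).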

Links: Website (Blog), PapersWithCode, GitHub
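The GitHub link above points to the implementation. For context, a minimal usage sketch of the released flaml package; the calls below use the public AutoML API from recent releases, and exact argument names and defaults may differ across versions (they are not spelled out on this page):

```python
# pip install flaml  (recent releases: pip install "flaml[automl]")
from flaml import AutoML
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

automl = AutoML()
automl.fit(
    X_train, y_train,
    task="classification",                      # or "regression"
    time_budget=60,                             # total search budget in seconds
    estimator_list=["lgbm", "xgboost", "rf"],   # subset of the built-in learners
)
print(automl.best_estimator, automl.best_config, automl.best_loss)
predictions = automl.predict(X_test)
```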
