Recent work has identified simple empirical scaling laws for language models, linking compute budget, dataset size, model size, and autoregressive modeling loss. The validity of these simple power laws across orders of magnitude in model scale provides compelling evidence that larger models are also more capable models. However, scaling up models under hardware and infrastructure constraints is no easy feat, and rapidly becomes a difficult and expensive engineering problem. We investigate ways to tentatively cheat scaling laws and train larger models at a lower cost. We emulate an increase in effective parameters using efficient approximations: either by doping the models with frozen random parameters, or by using fast structured transforms in place of dense linear layers. We find that the scaling relationship between test loss and compute depends only on the actual number of trainable parameters; scaling laws cannot be deceived by spurious parameters.
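As a concrete illustration of the two approximations named above, here is a minimal sketch assuming PyTorch; `DopedLinear`, `StructuredLinear`, and `fwht` are illustrative names for the general techniques, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


def fwht(x):
    """Fast Walsh-Hadamard transform along the last dim (size must be a power of 2)."""
    d = x.shape[-1]
    shape = x.shape
    x = x.reshape(-1, d)
    h = 1
    while h < d:
        # Butterfly step: combine blocks of size h pairwise.
        y = x.view(-1, d // (2 * h), 2, h)
        a, b = y[..., 0, :], y[..., 1, :]
        x = torch.stack((a + b, a - b), dim=-2).reshape(-1, d)
        h *= 2
    return (x / d ** 0.5).reshape(shape)


class DopedLinear(nn.Module):
    """Dense layer 'doped' with frozen random weights: the effective weight is the
    sum of a trainable matrix and a frozen random one, so the layer counts twice
    the parameters but only half of them ever receive gradients."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.trainable = nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5)
        # A buffer is saved with the model but never updated by the optimizer.
        self.register_buffer("frozen", torch.randn(d_out, d_in) / d_in ** 0.5)

    def forward(self, x):
        return x @ (self.trainable + self.frozen).t()


class StructuredLinear(nn.Module):
    """Square d x d map built from two trainable diagonals around a fixed Hadamard
    transform (a Fastfood-style structured transform): O(d) trainable parameters
    and O(d log d) compute stand in for a dense layer's O(d^2)."""

    def __init__(self, d):
        super().__init__()
        self.d1 = nn.Parameter(torch.randn(d))
        self.d2 = nn.Parameter(torch.randn(d))

    def forward(self, x):
        return self.d2 * fwht(self.d1 * x)
```

For scale: an `nn.Linear(512, 512)` trains 262,144 weights, whereas `StructuredLinear(512)` trains only 1,024 while still mixing all 512 dimensions, which is the kind of spurious-parameter gap the experiments probe.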
Introduction

Predictably linking model and dataset size with generalization error is a long-standing open question. Discrepancies between classical bias-variance trade-off models and modern practice have been identified [1], with phenomena such as deep double descent [2] providing glimpses of a deeper understanding. However, actionable insights for machine learning practitioners have remained elusive.

Recently, simple and general empirical scaling laws for deep learning models have been uncovered. Starting from early relationships for modern convolutional networks [3], guidelines informing optimal architectures, such as EfficientNet [4], have been derived. Importantly, these laws can extrapolate model performance across scales, motivating the training of increasingly large and capable models [5].

We focus on seminal work on large generative language models [6], establishing and using scaling laws to predict the computational requirements for a specific level of performance, as pioneered by [7]. This work showed that the performance of an autoregressive language model follows a remarkably simple relationship, linking model and dataset size, compute budget, and modeling loss across many orders of magnitude in scale (a concrete form of these laws is sketched at the end of this section). This makes it possible to answer questions central to the efficient training of extreme-scale models, such as: (1) which model size achieves the best loss given a fixed compute budget; or (2) how many samples are necessary to train a model of a given size optimally.

Unfortunately, these empirical laws also show that training models much larger than current ones comes at a prohibitive cost. State-of-the-art language models such as GPT-3 have required several thousand PF-days to train [8], and model size has plateaued since then [9, 10]. Proposed distributed
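For concreteness, the scaling laws of [6] referenced above take a simple power-law form; the exponents below are the approximate fits reported in that work, quoted here only as an illustration:

\[
L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C_{\min}) = \left(\frac{C_c^{\min}}{C_{\min}}\right)^{\alpha_C^{\min}},
\]

where \(N\) is the number of non-embedding parameters, \(D\) the dataset size in tokens, \(C_{\min}\) the compute of the most efficient training run reaching a given loss, and roughly \(\alpha_N \approx 0.076\), \(\alpha_D \approx 0.095\), \(\alpha_C^{\min} \approx 0.050\). The fitted relation \(N_{\mathrm{opt}} \propto C_{\min}^{0.73}\) then answers question (1) directly: the compute-optimal model size grows as a fixed power of the budget, with the remaining compute absorbed by data, which grows only as roughly \(C_{\min}^{0.27}\), answering question (2).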