---
license: mit
---
# Exploring Scaling Laws for Local SGD in Large Language Model Training
<br>
## Introduction
This paper investigates scaling laws for local SGD in LLM training. Local SGD is a distributed optimization algorithm that facilitates training on loosely connected devices. Through extensive experiments, we show that local SGD achieves competitive results compared to conventional methods, given equivalent model parameters, datasets, and computational resources. Furthermore, we explore the application of local SGD in various practical scenarios, including multi-cluster setups and edge computing environments. Our findings elucidate the necessary conditions for effective multi-cluster LLM training and examine the potential and limitations of leveraging edge computing resources in the LLM training process. These results demonstrate the viability of local SGD as an alternative to conventional single large-cluster training.
If you would like to learn more, we suggest you refer to our [technical report](https://arxiv.org/abs/2409.13198).
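For readers unfamiliar with the algorithm, the sketch below illustrates the core idea of local SGD: each worker runs several local optimizer steps on its own data shard and then averages model parameters across workers. This is a minimal illustration under simplifying assumptions, not our training code; it assumes `torch.distributed` has already been initialized, and the step count, loss function, and data iterator are placeholders.
```python
# Minimal local SGD sketch (illustrative only, not the training code used in the paper).
import torch
import torch.distributed as dist

def local_sgd_round(model, optimizer, data_iter, loss_fn, local_steps=8):
    # Each worker performs `local_steps` ordinary optimizer updates on its own data.
    for _ in range(local_steps):
        inputs, targets = next(data_iter)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

    # Periodic synchronization: average model parameters across all workers.
    world_size = dist.get_world_size()
    with torch.no_grad():
        for param in model.parameters():
            dist.all_reduce(param.data, op=dist.ReduceOp.SUM)
            param.data /= world_size
```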
<br>
We release all intermediate-stage model checkpoints for community research: checkpoints tagged "base" were trained with DDP, and checkpoints tagged "lsgd" were trained with local SGD.
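If the checkpoints follow the standard Transformers format, they can be loaded as sketched below; the repository id is a placeholder, not an actual model id.
```python
# Hypothetical loading example; replace REPO_ID with the actual checkpoint id
# ("base" = DDP-trained, "lsgd" = local-SGD-trained).
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "your-org/your-model-lsgd"  # placeholder repository id

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(REPO_ID)
```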
## Citation
If you find our work helpful, please consider citing it.
```bibtex
@misc{he2024exploringscalinglawslocal,
  title={Exploring Scaling Laws for Local SGD in Large Language Model Training},
  author={Qiaozhi He and Xiaomin Zhuang and Zhihua Wu},
  year={2024},
  eprint={2409.13198},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2409.13198},
}
```
<br>