
Train openllama-7b with in-context learning

A reproduction of OpenLLaMA trained on 128 H100 GPUs in bfloat16.

The pretraining data consists of Falcon, StarCoder, and the Wikipedia, arXiv, books, and Stack Exchange subsets of RedPajama, totaling nearly 1 trillion tokens.

The model was trained for a single epoch with 2,000 warm-up steps and a cosine learning-rate schedule starting at 3e-5, using a batch size of 4M tokens.
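As a rough illustration, that schedule can be written as a simple function. The 2,000 warm-up steps and the 3e-5 rate come from the card (read here as the peak reached after warm-up); the total step count (about 250,000 steps, i.e. roughly 1T tokens at 4M tokens per batch) and the minimum learning rate are assumptions.

```python
import math

def learning_rate(step: int,
                  max_steps: int = 250_000,   # assumption: ~1T tokens / 4M tokens per batch
                  warmup_steps: int = 2_000,  # from the card
                  peak_lr: float = 3e-5,      # from the card
                  min_lr: float = 0.0) -> float:
    """Linear warm-up followed by cosine decay; illustrative sketch only."""
    if step < warmup_steps:
        # Linear warm-up from 0 to the peak learning rate.
        return peak_lr * step / warmup_steps
    # Cosine decay from peak_lr down to min_lr over the remaining steps.
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * min(progress, 1.0)))
```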

The sole distinction from the OpenLLaMA 7B Base lies in the organization of Falcon documents, which follows the methodology outlined in this arXiv paper.
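The card does not reproduce that method, but assuming it groups semantically related Falcon documents into the same training context (in the spirit of in-context pretraining), a minimal sketch of similarity-based document ordering could look like the following. The greedy nearest-neighbour chaining and the precomputed document embeddings are assumptions, not the exact procedure from the referenced paper.

```python
import numpy as np

def order_documents(embeddings: np.ndarray) -> list[int]:
    """Greedily chain documents so each is followed by its most similar unused
    neighbour, so related documents end up packed into the same training context.
    Illustrative sketch only."""
    # Normalize so dot products are cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    order = [0]
    remaining = set(range(1, len(embeddings)))
    while remaining:
        candidates = list(remaining)
        sims = normed[candidates] @ normed[order[-1]]
        best = candidates[int(np.argmax(sims))]
        order.append(best)
        remaining.remove(best)
    return order
```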

