leo-pekelis-gradient committed on
Commit 5861db3
1 Parent(s): 698e052

Update README.md

Files changed (1): README.md (+5, -5)
README.md CHANGED
@@ -9,7 +9,7 @@ license: llama3
  ---
  <img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/>

- # Llama-3 8B Instruct 1048k
+ # Llama-3 8B Gradient Instruct 1048k
  Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. To learn more or collaborate on a custom model, drop us a message at contact@gradient.ai.

  This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 320M total tokens, which is < 0.002% of Lamma-3's original pre-training data.
@@ -39,13 +39,13 @@ For training data, we generate long contexts by augmenting [SlimPajama](https://
  | Initialize From | LLaMA-3 7B| 65K | 262K | 524k |
  | Sequence Length 2^N | 16 | 18 | 19 | 20 |
  | RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B |
- | batch_size | 1 | 1 | 2 | 2 |
- | gradient_accumulation_steps | 32 | 16 | 1 | 1 |
+ | Batch Size | 1 | 1 | 2 | 2 |
+ | Gradient Accumulation Steps | 32 | 16 | 1 | 1 |
  | Steps | 30 | 24 | 50 | 50 |
  | Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 |
- | learning_rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
+ | Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
  | # GPUs | 8 | 32 | 512 | 512 |
- | Ring or Data parallelism | 1 | 1 | 8 | 8 |
+ | Ring parallelism | 1 | 1 | 8 | 8 |
  | GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
  | Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
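
The README paragraph in the first hunk attributes the long-context capability to raising RoPE theta rather than to any architectural change. Below is a minimal sketch of where that parameter surfaces in practice, assuming the Hugging Face `transformers` Llama implementation (which exposes the rotary base frequency as `config.rope_theta`) and an illustrative repository id that is not part of this commit:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Illustrative repository id; the actual Hub path is not stated in this commit.
model_id = "gradientai/Llama-3-8B-Instruct-1048k"

config = AutoConfig.from_pretrained(model_id)
print(config.rope_theta)               # rotary base frequency (base Llama-3 ships with 500,000)
print(config.max_position_embeddings)  # advertised context window

# The shipped config already carries the enlarged rope_theta from the training
# table above, so no override is needed; loading with the config as-is keeps
# the long-context position encoding intact.
model = AutoModelForCausalLM.from_pretrained(model_id, config=config)
```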
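
A quick arithmetic check on the renamed rows: the Total Tokens row is reproduced by sequence length × Batch Size × Gradient Accumulation Steps × Steps × the degree in the Ring parallelism row. That last multiplier is an inference from the numbers themselves; the README does not spell out the accounting.

```python
# Sanity-check the "Total Tokens" row against the other hyperparameters.
# Tuple layout: (seq-len exponent, Batch Size, Gradient Accumulation Steps, Steps, Ring parallelism)
stages = [
    (16, 1, 32, 30, 1),  # 65K stage
    (18, 1, 16, 24, 1),  # 262K stage
    (19, 2, 1, 50, 8),   # 524K stage
    (20, 2, 1, 50, 8),   # 1048K stage
]
for exp, batch, accum, steps, par in stages:
    print((2 ** exp) * batch * accum * steps * par)
# Prints 62914560, 100663296, 419430400, 838860800, matching the table.
```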