Text Generation
Transformers
PyTorch
Safetensors
English
hf_olmo
custom_code
akshitab committed
Commit
09dd55d
1 Parent(s): 830ff8f

Revert "add safetensors step140000-tokens619B"


This reverts commit 830ff8fac563a618fbe37cb74357cb5c703e5c30.

Files changed (4)
  1. README.md +273 -1
  2. model.safetensors +3 -0
  3. pytorch_model.bin +3 -0
  4. revisions.txt +558 -0
README.md CHANGED
@@ -1 +1,273 @@
1
- Trained on nvidia
1
+ ---
2
+ license: apache-2.0
3
+ datasets:
4
+ - allenai/dolma
5
+ language:
6
+ - en
7
+ ---
8
+
9
+
10
+ <img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>
11
+
12
+
13
+ # Model Card for OLMo 7B
14
+
15
+ <!-- Provide a quick summary of what the model is/does. -->
16
+
17
+ OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
18
+ The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
19
+ We release all code, checkpoints, logs (coming soon), and details involved in training these models.
20
+
21
+ ## Model Details
22
+
23
+ The core models released in this batch are the following:
24
+ | Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
25
+ |------|--------|---------|-------------|-----------------|----------------|
26
+ | [OLMo 1B](https://huggingface.co/allenai/OLMo-1B) | 3 Trillion | 16 | 2048 | 16 | 2048 |
27
+ | [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
28
+ | [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | 2 Trillion | 32 | 4096 | 32 | 2048 |
29
+
30
+ We are releasing many checkpoints for these models: one for every 1000 training steps.
31
+ The naming convention is `step1000-tokens4B`.
32
+ In particular, we focus on four revisions of the 7B models:
33
+
34
+ | Name | HF Repo | Model Revision | Tokens | Note |
35
+ |------------|---------|----------------|-------------------|------|
36
+ |OLMo 7B| [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B)|`main`| 2.5T|The base OLMo 7B model|
37
+ |OLMo 7B (not annealed)|[allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B)|step556000-tokens2460B|2.5T| learning rate not annealed to 0|
38
+ |OLMo 7B-2T|[allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B)| step452000-tokens2000B |2T| OLMo checkpoint at 2T tokens|
39
+ |OLMo-7B-Twin-2T|[allenai/OLMo-7B-Twin-2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T)|`main`|2T| Twin version on different hardware|
40
+
41
+ To load a specific model revision with HuggingFace, simply add the argument `revision`:
42
+ ```python
43
+ import hf_olmo  # pip install ai2-olmo
+ from transformers import AutoModelForCausalLM
44
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", revision="step1000-tokens4B")
45
+ ```
46
+
47
+ All revisions/branches are listed in the file `revisions.txt`.
48
+ Or, you can access all the revisions for the models via the following code snippet:
49
+ ```python
50
+ from huggingface_hub import list_repo_refs
51
+ out = list_repo_refs("allenai/OLMo-7B")
52
+ branches = [b.name for b in out.branches]
53
+ ```
54
+ A few revisions were lost due to an error, but the vast majority are present.
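+
+ For example, a small sketch building on the snippet above (the filtering and sorting logic is illustrative, not part of the repo) picks out the newest step branch and loads it:
+ ```python
+ import re
+
+ import hf_olmo  # registers the OLMo architecture with transformers
+ from huggingface_hub import list_repo_refs
+ from transformers import AutoModelForCausalLM
+
+ out = list_repo_refs("allenai/OLMo-7B")
+ step_branches = [b.name for b in out.branches if re.match(r"step\d+-tokens\d+B$", b.name)]
+ latest = max(step_branches, key=lambda name: int(re.match(r"step(\d+)", name).group(1)))
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", revision=latest)
+ ```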
55
+
56
+ ### Model Description
57
+
58
+ <!-- Provide a longer summary of what this model is. -->
59
+
60
+ - **Developed by:** Allen Institute for AI (AI2)
61
+ - **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
62
+ - **Model type:** a Transformer-style autoregressive language model.
63
+ - **Language(s) (NLP):** English
64
+ - **License:** The code and model are released under Apache 2.0.
65
+ - **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
66
+ - **Date cutoff:** February/March 2023, based on the Dolma dataset version.
67
+
68
+
69
+ ### Model Sources
70
+
71
+ <!-- Provide the basic links for the model. -->
72
+
73
+ - **Project Page:** https://allenai.org/olmo
74
+ - **Repositories:**
75
+ - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
76
+ - Evaluation code: https://github.com/allenai/OLMo-Eval
77
+ - Further fine-tuning code: https://github.com/allenai/open-instruct
78
+ - **Paper:** [Link](https://arxiv.org/abs/2402.00838)
79
+ - **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580
80
+ - **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-7B/reports/OLMo-7B--Vmlldzo2NzQyMzk5
81
+ <!-- - **Press release:** TODO -->
82
+
83
+ ## Uses
84
+
85
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
86
+
87
+ ### Inference
88
+ To get inference running quickly, install the required package:
89
+ ```bash
90
+ pip install ai2-olmo
91
+ ```
92
+ Now, proceed as usual with HuggingFace:
93
+ ```python
94
+ import hf_olmo
95
+
96
+ from transformers import AutoModelForCausalLM, AutoTokenizer
97
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B")
98
+ tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")
99
+ message = ["Language modeling is "]
100
+ inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
101
+ # optional: run on GPU by moving the inputs and model to CUDA
102
+ # inputs = {k: v.to('cuda') for k,v in inputs.items()}
103
+ # olmo = olmo.to('cuda')
104
+ response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
105
+ print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
106
+ >> 'Language modeling is the first step to build natural language generation...'
107
+ ```
108
+ Alternatively, with the pipeline abstraction:
109
+ ```python
110
+ import hf_olmo
111
+
112
+ from transformers import pipeline
113
+ olmo_pipe = pipeline("text-generation", model="allenai/OLMo-7B")
114
+ print(olmo_pipe("Language modeling is "))
115
+ >> 'Language modeling is a branch of natural language processing that aims to...'
116
+ ```
117
+
118
+ Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
119
+ The quantized model is more sensitive to input dtypes and CUDA handling, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
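+
+ Putting these pieces together, a minimal end-to-end sketch of the quantized path described above (assuming a CUDA GPU and that `bitsandbytes` and `accelerate` are installed) could look like:
+ ```python
+ import hf_olmo  # registers the OLMo architecture with transformers
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ olmo = AutoModelForCausalLM.from_pretrained(
+     "allenai/OLMo-7B", torch_dtype=torch.float16, load_in_8bit=True  # 8-bit weights via bitsandbytes
+ )
+ tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")
+ inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
+ # Pass the input ids on the GPU directly, as recommended above for the quantized model.
+ response = olmo.generate(inputs.input_ids.to("cuda"), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
+ print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
+ ```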
120
+
121
+ Note: you may see the following error if `ai2-olmo` is not installed correctly; it is caused by an internal Python package-name check. We'll update the code soon to make this error clearer.
122
+ ```bash
123
+ raise ImportError(
124
+ ImportError: This modeling file requires the following packages that were not found in your environment: hf_olmo. Run `pip install hf_olmo`
125
+ ```
126
+
127
+ ### Fine-tuning
128
+ Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or from many intermediate checkpoints. Two recipes for tuning are available.
129
+ 1. Fine-tune with the OLMo repository:
130
+ ```bash
131
+ torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
132
+ --data.paths=[{path_to_data}/input_ids.npy] \
133
+ --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
134
+ --load_path={path_to_checkpoint} \
135
+ --reset_trainer_state
136
+ ```
137
+ For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo?tab=readme-ov-file#fine-tuning).
138
+
139
+ 2. Further fine-tuning support is being developed in AI2's Open Instruct repository. Details are [here](https://github.com/allenai/open-instruct).
140
+
141
+ ## Evaluation
142
+
143
+ <!-- This section describes the evaluation protocols and provides the results. -->
144
+
145
+ Core model results for the 7B model are found below.
146
+
147
+ | | [Llama 7B](https://arxiv.org/abs/2302.13971) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | [MPT 7B](https://huggingface.co/mosaicml/mpt-7b) | **OLMo 7B** (ours) |
148
+ | --------------------------------- | -------- | ---------- | --------- | ------ | ------- |
149
+ | arc_challenge | 44.5 | 39.8 | 47.5 | 46.5 | 48.5 |
150
+ | arc_easy | 57.0 | 57.7 | 70.4 | 70.5 | 65.4 |
151
+ | boolq | 73.1 | 73.5 | 74.6 | 74.2 | 73.4 |
152
+ | copa | 85.0 | 87.0 | 86.0 | 85.0 | 90 |
153
+ | hellaswag | 74.5 | 74.5 | 75.9 | 77.6 | 76.4 |
154
+ | openbookqa | 49.8 | 48.4 | 53.0 | 48.6 | 50.2 |
155
+ | piqa | 76.3 | 76.4 | 78.5 | 77.3 | 78.4 |
156
+ | sciq | 89.5 | 90.8 | 93.9 | 93.7 | 93.8 |
157
+ | winogrande | 68.2 | 67.3 | 68.9 | 69.9 | 67.9 |
158
+ | **Core tasks average** | 68.7 | 68.4 | 72.1 | 71.5 | 71.6 |
159
+ | truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33 | 36.0 |
160
+ | MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 |
161
+ | GSM8k (mixed eval.) | 10.0 (8shot CoT) | 12.0 (8shot CoT) | 4.0 (5 shot) | 4.5 (5 shot) | 8.5 (8shot CoT) |
162
+ | **Full average** | 57.8 | 59.3 | 59.2 | 59.3 | 59.8 |
163
+
164
+ And for the 1B model:
165
+
166
+ | task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) |
167
+ | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | ----------------- | --------- | -------------------------------------- | ------- |
168
+ | arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 |
169
+ | arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 |
170
+ | boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 |
171
+ | copa | 50 | 84 | 72 | 78 | 79 |
172
+ | hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 |
173
+ | openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 |
174
+ | piqa | 50 | 74 | 69.1 | 71.1 | 73.7 |
175
+ | sciq | 25 | 94.7 | 86 | 90.5 | 88.1 |
176
+ | winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 |
177
+ | Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 |
178
+
179
+ \*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.
180
+
181
+ ## Model Details
182
+
183
+ ### Data
184
+ For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation.
185
+
186
+ ### Architecture
187
+
188
+ OLMo 7B architecture with peer models for comparison.
189
+
190
+ | | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B |
191
+ |------------------------|-------------------|---------------------|--------------------|--------------------|------------------|
192
+ | d_model | 4096 | 4096 | 4096 | 4544 | 4096 |
193
+ | num heads | 32 | 32 | 32 | 71 | 16 |
194
+ | num layers | 32 | 32 | 32 | 32 | 32 |
195
+ | MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 |
196
+ | LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN |
197
+ | pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE |
198
+ | attention variant | full | GQA | full | MQA | MQA |
199
+ | biases | none | none | in LN only | in LN only | none |
200
+ | block type | sequential | sequential | sequential | parallel | parallel |
201
+ | activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU |
202
+ | sequence length | 2048 | 4096 | 2048 | 2048 | 2048 |
203
+ | batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 |
204
+ | batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M |
205
+ | weight tying | no | no | no | no | yes |
206
+
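+ To inspect these values programmatically, you can load the model configuration (a brief sketch; the exact field names are defined by the `hf_olmo` package, so print the object to see what is exposed):
+ ```python
+ import hf_olmo  # registers the OLMo configuration with transformers
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("allenai/OLMo-7B")
+ print(config)  # hidden size, layer/head counts, sequence length, etc.
+ ```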
207
+
208
+ ### Hyperparameters
209
+
210
+ AdamW optimizer parameters are shown below.
211
+
212
+ | Size | Peak LR | Betas | Epsilon | Weight Decay |
213
+ |------|------------|-----------------|-------------|--------------|
214
+ | 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |
215
+ | 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 |
216
+
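+ For reference, the 7B row above corresponds roughly to a standard PyTorch AdamW setup like the following (a sketch only; the actual training loop, LR schedule, and distributed configuration live in the OLMo repository):
+ ```python
+ import torch
+
+ model = torch.nn.Linear(8, 8)  # stand-in module; substitute an actual OLMo model
+ # Values taken from the 7B row of the table above.
+ optimizer = torch.optim.AdamW(
+     model.parameters(),
+     lr=3.0e-4,          # peak learning rate
+     betas=(0.9, 0.99),
+     eps=1.0e-5,
+     weight_decay=0.1,
+ )
+ ```
+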
217
+ Optimizer settings comparison with peer models.
218
+
219
+ | | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) |
220
+ |-----------------------|------------------|---------------------|--------------------|--------------------|
221
+ | warmup steps | 5000 | 2000 | 2000 | 1000 |
222
+ | peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 |
223
+ | minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 |
224
+ | weight decay | 0.1 | 0.1 | 0.1 | 0.1 |
225
+ | beta1 | 0.9 | 0.9 | 0.9 | 0.99 |
226
+ | beta2 | 0.95 | 0.95 | 0.95 | 0.999 |
227
+ | epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 |
228
+ | LR schedule | linear | cosine | cosine | cosine |
229
+ | gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 |
230
+ | gradient reduce dtype | FP32 | FP32 | FP32 | BF16 |
231
+ | optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 |
232
+
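+ The warmup, peak, and minimum values above imply a schedule of roughly the following shape (the total step count and exact decay endpoint are training-run details not specified here, so treat this as an illustrative sketch):
+ ```python
+ def olmo_7b_lr(step: int, total_steps: int, peak_lr: float = 3.0e-4,
+                min_lr: float = 3.0e-5, warmup_steps: int = 5000) -> float:
+     """Linear warmup to the peak LR, then linear decay toward the minimum LR."""
+     if step < warmup_steps:
+         return peak_lr * step / warmup_steps
+     frac = (step - warmup_steps) / max(1, total_steps - warmup_steps)
+     return peak_lr + frac * (min_lr - peak_lr)
+ ```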
233
+
234
+
235
+ ## Environmental Impact
236
+
237
+ OLMo 7B variants were trained either on MI250X GPUs at the LUMI supercomputer or on A100-40GB GPUs provided by MosaicML.
238
+ A summary of the environmental impact is given below. Further details are available in the paper.
239
+
240
+ | | GPU Type | Power Consumption from GPUs | Carbon Intensity (kg CO₂e/kWh) | Carbon Emissions (tCO₂eq) |
241
+ |-----------|------------|-----------------------------|--------------------------------|---------------------------|
242
+ | OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* |
243
+ | OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 |
244
+
245
+ ## Bias, Risks, and Limitations
246
+
247
+ Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful or sensitive content.
248
+ Such content can also be produced unintentionally, especially in the case of bias, so we recommend that users consider the risks of applying this technology.
249
+
250
+ In addition, statements generated by OLMo, as with any LLM, are often factually incorrect, so outputs should be verified.
251
+
252
+
253
+ ## Citation
254
+
255
+ **BibTeX:**
256
+
257
+ ```
258
+ @article{Groeneveld2023OLMo,
259
+ title={OLMo: Accelerating the Science of Language Models},
260
+ author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
261
+ journal={Preprint},
262
+ year={2024}
263
+ }
264
+ ```
265
+
266
+ **APA:**
267
+
268
+ Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.
269
+
270
+ ## Model Card Contact
271
+
272
+
273
+ For errors in this model card, contact Nathan or Akshita, `{nathanl, akshitab} at allenai dot org`.
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b1d7671818c11442286bb3edfd40f727de3d56f8b1677fc07bba61c8a3aebf63
3
+ size 27552398824
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c691ecdf9ec32368e950af49421a699e321b6efad9d21387011c0ba9e985706f
3
+ size 27552427238
revisions.txt ADDED
@@ -0,0 +1,558 @@
1
+ step0-tokens0B
2
+ step1000-tokens4B
3
+ step2000-tokens9B
4
+ step3000-tokens13B
5
+ step4000-tokens18B
6
+ step5000-tokens22B
7
+ step6000-tokens27B
8
+ step7000-tokens31B
9
+ step8000-tokens35B
10
+ step9000-tokens40B
11
+ step10000-tokens44B
12
+ step11000-tokens49B
13
+ step12000-tokens53B
14
+ step13000-tokens58B
15
+ step14000-tokens62B
16
+ step15000-tokens66B
17
+ step16000-tokens71B
18
+ step17000-tokens75B
19
+ step18000-tokens80B
20
+ step19000-tokens84B
21
+ step20000-tokens88B
22
+ step21000-tokens93B
23
+ step22000-tokens97B
24
+ step23000-tokens102B
25
+ step24000-tokens106B
26
+ step25000-tokens111B
27
+ step26000-tokens115B
28
+ step27000-tokens119B
29
+ step28000-tokens124B
30
+ step29000-tokens128B
31
+ step30000-tokens133B
32
+ step31000-tokens137B
33
+ step32000-tokens142B
34
+ step33000-tokens146B
35
+ step34000-tokens150B
36
+ step35000-tokens155B
37
+ step36000-tokens159B
38
+ step37000-tokens164B
39
+ step38000-tokens168B
40
+ step39000-tokens173B
41
+ step40000-tokens177B
42
+ step41000-tokens181B
43
+ step42000-tokens186B
44
+ step43000-tokens190B
45
+ step44000-tokens195B
46
+ step45000-tokens199B
47
+ step46000-tokens203B
48
+ step47000-tokens208B
49
+ step48000-tokens212B
50
+ step49000-tokens217B
51
+ step50000-tokens221B
52
+ step51000-tokens226B
53
+ step52000-tokens230B
54
+ step53000-tokens234B
55
+ step54000-tokens239B
56
+ step55000-tokens243B
57
+ step56000-tokens248B
58
+ step57000-tokens252B
59
+ step58000-tokens257B
60
+ step59000-tokens261B
61
+ step60000-tokens265B
62
+ step61000-tokens270B
63
+ step62000-tokens274B
64
+ step63000-tokens279B
65
+ step64000-tokens283B
66
+ step65000-tokens288B
67
+ step66000-tokens292B
68
+ step67000-tokens296B
69
+ step68000-tokens301B
70
+ step69000-tokens305B
71
+ step70000-tokens310B
72
+ step71000-tokens314B
73
+ step72000-tokens319B
74
+ step73000-tokens323B
75
+ step74000-tokens327B
76
+ step75000-tokens332B
77
+ step76000-tokens336B
78
+ step77000-tokens341B
79
+ step78000-tokens345B
80
+ step79000-tokens349B
81
+ step80000-tokens354B
82
+ step81000-tokens358B
83
+ step82000-tokens363B
84
+ step83000-tokens367B
85
+ step84000-tokens372B
86
+ step85000-tokens376B
87
+ step86000-tokens380B
88
+ step87000-tokens385B
89
+ step88000-tokens389B
90
+ step89000-tokens394B
91
+ step90000-tokens398B
92
+ step91000-tokens403B
93
+ step92000-tokens407B
94
+ step93000-tokens411B
95
+ step94000-tokens416B
96
+ step95000-tokens420B
97
+ step96000-tokens425B
98
+ step97000-tokens429B
99
+ step98000-tokens434B
100
+ step99000-tokens438B
101
+ step100000-tokens442B
102
+ step101000-tokens447B
103
+ step102000-tokens451B
104
+ step103000-tokens456B
105
+ step104000-tokens460B
106
+ step105000-tokens464B
107
+ step106000-tokens469B
108
+ step107000-tokens473B
109
+ step108000-tokens478B
110
+ step109000-tokens482B
111
+ step110000-tokens487B
112
+ step111000-tokens491B
113
+ step112000-tokens495B
114
+ step113000-tokens500B
115
+ step114000-tokens504B
116
+ step115000-tokens509B
117
+ step116000-tokens513B
118
+ step117000-tokens518B
119
+ step118000-tokens522B
120
+ step119000-tokens526B
121
+ step120000-tokens531B
122
+ step121000-tokens535B
123
+ step122000-tokens540B
124
+ step123000-tokens544B
125
+ step124000-tokens549B
126
+ step125000-tokens553B
127
+ step126000-tokens557B
128
+ step127000-tokens562B
129
+ step128000-tokens566B
130
+ step129000-tokens571B
131
+ step130000-tokens575B
132
+ step131000-tokens580B
133
+ step132000-tokens584B
134
+ step133000-tokens588B
135
+ step134000-tokens593B
136
+ step135000-tokens597B
137
+ step136000-tokens602B
138
+ step137000-tokens606B
139
+ step138000-tokens610B
140
+ step139000-tokens615B
141
+ step140000-tokens619B
142
+ step141000-tokens624B
143
+ step142000-tokens628B
144
+ step143000-tokens633B
145
+ step144000-tokens637B
146
+ step145000-tokens641B
147
+ step146000-tokens646B
148
+ step147000-tokens650B
149
+ step148000-tokens655B
150
+ step149000-tokens659B
151
+ step150000-tokens664B
152
+ step151000-tokens668B
153
+ step152000-tokens672B
154
+ step153000-tokens677B
155
+ step154000-tokens681B
156
+ step155000-tokens686B
157
+ step156000-tokens690B
158
+ step157000-tokens695B
159
+ step158000-tokens699B
160
+ step159000-tokens703B
161
+ step160000-tokens708B
162
+ step161000-tokens712B
163
+ step162000-tokens717B
164
+ step163000-tokens721B
165
+ step164000-tokens725B
166
+ step165000-tokens730B
167
+ step166000-tokens734B
168
+ step167000-tokens739B
169
+ step168000-tokens743B
170
+ step169000-tokens748B
171
+ step170000-tokens752B
172
+ step171000-tokens756B
173
+ step172000-tokens761B
174
+ step173000-tokens765B
175
+ step174000-tokens770B
176
+ step175000-tokens774B
177
+ step176000-tokens779B
178
+ step177000-tokens783B
179
+ step178000-tokens787B
180
+ step179000-tokens792B
181
+ step180000-tokens796B
182
+ step181000-tokens801B
183
+ step182000-tokens805B
184
+ step183000-tokens810B
185
+ step184000-tokens814B
186
+ step185000-tokens818B
187
+ step186000-tokens823B
188
+ step187000-tokens827B
189
+ step188000-tokens832B
190
+ step189000-tokens836B
191
+ step190000-tokens840B
192
+ step191000-tokens845B
193
+ step192000-tokens849B
194
+ step193000-tokens854B
195
+ step194000-tokens858B
196
+ step195000-tokens863B
197
+ step196000-tokens867B
198
+ step197000-tokens871B
199
+ step198000-tokens876B
200
+ step199000-tokens880B
201
+ step200000-tokens885B
202
+ step201000-tokens889B
203
+ step202000-tokens894B
204
+ step203000-tokens898B
205
+ step204000-tokens902B
206
+ step205000-tokens907B
207
+ step206000-tokens911B
208
+ step207000-tokens916B
209
+ step208000-tokens920B
210
+ step209000-tokens925B
211
+ step210000-tokens929B
212
+ step211000-tokens933B
213
+ step212000-tokens938B
214
+ step213000-tokens942B
215
+ step214000-tokens947B
216
+ step215000-tokens951B
217
+ step216000-tokens956B
218
+ step217000-tokens960B
219
+ step218000-tokens964B
220
+ step219000-tokens969B
221
+ step220000-tokens973B
222
+ step221000-tokens978B
223
+ step222000-tokens982B
224
+ step223000-tokens986B
225
+ step224000-tokens991B
226
+ step225000-tokens995B
227
+ step226000-tokens1000B
228
+ step227000-tokens1004B
229
+ step228000-tokens1009B
230
+ step229000-tokens1013B
231
+ step230000-tokens1017B
232
+ step231000-tokens1022B
233
+ step232000-tokens1026B
234
+ step233000-tokens1031B
235
+ step234000-tokens1035B
236
+ step235000-tokens1040B
237
+ step236000-tokens1044B
238
+ step237000-tokens1048B
239
+ step238000-tokens1053B
240
+ step239000-tokens1057B
241
+ step240000-tokens1062B
242
+ step241000-tokens1066B
243
+ step242000-tokens1071B
244
+ step243000-tokens1075B
245
+ step244000-tokens1079B
246
+ step245000-tokens1084B
247
+ step246000-tokens1088B
248
+ step247000-tokens1093B
249
+ step248000-tokens1097B
250
+ step249000-tokens1101B
251
+ step250000-tokens1106B
252
+ step251000-tokens1110B
253
+ step252000-tokens1115B
254
+ step253000-tokens1119B
255
+ step254000-tokens1124B
256
+ step255000-tokens1128B
257
+ step256000-tokens1132B
258
+ step257000-tokens1137B
259
+ step258000-tokens1141B
260
+ step259000-tokens1146B
261
+ step260000-tokens1150B
262
+ step261000-tokens1155B
263
+ step262000-tokens1159B
264
+ step263000-tokens1163B
265
+ step264000-tokens1168B
266
+ step265000-tokens1172B
267
+ step266000-tokens1177B
268
+ step267000-tokens1181B
269
+ step268000-tokens1186B
270
+ step269000-tokens1190B
271
+ step270000-tokens1194B
272
+ step271000-tokens1199B
273
+ step272000-tokens1203B
274
+ step273000-tokens1208B
275
+ step274000-tokens1212B
276
+ step275000-tokens1217B
277
+ step276000-tokens1221B
278
+ step277000-tokens1225B
279
+ step278000-tokens1230B
280
+ step279000-tokens1234B
281
+ step280000-tokens1239B
282
+ step281000-tokens1243B
283
+ step282000-tokens1247B
284
+ step283000-tokens1252B
285
+ step284000-tokens1256B
286
+ step285000-tokens1261B
287
+ step286000-tokens1265B
288
+ step287000-tokens1270B
289
+ step288000-tokens1274B
290
+ step289000-tokens1278B
291
+ step290000-tokens1283B
292
+ step291000-tokens1287B
293
+ step292000-tokens1292B
294
+ step293000-tokens1296B
295
+ step294000-tokens1301B
296
+ step295000-tokens1305B
297
+ step296000-tokens1309B
298
+ step297000-tokens1314B
299
+ step298000-tokens1318B
300
+ step299000-tokens1323B
301
+ step300000-tokens1327B
302
+ step301000-tokens1332B
303
+ step302000-tokens1336B
304
+ step303000-tokens1340B
305
+ step304000-tokens1345B
306
+ step305000-tokens1349B
307
+ step306000-tokens1354B
308
+ step307000-tokens1358B
309
+ step308000-tokens1362B
310
+ step309000-tokens1367B
311
+ step310000-tokens1371B
312
+ step311000-tokens1376B
313
+ step312000-tokens1380B
314
+ step313000-tokens1385B
315
+ step314000-tokens1389B
316
+ step315000-tokens1393B
317
+ step316000-tokens1398B
318
+ step317000-tokens1402B
319
+ step318000-tokens1407B
320
+ step319000-tokens1411B
321
+ step320000-tokens1416B
322
+ step321000-tokens1420B
323
+ step322000-tokens1424B
324
+ step323000-tokens1429B
325
+ step324000-tokens1433B
326
+ step325000-tokens1438B
327
+ step326000-tokens1442B
328
+ step327000-tokens1447B
329
+ step328000-tokens1451B
330
+ step329000-tokens1455B
331
+ step330000-tokens1460B
332
+ step331000-tokens1464B
333
+ step332000-tokens1469B
334
+ step333000-tokens1473B
335
+ step334000-tokens1478B
336
+ step335000-tokens1482B
337
+ step336000-tokens1486B
338
+ step337000-tokens1491B
339
+ step338000-tokens1495B
340
+ step339000-tokens1500B
341
+ step340000-tokens1504B
342
+ step341000-tokens1508B
343
+ step342000-tokens1513B
344
+ step343000-tokens1517B
345
+ step344000-tokens1522B
346
+ step345000-tokens1526B
347
+ step346000-tokens1531B
348
+ step347000-tokens1535B
349
+ step348000-tokens1539B
350
+ step349000-tokens1544B
351
+ step350000-tokens1548B
352
+ step351000-tokens1553B
353
+ step352000-tokens1557B
354
+ step353000-tokens1562B
355
+ step354000-tokens1566B
356
+ step355000-tokens1570B
357
+ step356000-tokens1575B
358
+ step357000-tokens1579B
359
+ step358000-tokens1584B
360
+ step359000-tokens1588B
361
+ step360000-tokens1593B
362
+ step361000-tokens1597B
363
+ step362000-tokens1601B
364
+ step363000-tokens1606B
365
+ step364000-tokens1610B
366
+ step365000-tokens1615B
367
+ step366000-tokens1619B
368
+ step367000-tokens1623B
369
+ step368000-tokens1628B
370
+ step369000-tokens1632B
371
+ step370000-tokens1637B
372
+ step371000-tokens1641B
373
+ step372000-tokens1646B
374
+ step373000-tokens1650B
375
+ step374000-tokens1654B
376
+ step375000-tokens1659B
377
+ step376000-tokens1663B
378
+ step377000-tokens1668B
379
+ step378000-tokens1672B
380
+ step379000-tokens1677B
381
+ step380000-tokens1681B
382
+ step381000-tokens1685B
383
+ step382000-tokens1690B
384
+ step383000-tokens1694B
385
+ step384000-tokens1699B
386
+ step385000-tokens1703B
387
+ step386000-tokens1708B
388
+ step387000-tokens1712B
389
+ step388000-tokens1716B
390
+ step389000-tokens1721B
391
+ step390000-tokens1725B
392
+ step391000-tokens1730B
393
+ step392000-tokens1734B
394
+ step393000-tokens1739B
395
+ step394000-tokens1743B
396
+ step395000-tokens1747B
397
+ step396000-tokens1752B
398
+ step397000-tokens1756B
399
+ step398000-tokens1761B
400
+ step399000-tokens1765B
401
+ step400000-tokens1769B
402
+ step401000-tokens1774B
403
+ step402000-tokens1778B
404
+ step403000-tokens1783B
405
+ step404000-tokens1787B
406
+ step405000-tokens1792B
407
+ step406000-tokens1796B
408
+ step407000-tokens1800B
409
+ step408000-tokens1805B
410
+ step409000-tokens1809B
411
+ step410000-tokens1814B
412
+ step411000-tokens1818B
413
+ step412000-tokens1823B
414
+ step413000-tokens1827B
415
+ step414000-tokens1831B
416
+ step415000-tokens1836B
417
+ step416000-tokens1840B
418
+ step417000-tokens1845B
419
+ step418000-tokens1849B
420
+ step419000-tokens1854B
421
+ step420000-tokens1858B
422
+ step421000-tokens1862B
423
+ step422000-tokens1867B
424
+ step423000-tokens1871B
425
+ step424000-tokens1876B
426
+ step425000-tokens1880B
427
+ step426000-tokens1884B
428
+ step427000-tokens1889B
429
+ step428000-tokens1893B
430
+ step429000-tokens1898B
431
+ step430000-tokens1902B
432
+ step431000-tokens1907B
433
+ step432000-tokens1911B
434
+ step433000-tokens1915B
435
+ step434000-tokens1920B
436
+ step435000-tokens1924B
437
+ step436000-tokens1929B
438
+ step437000-tokens1933B
439
+ step438000-tokens1938B
440
+ step439000-tokens1942B
441
+ step440000-tokens1946B
442
+ step441000-tokens1951B
443
+ step442000-tokens1955B
444
+ step443000-tokens1960B
445
+ step444000-tokens1964B
446
+ step445000-tokens1969B
447
+ step446000-tokens1973B
448
+ step447000-tokens1977B
449
+ step448000-tokens1982B
450
+ step449000-tokens1986B
451
+ step450000-tokens1991B
452
+ step451000-tokens1995B
453
+ step452000-tokens2000B
454
+ step453000-tokens2004B
455
+ step454000-tokens2008B
456
+ step455000-tokens2013B
457
+ step456000-tokens2017B
458
+ step457000-tokens2022B
459
+ step458000-tokens2026B
460
+ step459000-tokens2030B
461
+ step460000-tokens2035B
462
+ step461000-tokens2039B
463
+ step462000-tokens2044B
464
+ step463000-tokens2048B
465
+ step464000-tokens2053B
466
+ step465000-tokens2057B
467
+ step466000-tokens2061B
468
+ step467000-tokens2066B
469
+ step468000-tokens2070B
470
+ step469000-tokens2075B
471
+ step470000-tokens2079B
472
+ step471000-tokens2084B
473
+ step472000-tokens2088B
474
+ step473000-tokens2092B
475
+ step474000-tokens2097B
476
+ step475000-tokens2101B
477
+ step476000-tokens2106B
478
+ step477000-tokens2110B
479
+ step478000-tokens2115B
480
+ step479000-tokens2119B
481
+ step480000-tokens2123B
482
+ step481000-tokens2128B
483
+ step482000-tokens2132B
484
+ step483000-tokens2137B
485
+ step484000-tokens2141B
486
+ step485000-tokens2145B
487
+ step486000-tokens2150B
488
+ step487000-tokens2154B
489
+ step488000-tokens2159B
490
+ step489000-tokens2163B
491
+ step490000-tokens2168B
492
+ step491000-tokens2172B
493
+ step492000-tokens2176B
494
+ step493000-tokens2181B
495
+ step494000-tokens2185B
496
+ step495000-tokens2190B
497
+ step496000-tokens2194B
498
+ step497000-tokens2199B
499
+ step498000-tokens2203B
500
+ step499000-tokens2207B
501
+ step500000-tokens2212B
502
+ step501000-tokens2216B
503
+ step502000-tokens2221B
504
+ step503000-tokens2225B
505
+ step504000-tokens2230B
506
+ step505000-tokens2234B
507
+ step506000-tokens2238B
508
+ step507000-tokens2243B
509
+ step508000-tokens2247B
510
+ step509000-tokens2252B
511
+ step510000-tokens2256B
512
+ step511000-tokens2261B
513
+ step512000-tokens2265B
514
+ step513000-tokens2269B
515
+ step514000-tokens2274B
516
+ step515000-tokens2278B
517
+ step516000-tokens2283B
518
+ step517000-tokens2287B
519
+ step518000-tokens2291B
520
+ step519000-tokens2296B
521
+ step520000-tokens2300B
522
+ step521000-tokens2305B
523
+ step522000-tokens2309B
524
+ step523000-tokens2314B
525
+ step524000-tokens2318B
526
+ step525000-tokens2322B
527
+ step526000-tokens2327B
528
+ step527000-tokens2331B
529
+ step528000-tokens2336B
530
+ step529000-tokens2340B
531
+ step530000-tokens2345B
532
+ step531000-tokens2349B
533
+ step532000-tokens2353B
534
+ step533000-tokens2358B
535
+ step534000-tokens2362B
536
+ step535000-tokens2367B
537
+ step536000-tokens2371B
538
+ step537000-tokens2376B
539
+ step538000-tokens2380B
540
+ step539000-tokens2384B
541
+ step540000-tokens2389B
542
+ step541000-tokens2393B
543
+ step542000-tokens2398B
544
+ step543000-tokens2402B
545
+ step544000-tokens2406B
546
+ step545000-tokens2411B
547
+ step546000-tokens2415B
548
+ step547000-tokens2420B
549
+ step548000-tokens2424B
550
+ step549000-tokens2429B
551
+ step550000-tokens2433B
552
+ step551000-tokens2437B
553
+ step552000-tokens2442B
554
+ step553000-tokens2446B
555
+ step554000-tokens2451B
556
+ step555000-tokens2455B
557
+ step556000-tokens2460B
558
+ step557000-tokens2464B