Update README.md
README.md CHANGED
@@ -6,7 +6,7 @@ This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/a
- Training sequences beyond 2048 tokens have the target truncated to 2048 tokens.
- Used the airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4

-Otherwise, I emulated the training process as closely as possible
+Otherwise, I emulated the training process as closely as possible (rank 64 QLoRA). It was trained on 1x RTX 6000 Ada for ~43 hours.

## Motivation
Recent advances in extending context via RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k) and [Meta AI](https://arxiv.org/abs/2306.15595)) demonstrate that the context window can be extended without full retraining, although finetuning has been shown to be necessary to properly leverage the longer context. The SuperHOT LoRA is an adapter finetuned on a longer context (8192 tokens); even when applied to dissimilar models, it successfully extends the context window the model can attend to. Impressive as this flexibility is, how much does performance suffer relative to a model finetuned with the scaled embeddings from the start? This experiment explores that question.
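For reference, the RoPE scaling mentioned above (linear position interpolation) can be sketched in a few lines. This is a minimal illustration assuming PyTorch, not the implementation used for this model; `build_rope_cache` and `apply_rope` are illustrative names, not functions from this repo.

```python
import torch

def build_rope_cache(seq_len, head_dim, base=10000.0, scale=0.25):
    """Precompute RoPE angle tables with linear position interpolation."""
    # Standard rotary frequencies, one per pair of dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # Linear interpolation: compress positions by `scale` so rotation angles
    # stay in the range seen during 2048-token pretraining even when
    # seq_len is 8192 (scale = 2048 / 8192 = 0.25).
    positions = torch.arange(seq_len).float() * scale
    angles = torch.outer(positions, inv_freq)  # (seq_len, head_dim // 2)
    return angles.cos(), angles.sin()

def apply_rope(x, cos, sin):
    """Rotate query/key tensors x of shape (batch, heads, seq_len, head_dim),
    using the interleaved-pair RoPE convention."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

With `scale=1.0` this reduces to ordinary RoPE; `scale=0.25` gives the 4x (2048 to 8192) extension, which the finetuning described above is meant to let the model actually exploit.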