Update README.md
README.md CHANGED
@@ -2,7 +2,7 @@
 ## Overview
 
 This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (merged model, GPTQ quantization, and LoRA weights) with several key modifications:
-- Context length extended to 8192 by RoPE Scaled Embeddings, but NOT via the superHOT LoRA.
+- Context length extended to 8192 by RoPE Scaled Embeddings, but NOT via the superHOT LoRA. I started with base Llama-33b.
 - Training sequences beyond 2048 have the target truncated to equal 2048.
 - Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4
 
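The diff itself doesn't show how the RoPE scaling is implemented, but a common way to stretch a RoPE model's context (and a plausible reading of "RoPE Scaled Embeddings" here) is linear position interpolation: position indices are compressed by original_ctx / extended_ctx (2048 / 8192 = 0.25 in this case) before the rotary angles are computed, so the extended range maps back into positions the base model saw in pre-training. The sketch below is illustrative only; the function names and the 0.25 scale factor are assumptions, not the author's code.

```python
import torch

def rope_angles(head_dim: int, max_positions: int, scale: float = 1.0,
                base: float = 10000.0):
    """Precompute RoPE cos/sin tables, optionally with scaled positions.

    scale < 1.0 compresses position indices so an extended context
    (e.g. 8192) maps back into the pre-training range (e.g. 2048).
    """
    # Standard RoPE inverse frequencies, one per pair of channels.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # Interpolated position indices -- the 0.25 factor assumed here
    # comes from 2048 / 8192; the commit doesn't state the exact method.
    positions = torch.arange(max_positions).float() * scale
    angles = torch.outer(positions, inv_freq)   # (max_positions, head_dim/2)
    return angles.cos(), angles.sin()

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor):
    """Rotate consecutive channel pairs of x: (..., seq_len, head_dim)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    c, s = cos[: x.shape[-2]], sin[: x.shape[-2]]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * c - x2 * s
    out[..., 1::2] = x1 * s + x2 * c
    return out

# Example: an 8192-position table compressed into a 2048-position range.
cos, sin = rope_angles(head_dim=128, max_positions=8192, scale=2048 / 8192)
q = torch.randn(1, 8192, 128)       # one attention head's queries
q_rot = apply_rope(q, cos, sin)
```

With scale=1.0 this reduces to vanilla RoPE, which is why interpolation of this kind tends to degrade short-context quality less than simply extrapolating positions past the pre-training limit.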