bhenrym14 committed
Commit: 20fa77c
Parent: 90e5fd0

Update README.md

Files changed (1):
  README.md (+2 -2)
README.md CHANGED
@@ -1,7 +1,7 @@
 # RoPE Scaled Finetune of airoboros-33b-gpt4-1.4.1 (GPTQ)
 ## Overview
 
-This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (with GPTQ Quantization) with several key modifications:
+This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (merged model GPTQ Quantization, and LoRA weights) with several key modifications:
 - Context length extended to 8192 by RoPE Scaled Embeddings, but NOT via the superHOT LoRA.
 - Training sequences beyond 2048 have the target truncated to equal 2048.
 - Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4
@@ -24,7 +24,7 @@ Recent advancements in extending context by RoPE scaling ([kaiokendev](https://k
 
 ## Quantization:
 
-The merged model was quantized with AutoGPTQ (bits = 4, group_size = 128, desc_act = True). If there's interest, I can upload the LoRA weights and/or merged 16bit HF model.
+The merged model was quantized with AutoGPTQ (bits = 4, group_size = 128, desc_act = True). The adapter weights and config are also uploaded.
 
 
 
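For context on the "RoPE Scaled Embeddings" bullet in the first hunk: the general idea is to compress position indices so that 8192 positions map into the 2048-position range the base model was pretrained on (linear interpolation, scale factor 8192 / 2048 = 4). Below is a minimal generic sketch of that technique in PyTorch; the function names (`scaled_rope_cache`, `apply_rope`) and the standalone structure are illustrative assumptions, not the patch actually used for this finetune.

```python
import torch


def scaled_rope_cache(dim, max_len=8192, base=10000.0, scale=8192 / 2048):
    """Precompute cos/sin tables with position indices divided by `scale` (hypothetical helper)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(max_len).float() / scale  # linear position interpolation
    freqs = torch.outer(positions, inv_freq)           # (max_len, dim // 2)
    emb = torch.cat((freqs, freqs), dim=-1)            # (max_len, dim)
    return emb.cos(), emb.sin()


def rotate_half(x):
    """Rotate the last dimension: (x1, x2) -> (-x2, x1)."""
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def apply_rope(q, k, cos, sin):
    """Apply rotary embeddings to query/key tensors shaped (..., seq_len, head_dim)."""
    seq_len = q.shape[-2]
    cos, sin = cos[:seq_len], sin[:seq_len]
    return q * cos + rotate_half(q) * sin, k * cos + rotate_half(k) * sin


# Example: head_dim 128, queries/keys 4096 tokens long.
cos, sin = scaled_rope_cache(dim=128)
q = torch.randn(1, 8, 4096, 128)  # (batch, heads, seq, head_dim)
k = torch.randn(1, 8, 4096, 128)
q_rot, k_rot = apply_rope(q, k, cos, sin)
```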
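The quantization hunk names specific AutoGPTQ settings. The sketch below shows how those settings are typically passed to AutoGPTQ; the directory paths and calibration text are placeholders, and only `bits`, `group_size`, and `desc_act` come from the README. This is not the author's quantization script.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

merged_model_dir = "path/to/merged-airoboros-33b-8k"  # placeholder, not the actual path
quantized_model_dir = "path/to/gptq-output"           # placeholder

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit weights (from the README)
    group_size=128,  # quantization group size (from the README)
    desc_act=True,   # activation-order ("act-order") quantization (from the README)
)

tokenizer = AutoTokenizer.from_pretrained(merged_model_dir, use_fast=False)
model = AutoGPTQForCausalLM.from_pretrained(merged_model_dir, quantize_config)

# Calibration data: a real run would use a larger, representative set of samples.
examples = [
    tokenizer("A chat between a curious user and an assistant.")
]

model.quantize(examples)
model.save_quantized(quantized_model_dir, use_safetensors=True)
tokenizer.save_pretrained(quantized_model_dir)
```

Setting `desc_act=True` (act-order) generally improves quantization accuracy, at the cost of slower inference with some older GPTQ kernels.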