elinas committed
Commit 0d85677
Parent(s): fed038a

Update README.md

Files changed (1)
  1. README.md +3 -9
README.md CHANGED
@@ -11,16 +11,10 @@ This LoRA trained for 3 epochs and has been converted to int4 (4bit) via GPTQ me
  Use the **safetensors** version of the model, the **pt** version is an old quantization that is no longer supported and will be removed in the future.
  See the repo below for more info.
 
- https://github.com/qwopqwop200/GPTQ-for-LLaMa
+ # Important - Update 2023-04-05
+ Recent GPTQ commits have introduced breaking changes to model loading, so you should use this fork for a stable experience: https://github.com/oobabooga/GPTQ-for-LLaMa
 
- # Important - Update 2023-04-03
- Recent GPTQ commits have introduced breaking changes to model loading and you should use commit `a6f363e3f93b9fb5c26064b5ac7ed58d22e3f773` in the `cuda` branch.
-
- If you're not familiar with the Git process:
- 1. `git checkout a6f363e3f93b9fb5c26064b5ac7ed58d22e3f773`
- 2. `git switch -c cuda-stable`
-
- This creates and switches to a `cuda-stable` branch to continue using the quantized models.
+ Currently only CUDA is supported.
 
  # Update 2023-03-27
  New weights have been added. The old .pt version is no longer supported and has been replaced by a 128 groupsize safetensors file. Update to the latest GPTQ to use it.
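
For readers switching over, a minimal shell sketch of moving to the fork named in the updated README. The clone directory is whatever git creates by default, and the build step is an assumption based on the usual GPTQ-for-LLaMa setup, not something stated in this commit; check the fork's own README for the authoritative steps.

```sh
# Clone the oobabooga fork recommended in the README update
git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git
cd GPTQ-for-LLaMa

# Build the CUDA kernel extension (assumed step; the README notes
# only CUDA is currently supported)
python setup_cuda.py install
```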