Commit 3691a73 by nenkoru (1 parent: d50e705)

Update README.md

Files changed (1): README.md (+1, -4)
README.md CHANGED

@@ -2,11 +2,8 @@
 license: other
 ---
 # alpaca-lora-7b
-This LoRA was trained for 3 epochs and has been converted to int4 via the GPTQ method. See the repo below for more info.
+This LoRA was trained for 3 epochs.
 
-https://github.com/qwopqwop200/GPTQ-for-LLaMa
-
----
 1. Exported to hf format using https://github.com/tloen/alpaca-lora (float32, no 8bit)
 2. Exported to ONNX format using the optimum library (https://github.com/huggingface/optimum/pull/922) (also see the fp32 repo)
 3. Loaded vanilla fp32 and then exported to ONNX using the optimum library (https://github.com/huggingface/optimum/pull/922) with this:
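
The command referenced in step 3 is not part of this hunk. As a rough sketch only, an fp32-to-ONNX export with the optimum library can look like the following; the checkpoint path and output directory are placeholders, not the author's actual invocation:

```python
# Sketch of an fp32 -> ONNX export using optimum's ONNX Runtime wrapper.
# Paths below are placeholders; the README's exact command is outside this diff.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model_path = "path/to/alpaca-lora-7b-hf"  # placeholder: merged fp32 checkpoint from step 1

# export=True converts the PyTorch checkpoint to ONNX while loading
# (older optimum releases used from_transformers=True instead).
model = ORTModelForCausalLM.from_pretrained(model_path, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Save the ONNX graph and tokenizer files to a local directory.
model.save_pretrained("alpaca-lora-7b-onnx")
tokenizer.save_pretrained("alpaca-lora-7b-onnx")
```

Once saved, the same wrapper can load the exported directory and run generation through ONNX Runtime via model.generate().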