Update README.md

README.md CHANGED

@@ -2,11 +2,8 @@
 license: other
 ---
 # alpaca-lora-7b
-This LoRA trained for 3 epochs
+This LoRA was trained for 3 epochs.
 
-https://github.com/qwopqwop200/GPTQ-for-LLaMa
-
----
 1. Exported to hf format using https://github.com/tloen/alpaca-lora (float32, no 8bit)
 2. Exported to ONNX format using the optimum library (https://github.com/huggingface/optimum/pull/922) (also see the fp32 repo)
 3. Loaded vanilla fp32 and then exported to ONNX using the optimum library (https://github.com/huggingface/optimum/pull/922) with this:
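
The command referenced by "with this:" is cut off in this view of the diff. As a hedged sketch only (not the author's actual command), the ONNX exporter introduced in the linked optimum PR is typically invoked from the command line; the checkpoint path and output directory below are hypothetical placeholders:

```shell
# Hedged sketch of the optimum ONNX exporter added in
# https://github.com/huggingface/optimum/pull/922; the paths below are
# placeholders, not the author's actual command.
# ./llama-7b-merged-fp32 : local HF-format fp32 checkpoint (from step 1)
# ./alpaca-lora-7b-onnx  : output directory for the exported ONNX model
python -m optimum.exporters.onnx --model ./llama-7b-merged-fp32 ./alpaca-lora-7b-onnx
```

Exporting from the merged fp32 checkpoint (rather than the 8-bit one) matches step 1's "float32, no 8bit" note; exact flags such as the task name may differ depending on the optimum version.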