Update README.md

README.md (changed):
A bilingual instruction-tuned LoRA model of https://huggingface.co/meta-llama/Llama-2-13b-hf

- Instruction-following datasets used: alpaca, alpaca-zh, open assistant
- Training framework: [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning)

Usage:
```python
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```
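The `generate` call above assumes that a tokenizer, model, and `TextStreamer` have already been created; those lines are not visible in this diff. Below is a self-contained sketch of the full flow, assuming standard `transformers` loading. The `build_prompt` helper and its chat template are hypothetical illustrations, not taken from the model card; check the card for the real prompt format.

```python
MODEL_ID = "hiyouga/Llama-2-Chinese-13b-chat"

def build_prompt(query: str) -> str:
    # Hypothetical chat template -- the model card defines the real format.
    return f"Human: {query}\nAssistant: "

def chat(query: str, max_new_tokens: int = 256) -> None:
    # Imported lazily so the sketch can be read/tested without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    streamer = TextStreamer(tokenizer, skip_prompt=True)

    inputs = tokenizer(build_prompt(query), return_tensors="pt")
    inputs = inputs.to("cuda")  # requires a CUDA GPU with room for 13B weights
    model.generate(**inputs, max_new_tokens=max_new_tokens, streamer=streamer)
```

The streamer prints tokens to stdout as they are generated, which is why the return value of `generate` is not used here.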
You can also launch a CLI demo using the script in [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning):

```bash
python src/cli_demo.py --model_name_or_path hiyouga/Llama-2-Chinese-13b-chat
```
---

The model is trained using the web UI of [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning).

![ui](ui.jpg)

---
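To reproduce a similar fine-tune, the framework's web UI can be started locally. The commands below are only a sketch: the entry-point script name and the requirements file are assumptions about the repository's layout and may differ between versions, so consult its README first.

```shell
# Clone the training framework and launch its web UI (paths may vary by version).
git clone https://github.com/hiyouga/LLaMA-Efficient-Tuning.git
cd LLaMA-Efficient-Tuning
pip install -r requirements.txt
python src/train_web.py  # assumed entry point; check the repo README
```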