abcdabcd987 committed
Commit 636b5eb
Parent(s): 8891961
Update README.md
README.md
CHANGED
@@ -13,13 +13,21 @@ tags:
 - generated_from_trainer
 ---

+## Punica
+
+Punica: Serving multiple LoRA finetuned LLMs at the cost of one
+
+Paper: <https://arxiv.org/abs/2310.18547>
+
+See <https://github.com/punica-ai/punica/tree/master/examples/finetune>
+
+## Model
+
 * Base Model: [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
 * LoRA target: `q_proj,k_proj,v_proj,o_proj,gate_proj,up_proj,down_proj`
 * LoRA rank: 16
 * Training epochs: 4

-See <https://github.com/punica-ai/punica/tree/master/examples/finetune>
-
 ### Training hyperparameters

 The following hyperparameters were used during training:
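For reference, a minimal sketch of how the adapter settings listed in the new "Model" section could be expressed as a Hugging Face PEFT `LoraConfig`. The rank, target modules, and base model come from the card above; `lora_alpha` and `lora_dropout` are not stated on the card, so the values below are placeholders, and the actual training follows the Punica finetune example linked above rather than this snippet.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Adapter settings taken from the model card; alpha/dropout are assumed placeholders.
lora_config = LoraConfig(
    r=16,  # LoRA rank listed on the card
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_alpha=16,      # not stated on the card; placeholder
    lora_dropout=0.05,  # not stated on the card; placeholder
    task_type="CAUSAL_LM",
)

# Wrap the listed base model with the adapter configuration.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```

Targeting all seven projection matrices (attention plus MLP) at rank 16 matches the card's metadata; the Punica repository's finetune example is the authoritative recipe for how this adapter was actually trained.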