soujanyaporia committed
Commit · a41ef79
Parent(s): b7fbc81
Update README.md
README.md CHANGED
@@ -42,4 +42,4 @@ As a result of this fine-tuning process, Flacuna exhibited notable performance i
 | Flacuna | 13B | 49.4 | 32.5 | 67.9 |
 
 
-During training, Flacuna employed a maximum input sequence length of 1280. We utilized LoRA for parameter-efficient fine-tuning.
+During training, Flacuna, a 13B checkpoint of LLaMA, employed a maximum input sequence length of 1280. We utilized LoRA for parameter-efficient fine-tuning.
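The commit does not include the training script itself, but a minimal sketch of such a LoRA setup with the Hugging Face `peft` library might look like the following. The base checkpoint name, LoRA rank, and target modules below are illustrative assumptions, not the authors' actual configuration; only the 1280-token maximum input length comes from the README.

```python
# Minimal sketch (assumed setup, not the authors' training script) of
# LoRA parameter-efficient fine-tuning with the peft library.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Assumed base checkpoint; the README only says "a 13B checkpoint of LLaMA".
base_model_name = "lmsys/vicuna-13b-v1.3"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# LoRA freezes the base weights and trains small low-rank adapter matrices.
# Rank, alpha, dropout, and target modules here are illustrative values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapters are a small fraction of 13B params

# Inputs are truncated to the 1280-token maximum stated in the README.
batch = tokenizer(
    "Example instruction ...",
    truncation=True,
    max_length=1280,
    return_tensors="pt",
)
```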