Update README.md
README.md CHANGED
@@ -21,14 +21,19 @@ tags:
 Mistral 7B-based model fine-tuned in Spanish for high-quality Spanish text generation.
 
 * Base model Mistral-7b
+
+* Two GGUF versions, int4 and int8, for fast inference on consumer hardware
+
+* Quantized using llama.cpp in int4 Q4_0 and int8 Q8_0
 
-* Based on the excellent
+* Based on the excellent udkai/Turdus fine-tune of Mistral
 
-* Fine-tuned in Spanish with a collection of poetry, books, Wikipedia articles
+* Fine-tuned in Spanish on a collection of texts: poetry, books, philosophy, and Wikipedia articles cleaned and prepared by the author.
+
+* Added some instruction data from the Dolly and Alpaca-es datasets.
 
 * Trained using LoRA and PEFT with INT8 quantization on 2 GPUs for several days.
 
-* Quantized using llama.cpp in int4 Q4_0 and int8 Q8_0
 
 ## Usage:
 * Any framework that uses GGUF format.