Commit 1b9c677 by ecastera (1 parent: 26c7789)

Update README.md

Files changed (1): README.md (+8 −3)
README.md CHANGED

```diff
@@ -21,14 +21,19 @@ tags:
 Mistral 7b-based model fine-tuned in Spanish to add high quality Spanish text generation.
 
 * Base model Mistral-7b
 
-* Based on the excelent job of fine-tuning base mistral from udkai/Turdus
+* Two GGUF versions, int4 and int8, for fast inference on consumer hardware
+
+* Quantized using llama.cpp in int4 Q4_0 and int8 Q8_0
+
+* Based on the excellent udkai/Turdus fine-tune of base Mistral
 
-* Fine-tuned in Spanish with a collection of poetry, books, wikipedia articles, phylosophy texts and dolly and alpaca-es datasets.
+* Fine-tuned in Spanish on a collection of texts: poetry, books, philosophy, and Wikipedia articles cleaned and prepared by the author.
+
+* Added instruction data from the dolly and alpaca-es datasets.
 
 * Trained using LoRA and PEFT with INT8 quantization on 2 GPUs for several days.
 
-* Quantized using llama.cpp in int4 Q4_0 and int8 Q8_0
 
 ## Usage:
 * Any framework that uses GGUF format.
```
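For a rough sense of what the Q4_0 and Q8_0 variants mentioned in the diff mean in practice, the sketch below estimates file size from llama.cpp's block layouts (Q4_0 packs 32 weights into 18 bytes, Q8_0 into 34 bytes). The 7.24B parameter count is an assumed approximation for Mistral-7B, not a figure from this commit, and real GGUF files add metadata and keep some tensors at higher precision.

```python
# Rough GGUF size estimate from llama.cpp block formats. A sketch:
# the 7.24e9 parameter count for Mistral-7B is an assumption, and
# actual files differ slightly (metadata, higher-precision tensors).

def bits_per_weight(block_bytes: int, block_size: int = 32) -> float:
    """llama.cpp quantizes weights in blocks of 32; bpw = bytes * 8 / weights."""
    return block_bytes * 8 / block_size

def est_size_gb(n_params: float, bpw: float) -> float:
    """Estimated file size in GB for n_params weights at bpw bits each."""
    return n_params * bpw / 8 / 1e9

Q4_0 = bits_per_weight(18)  # 16 bytes of 4-bit quants + fp16 scale -> 4.5 bpw
Q8_0 = bits_per_weight(34)  # 32 bytes of 8-bit quants + fp16 scale -> 8.5 bpw

n = 7.24e9  # assumed Mistral-7B parameter count
print(f"Q4_0: {Q4_0} bpw, ~{est_size_gb(n, Q4_0):.1f} GB")
print(f"Q8_0: {Q8_0} bpw, ~{est_size_gb(n, Q8_0):.1f} GB")
```

This is why the int4 file fits comfortably in consumer-hardware RAM while the int8 file roughly doubles the footprint in exchange for lower quantization error.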
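The card also says the model was trained with LoRA and PEFT under INT8 quantization. A hedged sketch of why that is feasible on 2 GPUs: LoRA freezes the base weights and trains only low-rank adapter pairs, so the trainable parameter count is tiny. The hidden/KV dimensions below are Mistral-7B's published shapes, but the rank of 16 and the q/v projection targets are illustrative assumptions, not the author's actual training config.

```python
# LoRA adds matrices A (r x d_in) and B (d_out x r) per target weight,
# so trainable params per target = r * (d_in + d_out). A sketch only:
# rank and target modules are assumptions, not this model's real config.

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters LoRA adds for one d_out x d_in weight matrix."""
    return r * (d_in + d_out)

hidden, n_layers, kv_dim = 4096, 32, 1024  # Mistral-7B dims (8 KV heads x 128)
r = 16  # assumed LoRA rank

# Assumed targets per layer: q_proj (4096 -> 4096) and v_proj (4096 -> 1024).
per_layer = lora_params(hidden, hidden, r) + lora_params(hidden, kv_dim, r)
total = per_layer * n_layers
print(f"trainable LoRA params: ~{total / 1e6:.1f}M (vs ~7,240M frozen base)")
```

Under these assumptions only a few million parameters receive gradients, which is what makes multi-day fine-tuning on two INT8-quantized GPUs practical.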