Agnuxo committed
Commit
4e4c734
1 Parent(s): aad0083

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +64 -19
README.md CHANGED
@@ -1,20 +1,23 @@
 ---
 license: apache-2.0
- datasets:
- - iamtarun/python_code_instructions_18k_alpaca
- - Vezora/Tested-143k-Python-Alpaca
- - jtatman/python-code-dataset-500k
- language:
- - es
- - en
- metrics:
- - glue
- base_model: Agnuxo/Qwen2_0.5B-Spanish_English_raspberry_pi_GGUF_16bit
 library_name: adapter-transformers
 ---

-
- # Uploaded model

 [<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1)
 - **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
@@ -24,14 +27,56 @@ library_name: adapter-transformers
 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
- # Fine-tuned Model: Agnuxo/Qwen2_0.5B-Spanish_English_F32
-
- This model was fine-tuned for the GLUE (SST-2) task and evaluated with the following results:
-
- - **Accuracy**: 0.4438
- - **Number of Parameters**: 494,034,560
- - **Required Memory**: 1.84 GB
-
- You can find more details on my [GitHub](https://github.com/Agnuxo1).
-
- Thank you for your interest in this model!
 
+
 ---
+ base_model: Agnuxo/Qwen2_0.5B
+ language: ['en', 'es']
 license: apache-2.0
+ tags: ['text-generation-inference', 'transformers', 'unsloth', 'mistral', 'gguf']
+ datasets: ['iamtarun/python_code_instructions_18k_alpaca', 'jtatman/python-code-dataset-500k', 'flytech/python-codes-25k', 'Vezora/Tested-143k-Python-Alpaca', 'codefuse-ai/CodeExercise-Python-27k', 'Vezora/Tested-22k-Python-Alpaca', 'mlabonne/Evol-Instruct-Python-26k']
 library_name: adapter-transformers
+ metrics:
+ - accuracy
+ - bertscore
+ - bleu
+ - comet
+ - glue
+ - google_bleu
+ - perplexity
+ - rouge
 ---

+ # Uploaded model

 [<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1)
 - **Developed by:** [Agnuxo](https://github.com/Agnuxo1)

 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

+ ## Benchmark Results
+
+ This model has been fine-tuned for various tasks and evaluated on the following benchmarks:
+
+ ### accuracy
+ **Accuracy:** Not Available
+
+ ![accuracy Accuracy](./accuracy_accuracy.png)
+
+ ### bertscore
+ **Bertscore:** Not Available
+
+ ![bertscore Bertscore](./bertscore_bertscore.png)
+
+ ### bleu
+ **Bleu:** Not Available
+
+ ![bleu Bleu](./bleu_bleu.png)
+
+ ### comet
+ **Comet:** Not Available
+
+ ![comet Comet](./comet_comet.png)
+
+ ### glue
+ **Glue:** Not Available
+
+ ![glue Glue](./glue_glue.png)
+
+ ### google_bleu
+ **Google_bleu:** Not Available
+
+ ![google_bleu Google_bleu](./google_bleu_google_bleu.png)
+
+ ### perplexity
+ **Perplexity:** Not Available
+
+ ![perplexity Perplexity](./perplexity_perplexity.png)
+
+ ### rouge
+ **Rouge:** Not Available
+
+ ![rouge Rouge](./rouge_rouge.png)
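Two of the metrics listed above (accuracy and perplexity) have simple closed-form definitions; as a minimal sketch in plain Python (these helper names are illustrative, not part of this repository or of Huggingface's `evaluate` library):

```python
import math

def accuracy(predictions, references):
    """Fraction of predictions that exactly match the references."""
    assert len(predictions) == len(references)
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

def perplexity(token_nlls):
    """Perplexity is exp of the mean negative log-likelihood per token."""
    return math.exp(sum(token_nlls) / len(token_nlls))

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
print(perplexity([0.0, 0.0]))                # 1.0 (a perfect model)
```

The corpus-level metrics (BLEU, ROUGE, BERTScore, COMET) involve n-gram or embedding comparisons and are better computed with a dedicated evaluation library.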
+
+ Model Size: 494,032,768 parameters
+ Required Memory: 1.84 GB
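The memory figure is consistent with storing every parameter as a 4-byte float32; a quick sanity check (the helper below is hypothetical, written only to verify the arithmetic):

```python
def fp32_footprint_gib(n_params, bytes_per_param=4):
    """Approximate in-memory weight size in GiB (1 GiB = 1024**3 bytes)."""
    return n_params * bytes_per_param / 1024**3

# 494,032,768 parameters * 4 bytes ~= 1.84 GiB, matching the card.
print(round(fp32_footprint_gib(494_032_768), 2))  # 1.84
```

Quantized GGUF variants (as the `gguf` tag suggests) would shrink this proportionally to the bits used per weight.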

+ For more details, visit my [GitHub](https://github.com/Agnuxo1).

+ Thanks for your interest in this model!