chat-1.0.0
README.md
CHANGED
@@ -18,7 +18,8 @@ model-index:
       value: 53.89
       name: strict accuracy
     source:
-      url:
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -33,7 +34,8 @@ model-index:
       value: 6.46
       name: normalized accuracy
     source:
-      url:
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -48,7 +50,8 @@ model-index:
       value: 3.25
       name: exact match
     source:
-      url:
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -60,10 +63,11 @@ model-index:
         num_few_shot: 0
     metrics:
     - type: acc_norm
-      value: 0
+      value: 0
       name: acc_norm
     source:
-      url:
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -78,7 +82,8 @@ model-index:
       value: 2.38
       name: acc_norm
     source:
-      url:
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -95,8 +100,19 @@ model-index:
       value: 5.91
       name: accuracy
     source:
-      url:
+      url: >-
+        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
       name: Open LLM Leaderboard
+datasets:
+- argilla/OpenHermesPreferences
+- argilla/magpie-ultra-v0.1
+- argilla/Capybara-Preferences-Filtered
+- mlabonne/open-perfectblend
+- HuggingFaceTB/everyday-conversations-llama3.1-2k
+- WizardLMTeam/WizardLM_evol_instruct_V2_196k
+- ProlificAI/social-reasoning-rlhf
+language:
+- en
 ---

 # MedIT SUN 2.4B
@@ -111,13 +127,11 @@ model-index:
 - Proprietary technique developed by MedIT Solutions

 **Fine-tuning**
-- Open
-- Open SFT datasets
+- Open (or open subsets) of open datasets from HF
+- Open (or open subsets) SFT datasets from HF

 **Training Status**
--
-- Current version: 1.0.0
-- Note: Model is still in the training phase
+- Current version: chat-1.0.0

 **Key Features**
 - Built on Llama 3.2 architecture
@@ -130,6 +144,10 @@ model-index:

 **Limitations**
 As the model is still in training, performance and capabilities may vary. Users should be aware that the model is not in its final form and may exhibit inconsistencies or limitations typical of in-progress AI models.
+
+**Disclaimer and Safety Considerations**
+The Model is designed to be used as a smart assistant but not as a knowledge source within your applications, systems, or environments. It is not intended to provide 100% accurate answers, especially in scenarios where high precision and accuracy are crucial.
+
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_meditsolutions__Llama-3.2-SUN-2.4B-v1.0.0)

@@ -141,5 +159,4 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |MATH Lvl 5 (4-Shot)| 3.25|
 |GPQA (0-shot) | 0.00|
 |MuSR (0-shot) | 2.38|
-|MMLU-PRO (5-shot) | 5.91|
-
+|MMLU-PRO (5-shot) | 5.91|
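
The updated card describes a Llama 3.2-based chat assistant published on the Hugging Face Hub. The snippet below is not part of this commit; it is a minimal usage sketch that assumes the standard `transformers` API, the repo id referenced by the leaderboard URLs in the metadata (`meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0`), and that the tokenizer ships with a chat template, as is typical for chat-tuned Llama 3.2 derivatives.

```python
# Illustrative only: not part of this commit. The repo id is taken from the
# leaderboard URLs in the card metadata; generation settings are arbitrary.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; drop it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The card positions the model as a smart assistant, so we assume a chat
# template is defined in the tokenizer config.
messages = [{"role": "user", "content": "Explain what an SFT dataset is in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

As the card's own limitations and disclaimer sections note, outputs from an in-training checkpoint should be treated as assistant-style suggestions rather than authoritative answers.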