# Model description
Heidrun-Mistral-7B-chat is a chat model based on [Heidrun-Mistral-7B-base](https://huggingface.co/Mabeck/Heidrun-Mistral-7B-base), finetuned on [danish-OpenHermes](https://huggingface.co/datasets/Mabeck/danish-OpenHermes) and [skoleGPT](https://huggingface.co/datasets/kobprof/skolegpt-instruct) for an instruction/chat format.

It is a new SOTA Danish open-source LLM and shows very strong performance in logic and reasoning tasks.

# Benchmarks

The following benchmarks have been tested using [ScandEval](https://github.com/ScandEval/ScandEval):

- **DANSK**: 50.77% ± 2.29% / 34.05% ± 1.78%, ranks 3rd=
- **Hellaswag-da**: 29.18% ± 0.99% / 46.64% ± 0.76%, ranks 4th

Further evaluations will be tested.

# Datasets
This model is trained on the Danish instruction datasets [danish-OpenHermes](https://huggingface.co/datasets/Mabeck/danish-OpenHermes) and [skoleGPT](https://huggingface.co/datasets/kobprof/skolegpt-instruct), which have not been safeguarded or aligned.