Update README.md
README.md CHANGED
@@ -4,11 +4,19 @@ tags:
 - merge
 - mergekit
 - lazymergekit
+metrics:
+- code_eval
+- accuracy
 ---
 
 # Mnemosyne-7B
 
-Mnemosyne-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
+Mnemosyne-7B is an experimental large language model (LLM) created by merging several pre-trained models, intended for informative and educational purposes. It combines the strengths of these models in the hope of achieving a highly informative and comprehensive LLM.
+
+### Important Note:
+
+This is an experimental model, and its performance and capabilities are not guaranteed. Further testing and evaluation are required to assess its effectiveness.
+
 
 ## 🧩 Configuration
 
@@ -21,4 +29,6 @@ merge_method: model_stock
 base_model: mistralai/Mistral-7B-Instruct-v0.2
 dtype: bfloat16
 
-```
+```
+
+Mnemosyne-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):