---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
metrics:
- code_eval
- accuracy
---

# Mnemosyne-7B

Mnemosyne-7B is an experimental large language model (LLM) created by merging several pre-trained models, each designed for informative and educational purposes. It combines the strengths of these models in the hope of producing a highly informative and comprehensive LLM.

GGUF: https://huggingface.co/mradermacher/Mnemosyne-7B-GGUF
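
For a quick local test of a GGUF quant, a minimal sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) follows; the quant filename is an assumption, so check the GGUF repository above for the files actually published:

```python
# Minimal sketch: run a GGUF quant of Mnemosyne-7B locally with llama-cpp-python.
# The filename below is an assumption; pick an actual file from the GGUF repo.
from llama_cpp import Llama

llm = Llama(model_path="Mnemosyne-7B.Q4_K_M.gguf", n_ctx=4096)

# Mistral-Instruct prompt format, since the base model is
# mistralai/Mistral-7B-Instruct-v0.2.
prompt = "[INST] Summarize the Pythagorean theorem in one paragraph. [/INST]"
result = llm(prompt, max_tokens=256, stop=["</s>"])
print(result["choices"][0]["text"])
```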

### Important Note:

This is an experimental model, and its performance and capabilities are not guaranteed. Further testing and evaluation are required to assess its effectiveness.


## 🧩 Configuration

```yaml
models:
  - model: MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
  - model: openbmb/Eurus-7b-kto
  - model: Weyaxi/Newton-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
```
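
The merge itself should be reproducible with [mergekit](https://github.com/cg123/mergekit). Below is a minimal sketch using mergekit's Python API, assuming a recent mergekit release (the CLI equivalent is a one-liner, shown in the comment):

```python
# Sketch: reproduce the merge from config.yaml with mergekit's Python API.
# Assumes a recent mergekit release; the CLI equivalent is:
#   mergekit-yaml config.yaml ./Mnemosyne-7B
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Mnemosyne-7B",
    options=MergeOptions(cuda=False, copy_tokenizer=True, lazy_unpickle=True),
)
```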

Mnemosyne-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):

* [MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2)
* [openbmb/Eurus-7b-kto](https://huggingface.co/openbmb/Eurus-7b-kto)
* [Weyaxi/Newton-7B](https://huggingface.co/Weyaxi/Newton-7B)

with [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as the base model.
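
For inference, the merged model should load like any Mistral-based instruct model with 🤗 Transformers. A sketch follows; the model id is a placeholder for the local path or Hub repository, and the generation settings are illustrative:

```python
# Sketch: load the merged model with Transformers and generate a reply.
# "path/to/Mnemosyne-7B" is a placeholder for the local path or Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/Mnemosyne-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain model merging in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```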