---
license: mit
datasets:
- vicgalle/alpaca-gpt4
- BelleGroup/train_1M_CN
- stingning/ultrachat
- HuggingFaceH4/no_robots
- Open-Orca/OpenOrca
language:
- zh
- en
pipeline_tag: conversational
tags:
- Mistral
---
# Zephyr-8x7b: Zephyr Models, but Mixtral 8x7B

We present Zephyr-8x7b, a Mixtral 8x7B MoE model trained with SFT only on a corpus of nearly four million conversations.

It demonstrates strong contextual understanding, reasoning, and alignment with human values without preference-optimization techniques such as DPO, and we invite you to join our exploration!
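
As a quick start, below is a minimal inference sketch using 🤗 Transformers. The repository id (`Zephyr-8x7b`) is a placeholder for this model's actual Hub path, and the loading settings (`bfloat16`, `device_map="auto"`) and use of the tokenizer's chat template are assumptions typical for Mixtral-scale chat models, not verified settings for this checkpoint.

```python
# Minimal inference sketch (assumes this model ships a standard chat
# template; the repo id below is a placeholder, not the confirmed path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Zephyr-8x7b"  # placeholder: replace with the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Mixtral 8x7B is large; bf16 + auto sharding assumed
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain what a mixture-of-experts model is."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```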