---
license: mit
library_name: transformers
datasets:
- HuggingFaceH4/deita-10k-v0-sft
base_model: mistralai/Mistral-7B-v0.1
---

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/llm_surgery/gemma-zephyr)

# Mistral 7B Zephyr SFT V2

The [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) SFT recipe applied on top of Mistral 7B, using an updated recipe with the ChatML chat format.
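
Below is a minimal inference sketch, assuming the tokenizer ships the ChatML chat template described above; `model_id` is a placeholder for this repo's id, and the generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/mistral-7b-zephyr-sft"  # placeholder: replace with this repo's actual id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain supervised fine-tuning in one sentence."},
]
# apply_chat_template renders the ChatML-style prompt the model was trained on
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```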

## Model description

- **Model type:** A 7.2B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)

## Recipe

We trained using the [alignment handbook recipe](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_sft.py), logging metrics to W&B.
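
For reference, here is a minimal sketch of the SFT step that script performs, built on TRL (which the alignment handbook uses under the hood). The split name follows H4 dataset conventions, and the output directory and hyperparameters are illustrative, not this run's actual config.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset from this card's metadata; "train_sft" split name is an assumption
dataset = load_dataset("HuggingFaceH4/deita-10k-v0-sft", split="train_sft")

training_args = SFTConfig(
    output_dir="mistral-7b-zephyr-sft",  # hypothetical output directory
    report_to="wandb",                   # mirror the recipe's W&B logging
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",   # base model from this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```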

Visit the [W&B workspace here](https://wandb.ai/llm_surgery/gemma-zephyr?nw=nwusercapecape).

## Compute

Provided by Lambda Labs: a single 8×A100 (80 GB) node.