
Mixtral-8x22B-v0.1-Instruct-sft-en-de

A full supervised fine-tune (SFT) of mistral-community/Mixtral-8x22B-v0.1 on a mix of English and German instruction data.
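For illustration, a minimal sketch of the Mistral-style instruct prompt format this SFT presumably targets. The `[INST]` template shown here is an assumption based on the base model family; verify it against the tokenizer's `chat_template` before relying on it.

```python
# Hypothetical helper sketching the Mistral [INST] prompt format.
# Assumption: this SFT follows the standard Mistral instruct template;
# check the model tokenizer's chat_template to confirm.
def build_prompt(turns):
    """turns: list of (user, assistant) pairs; assistant may be None
    for the final, yet-to-be-generated turn."""
    prompt = "<s>"
    for user, assistant in turns:
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            prompt += f" {assistant}</s>"
    return prompt

# Example: a single German user turn awaiting a completion.
print(build_prompt([("Hallo, wie geht's?", None)]))
```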

There is also an ORPO-trained version: maxidl/Mixtral-8x22B-v0.1-capybara-orpo-en-de

Dataset

| source | #examples |
|---|---|
| teknium/OpenHermes-2.5 | 1,001,551 |
| maxidl/OpenOrca-gpt4-de | 119,559 |
| maxidl/MathInstruct-de | 56,793 |
| maxidl/Capybara-de | 15,991 |
| maxidl/math-prm-800k-de | 12,298 |
| maxidl/wikihow-de | 10,103 |
| maxidl/no_robots-de | 9,500 |
| maxidl/lima-de | 1,030 |
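The per-source counts above can be totaled as a quick sanity check; the arithmetic below uses only the numbers in the table.

```python
# Example counts per source dataset, copied from the table above.
counts = {
    "teknium/OpenHermes-2.5": 1_001_551,
    "maxidl/OpenOrca-gpt4-de": 119_559,
    "maxidl/MathInstruct-de": 56_793,
    "maxidl/Capybara-de": 15_991,
    "maxidl/math-prm-800k-de": 12_298,
    "maxidl/wikihow-de": 10_103,
    "maxidl/no_robots-de": 9_500,
    "maxidl/lima-de": 1_030,
}
total = sum(counts.values())
print(total)  # 1226825 examples (~1.23M) in the training mix
```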

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 64
  • total_train_batch_size: 64
  • total_eval_batch_size: 512
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 3
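The reported total_train_batch_size is consistent with the per-device batch size replicated across the 64 GPUs. A minimal check, assuming gradient_accumulation_steps of 1 (this value is not stated in the card):

```python
# Sanity check: effective batch = per-device batch * num_devices * grad_accum.
train_batch_size = 1             # per-device, from the list above
num_devices = 64                 # from the list above
gradient_accumulation_steps = 1  # assumed; not stated in the card
total_train_batch_size = (
    train_batch_size * num_devices * gradient_accumulation_steps
)
print(total_train_batch_size)  # 64, matching the value reported above
```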

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.1.2
  • Datasets 2.18.0
  • Tokenizers 0.15.2
Model size: 141B params (BF16, safetensors)
