Mistral-Finetuned-DialogSumm

This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the DialogSumm dataset.
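A minimal usage sketch follows. It assumes the repository hosts a PEFT adapter on top of the GPTQ base model (as suggested by the card's metadata), that `auto-gptq` and `peft` are installed, and that the prompt follows Mistral-Instruct's `[INST] ... [/INST]` template; the prompt wording itself is illustrative, not the documented training format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
adapter_id = "Villian7/Mistral-Finetuned-DialogSumm"  # this repository

# Load the quantized base model (requires auto-gptq / optimum) and
# attach the fine-tuned adapter weights on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt; the exact template used during fine-tuning is
# not documented, so this follows the Mistral-Instruct convention.
dialogue = "#Person1#: Hi, how are you? #Person2#: Fine, thanks!"
prompt = f"[INST] Summarize the following dialogue:\n{dialogue} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```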

Model description

A parameter-efficient (PEFT) fine-tune of the GPTQ-quantized Mistral-7B-Instruct-v0.1 base, trained to produce abstractive summaries of multi-turn dialogues from the DialogSumm dataset.

Intended uses & limitations

The model is intended for summarizing conversational text of the kind found in DialogSumm. It was trained for only 250 steps and no evaluation results are reported, so quality on out-of-domain dialogues is untested.

Training and evaluation data

The model was fine-tuned on the DialogSumm dataset, which pairs multi-turn dialogues with human-written reference summaries.
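
For reference, one way to load the data with the `datasets` library. The card does not state which Hub copy of the dataset was used, so the id `knkarthick/dialogsum` (a public copy of DialogSum) is an assumption here.

```python
from datasets import load_dataset

# Assumed dataset id; the card does not specify which Hub copy was used.
dataset = load_dataset("knkarthick/dialogsum")

# Each record pairs a multi-turn dialogue with a reference summary.
example = dataset["train"][0]
print(example["dialogue"][:200])
print(example["summary"])
```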

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • training_steps: 250
  • mixed_precision_training: Native AMP
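
The training script is not published; the sketch below shows how the values above map onto `transformers.TrainingArguments`. The output directory is an assumption, and the dataset, model, and `Trainer` setup are omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral-Finetuned-DialogSumm",  # assumed output path
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    # Transformers' default AdamW optimizer matches the stated
    # betas=(0.9, 0.999) and epsilon=1e-08.
    lr_scheduler_type="cosine",
    max_steps=250,
    fp16=True,  # mixed_precision_training: Native AMP
)
```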

Training results

No evaluation results are reported for this fine-tuning run.

Framework versions

  • Transformers 4.35.0.dev0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1