---
license: apache-2.0
datasets:
- 40umov/dostoevsky
language:
- ru
base_model:
- Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
library_name: unsloth
---

Vikhr-Nemo fine-tuned on contrastive pairs of Russian literature (Dostoevsky) with ORPO.

* Base model: https://huggingface.co/Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
* Dataset: https://huggingface.co/datasets/40umov/dostoevsky
* Method: [ORPO](https://arxiv.org/abs/2403.07691)
* Training config: https://github.com/IlyaGusev/saiga/blob/main/configs/models/doestoevsky_nemo_12b_orpo_m1.json
* WandB: https://wandb.ai/ilyagusev/rulm_self_instruct/runs/4v4pcgej
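
A minimal inference sketch with `transformers`, assuming the weights in this repository load directly via `AutoModelForCausalLM`; `MODEL_ID` is a placeholder for this repository's id, not a name from the original card:

```python
# Minimal inference sketch (assumption: this repo's weights load directly
# with transformers; MODEL_ID is a placeholder for this repository's id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<this-repository-id>"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# The base Vikhr-Nemo model is instruction-tuned, so use its chat template.
messages = [{"role": "user", "content": "Расскажи о Петербурге в стиле Достоевского."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```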