Yi-1.5-34B-32K fine-tuned via SFT on adamo1139/uninstruct-v1-experimental-chatml, then trained via ORPO on adamo1139/rawrr_v2-2_stage1. This is an attempt to undo the synthetic SFT contamination present in the original Yi-1.5-34B-32K.
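
For reference, the two-stage pipeline corresponds roughly to the TRL sketch below. This is a minimal sketch under stated assumptions, not the actual training recipe: the hyperparameters, dataset column names, and the use of TRL's `SFTTrainer`/`ORPOTrainer` are all assumptions, since the real training scripts are not part of this card.

```python
# Hypothetical reproduction of the two-stage pipeline described above,
# using the open-source TRL library. Hyperparameters and column names
# are placeholder assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, ORPOConfig, ORPOTrainer

BASE = "01-ai/Yi-1.5-34B-32K"
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Stage 1: SFT on the uninstruct dataset (assumes a "text" column holding
# ChatML-formatted samples, as the dataset name suggests).
sft_data = load_dataset("adamo1139/uninstruct-v1-experimental-chatml", split="train")
sft_trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="yi-34b-uninstruct-sft", max_seq_length=4096),
    train_dataset=sft_data,
    tokenizer=tokenizer,  # newer TRL versions name this argument processing_class
)
sft_trainer.train()

# Stage 2: ORPO on rawrr_v2-2_stage1 (assumes prompt/chosen/rejected
# columns, the format ORPOTrainer expects).
orpo_data = load_dataset("adamo1139/rawrr_v2-2_stage1", split="train")
orpo_trainer = ORPOTrainer(
    model=sft_trainer.model,
    args=ORPOConfig(output_dir="yi-34b-uninstruct-orpo", beta=0.1, max_length=4096),
    train_dataset=orpo_data,
    tokenizer=tokenizer,
)
orpo_trainer.train()
```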

Next up:

- Cleaning and releasing the AEZAKMI v4 dataset.
- Training this model on it, possibly mixing in some toxic-dpo-natural if needed.
- Releasing the result.
