# Thestral-0.1-tr-chat-7B
Thestral-0.1-tr-chat-7B is a fully fine-tuned version of mistralai/Mistral-7B-v0.1, trained on diverse Turkish datasets using axolotl. The training data consists primarily of Turkish translations of the teknium/OpenHermes-2.5 and Open-Orca/SlimOrca datasets.
## Axolotl config

axolotl version: `0.4.0`

```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: NovusResearch/OpenHermes-2.5-Translated-TR-sharegpt-style
    type: sharegpt
    conversation: chatml
  - path: data/merged_all.json
    ds_type: json
    type: sharegpt
    conversation: chatml
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./out
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project: full_finetune
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 0
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "<|im_end|>"
  unk_token: "<unk>"
tokens:
  - "<|im_start|>"
```
## 🎯 OpenLLMTurkishLeaderboard
| Metric | Value |
|---|---|
| Avg. | 36.41 |
| AI2 Reasoning Challenge | 27.24 |
| HellaSwag | 33.93 |
| MMLU | 40.64 |
| TruthfulQA | 47.90 |
| Winogrande | 50.86 |
| GSM8k | 17.91 |
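Avg. is the unweighted mean of the six task scores: (27.24 + 33.93 + 40.64 + 47.90 + 50.86 + 17.91) / 6 ≈ 36.41.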