  • Developed by: notbdq

  • License: apache-2.0

  • This model is mistral-7b-instruct-v0.2 fine-tuned on the merve/turkish_instructions dataset.

  • Instruct format (a Turkish Alpaca-style template; in English, roughly: "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request."):

"Aşağıda bir görevi tanımlayan bir talimat ve daha fazla bağlam sağlayan bir girdi bulunmaktadır. Talebi uygun şekilde tamamlayan bir yanıt yazın.\n\n### Talimat:\n{}\n\n### Girdi:\n{}\n\n### Yanıt:\n{}"
  • Example inference code:
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("notbdq/mistral-turkish-v2")
tokenizer = AutoTokenizer.from_pretrained("notbdq/mistral-turkish-v2")

messages = [
    {"role": "user", "content": "Yapay zeka nasıl bulundu?"},  # "How was artificial intelligence invented?"
]

# Build the prompt token ids from the conversation using the tokenizer's chat template
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

# Sample up to 1000 new tokens and decode the full sequence (prompt + completion)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
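Loaded as above, the weights are materialized in float32 and need roughly 29 GB of GPU memory (7.24B params × 4 bytes). Since the checkpoint is stored in BF16, loading it in bfloat16 roughly halves that. A minimal sketch, assuming a CUDA GPU and that the accelerate package is installed for device_map="auto":

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: load the BF16 checkpoint in bfloat16 instead of the default float32.
# device_map="auto" (requires the accelerate package) places the weights on the
# available GPU(s), so no explicit model.to(device) call is needed.
model = AutoModelForCausalLM.from_pretrained(
    "notbdq/mistral-turkish-v2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("notbdq/mistral-turkish-v2")

messages = [
    {"role": "user", "content": "Türkiye'nin başkenti neresidir?"},  # hypothetical question: "What is the capital of Turkey?"
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
generated_ids = model.generate(inputs, max_new_tokens=200, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])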
  • Model size: 7.24B params (Safetensors, BF16)
Model tree for notbdq/mistral-turkish-v2
  • Quantizations: 1 model

Dataset used to train notbdq/mistral-turkish-v2
  • merve/turkish_instructions
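The fine-tuning data can be pulled from the Hub with the datasets library for inspection. A minimal sketch; the split and column names are not verified here, so the code prints the dataset structure rather than assuming specific fields:

from datasets import load_dataset

# Sketch: download merve/turkish_instructions and look at its structure.
ds = load_dataset("merve/turkish_instructions")
print(ds)                              # available splits, features, row counts

first_split = next(iter(ds.values()))  # avoid assuming a split name
print(first_split[0])                  # first example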