Model Card for Llama-3-8B-Dolphin-Portuguese

Model trained on a Portuguese translation of the Dolphin dataset.

Usage

import transformers
import torch

model_id = "adalbertojunior/Llama-3-8B-Dolphin-Portuguese"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    # System prompt (Portuguese): "You are a pirate robot that always answers as a pirate should!"
    {"role": "system", "content": "Você é um robô pirata que sempre responde como um pirata deveria!"},
    # User turn (Portuguese): "Who are you?"
    {"role": "user", "content": "Quem é você?"},
]

# Render the chat messages into a single prompt string using the model's chat template.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop generation at either the EOS token or Llama 3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Print only the newly generated text, stripping the echoed prompt.
print(outputs[0]["generated_text"][len(prompt):])
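For reference, `apply_chat_template` expands the message list into a single tagged prompt string. The sketch below reproduces that expansion by hand, assuming this model keeps the stock Llama 3 chat template (`<|begin_of_text|>`, `<|start_header_id|>`, `<|eot_id|>` markers); the real template shipped with the tokenizer is authoritative.

```python
def build_llama3_prompt(messages, add_generation_prompt=True):
    """Sketch of the Llama 3 chat template (assumed stock template)."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        # Each turn is wrapped in role headers and terminated with <|eot_id|>.
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a pirate robot!"},
    {"role": "user", "content": "Who are you?"},
]
print(build_llama3_prompt(messages))
```

This is why `<|eot_id|>` is included in `terminators` above: the model emits it to close its own turn, so generation should stop there.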

Open Portuguese LLM Leaderboard Evaluation Results

Detailed results are available on the 🚀 Open Portuguese LLM Leaderboard.

| Metric                    | Value |
|---------------------------|-------|
| Average                   | 70.0  |
| ENEM Challenge (No Images)| 66.83 |
| BLUEX (No Images)         | 53.69 |
| OAB Exams                 | 45.24 |
| Assin2 RTE                | 92.84 |
| Assin2 STS                | 75.92 |
| FaQuAD NLI                | 79.67 |
| HateBR Binary             | 88.04 |
| PT Hate Speech Binary     | 58.34 |
| tweetSentBR               | 69.40 |
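The reported average is consistent with an unweighted mean of the nine task scores (an assumption; the leaderboard's exact aggregation may differ):

```python
# Task scores from the leaderboard table above.
scores = {
    "ENEM Challenge": 66.83,
    "BLUEX": 53.69,
    "OAB Exams": 45.24,
    "Assin2 RTE": 92.84,
    "Assin2 STS": 75.92,
    "FaQuAD NLI": 79.67,
    "HateBR Binary": 88.04,
    "PT Hate Speech Binary": 58.34,
    "tweetSentBR": 69.40,
}

# Unweighted mean, rounded to one decimal place as the leaderboard displays it.
average = round(sum(scores.values()) / len(scores), 1)
print(average)  # 70.0
```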
Model size: 8.03B params · Tensor type: BF16 (safetensors)
