Training procedure

The following bitsandbytes quantization config was used during training (a sketch of the equivalent `BitsAndBytesConfig` follows the list):

  • quant_method: bitsandbytes
  • _load_in_8bit: False
  • _load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: float16
  • bnb_4bit_quant_storage: uint8
  • load_in_4bit: True
  • load_in_8bit: False
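
The sketch below reconstructs this configuration with the `BitsAndBytesConfig` class from transformers; it is an illustration based on the values listed above, not the exact training script. It assumes a recent transformers release that accepts the `bnb_4bit_quant_storage` argument; the private `_load_in_*` fields are set through the public `load_in_4bit` / `load_in_8bit` arguments, and `quant_method` is filled in automatically.

```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of the 4-bit NF4 (QLoRA-style) quantization config listed above.
# bnb_4bit_quant_storage requires a recent transformers version.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    load_in_8bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_storage="uint8",
)
```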

Framework versions

  • PEFT 0.5.0
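
A minimal usage sketch for loading the adapter with PEFT is shown below. The base checkpoint name (`mistralai/Mistral-7B-v0.1`) is an assumption inferred from the model name; the actual base model is recorded in the adapter's `adapter_config.json`. The abbreviated quantization config mirrors the full sketch above.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Abbreviated 4-bit config; see the full sketch in "Training procedure".
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# ASSUMPTION: Mistral-7B-v0.1 as the base model; verify against adapter_config.json.
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach the QLoRA adapter weights on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, "recogna-nlp/mistralbode_7b_qlora_ultraalpaca")
model.eval()
```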

Open Portuguese LLM Leaderboard Evaluation Results

Detailed results can be found here and on the 🚀 Open Portuguese LLM Leaderboard.

| Metric                     | Value |
|----------------------------|-------|
| Average                    | 63.57 |
| ENEM Challenge (No Images) | 56.82 |
| BLUEX (No Images)          | 47.15 |
| OAB Exams                  | 36.31 |
| Assin2 RTE                 | 88.92 |
| Assin2 STS                 | 76.37 |
| FaQuAD NLI                 | 67.17 |
| HateBR Binary              | 82.02 |
| PT Hate Speech Binary      | 69.24 |
| tweetSentBR                | 48.14 |