GSM8K (5-shot, limit 250), evaluated with lm-evaluation-harness on the vLLM backend.

Original SFT checkpoint:

vllm (pretrained=/root/autodl-tmp/SauerkrautLM-v2-14b-SFT,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

| Tasks | Version | Filter           | n-shot | Metric        | Value |   Stderr |
|-------|--------:|------------------|-------:|---------------|------:|---------:|
| gsm8k |       3 | flexible-extract |      5 | exact_match ↑ |  0.88 | ± 0.0206 |
| gsm8k |       3 | strict-match     |      5 | exact_match ↑ |  0.86 | ± 0.0220 |

Quantized checkpoint (this model):

vllm (pretrained=/root/autodl-tmp/output,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

| Tasks | Version | Filter           | n-shot | Metric        | Value |   Stderr |
|-------|--------:|------------------|-------:|---------------|------:|---------:|
| gsm8k |       3 | flexible-extract |      5 | exact_match ↑ | 0.908 | ± 0.0183 |
| gsm8k |       3 | strict-match     |      5 | exact_match ↑ | 0.900 | ± 0.0190 |
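
Both runs above can be reproduced with lm-evaluation-harness's Python API and the vLLM backend. Below is a minimal sketch for the quantized model, with the published Hub ID substituted for the local /root/autodl-tmp/output path; the remaining arguments mirror the run headers.

```python
# Minimal reproduction sketch (lm-evaluation-harness + vLLM backend).
# The Hub ID stands in for the local path used in the runs above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=noneUsername/SauerkrautLM-v2-14b-SFT-W8A8-Dynamic-Per-Token,"
        "add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16"
    ),
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=250,          # same 250-sample limit as the runs above
    batch_size="auto",
)

print(results["results"]["gsm8k"])  # exact_match for flexible-extract and strict-match
```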
Safetensors · Model size: 14.8B params · Tensor types: BF16, I8
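
The I8 tensor type and the repository name point to INT8 weights with dynamic per-token INT8 activation quantization (W8A8). The card does not document how the checkpoint was produced; the sketch below only illustrates one common route via llm-compressor, and the calibration dataset, sample count, and import path are assumptions.

```python
# Illustrative only: one way to produce a W8A8 (INT8 weight, dynamic per-token
# INT8 activation) checkpoint with llm-compressor.  Dataset, sample count, and
# output path are assumptions, not the settings used for this repository.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier

recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

oneshot(
    model="/root/autodl-tmp/SauerkrautLM-v2-14b-SFT",  # original SFT checkpoint (path from the runs above)
    dataset="open_platypus",                           # assumed calibration set
    recipe=recipe,
    output_dir="/root/autodl-tmp/output",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```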

Model tree for noneUsername/SauerkrautLM-v2-14b-SFT-W8A8-Dynamic-Per-Token

Base model: Qwen/Qwen2.5-14B → finetuned → this model
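
The evaluation runs above already load the quantized weights through vLLM, so the checkpoint can be served the same way for inference. A minimal sketch with the same engine settings (the prompt is a placeholder):

```python
# Minimal inference sketch with vLLM, using the engine settings from the
# evaluation runs above.  Adjust tensor_parallel_size to the available GPUs.
from vllm import LLM, SamplingParams

llm = LLM(
    model="noneUsername/SauerkrautLM-v2-14b-SFT-W8A8-Dynamic-Per-Token",
    tensor_parallel_size=2,
    max_model_len=2048,
    dtype="bfloat16",
)

sampling = SamplingParams(temperature=0.0, max_tokens=256)
outputs = llm.generate(
    ["Question: A train travels 60 km in 45 minutes. What is its average speed in km/h? Answer:"],
    sampling,
)
print(outputs[0].outputs[0].text)
```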