---
base_model:
- IlyaGusev/saiga_nemo_12b
---

Consider using noneUsername/saiga_nemo_12b-W8A8-Dynamic-Per-Token-better instead. I tweaked the quantization parameters there to get better results.

vllm (pretrained=/root/autodl-tmp/saiga_nemo_12b,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.808|±  |0.0250|
|     |       |strict-match    |     5|exact_match|↑  |0.760|±  |0.0271|

vllm (pretrained=/root/autodl-tmp/output,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.792|±  |0.0257|
|     |       |strict-match    |     5|exact_match|↑  |0.768|±  |0.0268|
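
The header lines above match the output format of EleutherAI's lm-evaluation-harness with the vLLM backend. Assuming that harness was used (which the log format suggests but the card does not state explicitly), a command like the following would reproduce the first run; the model path and arguments are copied from the log header, and the run requires a GPU host with the model weights present:

```shell
# Hedged sketch: reproduce the GSM8K eval above with lm-evaluation-harness
# (pip install "lm_eval[vllm]"). Arguments are taken from the log header;
# the local path /root/autodl-tmp/saiga_nemo_12b must contain the weights.
lm_eval --model vllm \
  --model_args pretrained=/root/autodl-tmp/saiga_nemo_12b,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048 \
  --tasks gsm8k \
  --num_fewshot 5 \
  --limit 250 \
  --batch_size auto
```

Note that `--limit 250` evaluates only the first 250 GSM8K samples, which explains the relatively wide ±0.025 standard errors in the tables.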