Datasets:

Model (CoT) | TheoremQA | MATH | GSM | GPQA | MMLU-STEM |
---|---|---|---|---|---|
[Mistral-7B-v0.2-base](https://huggingface.co/TIGER-Lab/Mistral-7B-Base-V0.2) | 19.2 | 10.2 | 36.2 | 24.7 | 50.1 |
[Mixtral-8x7B-base](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) | 23.2 | 28.4 | 74.4 | 29.2 | 59.7 |
[Mixtral-8x7B-instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 25.3 | 22.1 | 71.7 | 32.4 | 61.4 |
[Qwen-1.5-7B-base](https://huggingface.co/Qwen/Qwen1.5-7B) | 14.2 | 13.3 | 54.1 | 26.7 | 53.7 |
[Qwen-1.5-14B-base](https://huggingface.co/Qwen/Qwen1.5-14B) | 14.0 | 25.2 | 61.6 | 35.8 | 64.5 |
[Qwen-1.5-72B-base](https://huggingface.co/Qwen/Qwen1.5-72B) | 29.3 | 35.1 | 77.6 | 36.3 | 68.5 |
[Yi-6B-base](https://huggingface.co/01-ai/Yi-6B) | 12.0 | 5.8 | 32.6 | 20.7 | 46.9 |
[Yi-34B-base](https://huggingface.co/01-ai/Yi-34B) | 23.2 | 15.9 | 67.9 | 29.7 | 62.6 |
[ChatGLM3-6B-base](https://huggingface.co/THUDM/chatglm3-6b) | 11.3 | 25.7 | 72.3 | 28.7 | 42.4 |
[Gemma-7B-base](https://huggingface.co/google/gemma-7b) | 21.5 | 24.3 | 46.4 | 25.7 | 53.3 |
[LLaMA-2-13B-base](https://huggingface.co/meta-llama/Llama-2-13b) | 10.9 | 5.0 | 29.6 | 26.2 | 42.9 |
[Llemma-7B](https://huggingface.co/EleutherAI/llemma_7b) | 17.2 | 18.0 | 36.4 | 23.2 | 45.2 |
[Llemma-34B](https://huggingface.co/EleutherAI/llemma_34b) | 21.1 | 25.0 | 71.9 | 29.2 | 41.7 |
[InternLM2-7B](https://huggingface.co/internlm/internlm2-7b) | 7.8 | 20.2 | 70.8 | 22.7 | 34.3 |
[InternLM2-20B](https://huggingface.co/internlm/internlm2-20b) | 19.5 | 25.5 | 76.1 | 34.3 | 61.2 |
[Deepseek-7B-base](https://huggingface.co/deepseek-ai/deepseek-llm-7b-base) | 15.7 | 6.4 | 17.4 | 25.7 | 43.1 |
[Deepseek-67B-base](https://huggingface.co/deepseek-ai/deepseek-llm-67b-base) | 25.3 | 15.9 | 66.5 | 31.8 | 57.4 |
[GPT-4-turbo-0409](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) | 48.4 | 69.2 | 94.5 | 46.2 | 76.5 |
[InternLM-Math-7B](https://huggingface.co/internlm/internlm2-math-7b) | 13.2 | 34.6 | 78.1 | 22.7 | 41.1 |
[InternLM-Math-20B](https://huggingface.co/internlm/internlm2-math-20b) | 17.1 | 37.7 | 82.9 | 28.9 | 50.1 |
[Deepseek-Math-7B](https://huggingface.co/deepseek-ai/deepseek-math-7b-base) | 25.3 | 36.2 | 64.2 | 29.7 | 56.4 |
[Deepseek-Math-7B-Instruct](https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct) | 23.7 | 46.8 | 82.9 | 31.8 | 59.3 |
[WizardMath-7B-1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) | 11.7 | 33.0 | 83.2 | 28.7 | 52.7 |
[MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) | 16.5 | 28.2 | 77.7 | 30.8 | 51.3 |
[Abel-7B-002](https://huggingface.co/GAIR/Abel-7B-002) | 19.3 | 29.5 | 83.2 | 30.3 | 29.7 |
[OpenMath-Mistral-7B](https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1) | 13.1 | 9.1 | 24.5 | 26.5 | 43.7 |
[Rho-1-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 21.0 | 31.0 | 66.9 | 29.2 | 53.1 |
[LLaMA-3-8B-base](https://huggingface.co/meta-llama/Meta-Llama-3-8B) | 20.1 | 21.3 | 54.8 | 27.2 | 55.6 |
[LLaMA-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | 22.8 | 30.0 | 79.5 | 34.5 | 60.2 |
[LLaMA-3-70B-base](https://huggingface.co/meta-llama/Meta-Llama-3-70B) | 32.3 | 42.5 | 77.6 | 36.3 | 73.7 |
[LLaMA-3-70B-instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) | 42.0 | 47.4 | 92.0 | 48.9 | 79.0 |
[Qwen-1.5-110B-base](https://huggingface.co/Qwen/Qwen1.5-110B) | 34.9 | 49.6 | 85.4 | 35.9 | 73.4 |
[MAmmoTH2-8x7B-base](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 |
[MAmmoTH2-8x7B-plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) | 34.1 | 47.0 | 86.4 | 37.4 | 72.4 |
This dataset contains the evaluation results used for the Science Leaderboard.
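The table can also be loaded and queried programmatically. The sketch below is illustrative only: it assumes the Hugging Face `datasets` library, a single `train` split, and uses a placeholder repo id (this card does not state the actual one); the column names follow the table header above.

```python
# Minimal sketch: load the leaderboard table and rank models by MATH score.
# Assumptions: `datasets` and `pandas` are installed, the data lives in a
# "train" split, and DATASET_ID is a placeholder for this dataset's repo id.
from datasets import load_dataset

DATASET_ID = "TIGER-Lab/<this-dataset>"  # placeholder -- replace with the real repo id

ds = load_dataset(DATASET_ID, split="train")

# Convert to pandas; column names follow the table above:
# "Model (CoT)", "TheoremQA", "MATH", "GSM", "GPQA", "MMLU-STEM".
df = ds.to_pandas()
print(df.sort_values("MATH", ascending=False)[["Model (CoT)", "MATH", "GSM"]].head(10))
```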