Model,Accuracy
Qwen2-7B-Instruct,0.8285714285714286
Meta-Llama-3.1-8B-Instruct,0.4857142857142857
llama3-8b-cpt-sea-lionv2.1-instruct,0.5047619047619047
Qwen2_5_32B_Instruct,0.8476190476190476
Qwen2_5_7B_Instruct,0.8
Qwen2_5_1_5B_Instruct,0.5523809523809524
Qwen2-72B-Instruct,0.8285714285714286
Meta-Llama-3-8B-Instruct,0.4666666666666667
Meta-Llama-3.1-70B-Instruct,0.5428571428571428
Qwen2_5_3B_Instruct,0.7142857142857143
SeaLLMs-v3-7B-Chat,0.819047619047619
Qwen2_5_72B_Instruct,0.8761904761904762
gemma-2-9b-it,0.580952380952381
Meta-Llama-3-70B-Instruct,0.5333333333333333
Qwen2_5_14B_Instruct,0.8285714285714286
gemma2-9b-cpt-sea-lionv3-instruct,0.5904761904761905
gemma-2-2b-it,0.3619047619047619
llama3-8b-cpt-sea-lionv2-instruct,0.49523809523809526
cross_openhermes_llama3_8b_12288_inst,0.5523809523809524
Qwen2_5_0_5B_Instruct,0.3619047619047619
GPT4o_0513,0.8095238095238095