Model,Accuracy
Qwen2-7B-Instruct,0.8154859967051071
Meta-Llama-3.1-8B-Instruct,0.5777045579352005
Qwen2_5_32B_Instruct,0.9062786015010068
Qwen2_5_7B_Instruct,0.8652754896576972
Qwen2_5_1_5B_Instruct,0.6148636280431997
Qwen2-72B-Instruct,0.8887058392824455
Meta-Llama-3-8B-Instruct,0.6025993044114956
Meta-Llama-3.1-70B-Instruct,0.9026176093721399
Qwen2_5_3B_Instruct,0.7645982061138569
SeaLLMs-v3-7B-Chat,0.7159070107999268
Qwen2_5_72B_Instruct,0.9082921471718836
gemma-2-9b-it,0.9070107999267801
Meta-Llama-3-70B-Instruct,0.876807614863628
Qwen2_5_14B_Instruct,0.9079260479589969
gemma2-9b-cpt-sea-lionv3-instruct,0.9055464030752334
gemma-2-2b-it,0.7792421746293245
llama3-8b-cpt-sea-lionv2-instruct,0.6101043382756727
cross_openhermes_llama3_8b_12288_inst,0.8282994691561413
Qwen2_5_0_5B_Instruct,0.5464030752333883
GPT4o_0513,0.9304411495515285