Model,Accuracy
Qwen2-7B-Instruct,0.6542372881355932
Meta-Llama-3.1-8B-Instruct,0.40983050847457625
Qwen2_5_32B_Instruct,0.7742372881355932
Qwen2_5_7B_Instruct,0.6732203389830509
Qwen2_5_1_5B_Instruct,0.5135593220338983
Qwen2-72B-Instruct,0.7820338983050847
Meta-Llama-3-8B-Instruct,0.44033898305084745
Meta-Llama-3.1-70B-Instruct,0.6423728813559322
Qwen2_5_3B_Instruct,0.6145762711864406
SeaLLMs-v3-7B-Chat,0.5698305084745763
Qwen2_5_72B_Instruct,0.7684745762711864
gemma-2-9b-it,0.6189830508474576
Meta-Llama-3-70B-Instruct,0.5928813559322034
Qwen2_5_14B_Instruct,0.7538983050847458
gemma2-9b-cpt-sea-lionv3-instruct,0.6488135593220339
gemma-2-2b-it,0.43322033898305085
llama3-8b-cpt-sea-lionv2-instruct,0.45559322033898303
cross_openhermes_llama3_8b_12288_inst,0.5925423728813559
Qwen2_5_0_5B_Instruct,0.3847457627118644
GPT4o_0513,0.7308474576271187