Model,Accuracy
Qwen2-7B-Instruct,0.7867647058823529
Meta-Llama-3.1-8B-Instruct,0.6740196078431373
llama3-8b-cpt-sea-lionv2.1-instruct,0.5808823529411765
Qwen2_5_32B_Instruct,0.7745098039215687
Qwen2_5_7B_Instruct,0.7058823529411765
Qwen2_5_1_5B_Instruct,0.6838235294117647
Qwen2-72B-Instruct,0.8063725490196079
Meta-Llama-3-8B-Instruct,0.678921568627451
Meta-Llama-3.1-70B-Instruct,0.7696078431372549
Qwen2_5_3B_Instruct,0.5661764705882353
SeaLLMs-v3-7B-Chat,0.7475490196078431
Qwen2_5_72B_Instruct,0.8014705882352942
gemma-2-9b-it,0.7401960784313726
Meta-Llama-3-70B-Instruct,0.7598039215686274
Qwen2_5_14B_Instruct,0.7794117647058824
gemma2-9b-cpt-sea-lionv3-instruct,0.7794117647058824
gemma-2-2b-it,0.7083333333333334
llama3-8b-cpt-sea-lionv2-instruct,0.5833333333333334
cross_openhermes_llama3_8b_12288_inst,0.6985294117647058
Qwen2_5_0_5B_Instruct,0.5759803921568627
GPT4o_0513,0.7377450980392157