---
license: cc-by-nc-4.0
model-index:
- name: Mistral-11B-TestBench11
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 64.42
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench11
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.93
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench11
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.82
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench11
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 56.68
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench11
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.74
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench11
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 14.94
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench11
      name: Open LLM Leaderboard
---

I FUCKED UP, THIS MODEL IS MEANT TO BE A BFLOAT16 MODEL, I'M CURRENTLY REDOING IT THE CORRECT WAY (look at the recipe, it ends in float16, I'm so dumb lmao). It SHOULD be even better. I saw the problem after finetuning it, something was off. It's usable and it ranks the best, but it's not even on the right float... KEK

The fixed model should be here: [NeverSleep/Mistral-11B-OmniMix-bf16](https://huggingface.co/NeverSleep/Mistral-11B-OmniMix-bf16)

Don't mind this one at the moment, I still need to finetune it for RP; it's just a test.

## Description

This repo contains fp16 files of Mistral-11B-OmniMix.

My goal for this model was only to make it score as high as possible with merging and layer toying, proving that:

- Benchmarks are objective
- You should try a model yourself and not go blindly to the highest-rated one
- Merge/layer toying CAN be used to make better models (maybe?)
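If you just want to try it, here is a minimal loading sketch with `transformers`, not official usage code for this repo: the repo id is an assumption (swap in the actual path of this model), and `device_map="auto"` needs `accelerate` installed. It uses the `<|system|>`/`<|user|>`/`<|assistant|>` template recommended in the prompt-template section below.

```python
# Minimal usage sketch (assumed repo id; replace with this repo's real path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/Mistral-11B-OmniMix"  # assumption

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # this repo ships fp16 weights; use torch.bfloat16 for the fixed bf16 repo
    device_map="auto",          # requires accelerate
)

# Recommended prompt format (see the "Prompt template" section below).
prompt = (
    "<|system|>\nBelow is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "<|user|>\nExplain what a model merge is in one sentence.\n"
    "<|assistant|>\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```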
## Models used

- [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
- [Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
- [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
- [zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)

## Prompt template

The one below worked best after further testing:

```
<|system|>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
<|user|>
{prompt}
<|assistant|>
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/tWIx8yeoallv94zrhN6L-.png)

But these also work:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

```
USER: 
ASSISTANT: 
```

Alternatively, any prompt format from one of the four source models should work.

## The secret sauce

Mistral-11B-OpenOrcaPlatypus:

```
slices:
  - sources:
      - model: Open-Orca/Mistral-7B-OpenOrca
        layer_range: [0, 24]
  - sources:
      - model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Mistral-11B-CC-Zephyr:

```
slices:
  - sources:
      - model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
        layer_range: [0, 24]
  - sources:
      - model: "/content/drive/MyDrive/Zephyr-7B"
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Mistral-11B-OmniMix:

```
slices:
  - sources:
      - model: Mistral-11B-OpenOrcaPlatypus
        layer_range: [0, 48]
      - model: Mistral-11B-CC-Zephyr
        layer_range: [0, 48]
merge_method: slerp
base_model: Undi95/Mistral-11B-OpenOrcaPlatypus
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```

I used [mergekit](https://github.com/cg123/mergekit) for all the manipulation described here.

## Some scoring I did myself

This model was named "Mistral-11B-TestBench11"; keep that in mind while looking through this. (A sketch for reproducing this run is included after the Others section below.)

`hf-causal-experimental (pretrained=/content/drive/MyDrive/Mistral-11B-Test), limit: None, provide_description: False, num_fewshot: 0, batch_size: 4`

|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.5597|±  |0.0145|
|             |       |acc_norm|0.5819|±  |0.0144|
|arc_easy     |      0|acc     |0.8308|±  |0.0077|
|             |       |acc_norm|0.8215|±  |0.0079|
|hellaswag    |      0|acc     |0.6371|±  |0.0048|
|             |       |acc_norm|0.8213|±  |0.0038|
|piqa         |      0|acc     |0.8134|±  |0.0091|
|             |       |acc_norm|0.8275|±  |0.0088|
|truthfulqa_mc|      1|mc1     |0.3990|±  |0.0171|
|             |       |mc2     |0.5685|±  |0.0155|
|winogrande   |      0|acc     |0.7474|±  |0.0122|

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/LggyIlV-oY7NbLwi7mnix.png)

This model seems to be the best of my three latest tries:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/hnqNyljs5Y8JppuA_io8w.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/b-a-sB2qRHApPX52S2nD7.png)

You can find all the work from these attempts in this [Pastebin](https://pastebin.com/nHLCxQJv).

## Others

Special thanks to Sushi, [Henky](https://github.com/KoboldAI/KoboldAI-Client) for the machine he gave me for big tasks, and [Charles Goddard](https://github.com/cg123) for his amazing tool.
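As mentioned in the scoring section above, here is a rough sketch for reproducing that run with EleutherAI's lm-evaluation-harness. It assumes the older, pre-refactor harness API that exposes the `hf-causal-experimental` model type and `evaluator.simple_evaluate`/`evaluator.make_table`; the model path is the one from the scoring header and should be changed to wherever the weights live locally.

```python
# Reproduction sketch for the self-reported scores, assuming the pre-refactor
# EleutherAI lm-evaluation-harness (the version that registers "hf-causal-experimental").
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=/content/drive/MyDrive/Mistral-11B-Test",  # adjust to your local copy
    tasks=["arc_challenge", "arc_easy", "hellaswag", "piqa", "truthfulqa_mc", "winogrande"],
    num_fewshot=0,
    batch_size=4,
)
print(evaluator.make_table(results))
```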
If you want to support me, you can [here](https://ko-fi.com/undiai).

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench11)

| Metric                           |Value|
|----------------------------------|----:|
|Avg.                              |60.25|
|AI2 Reasoning Challenge (25-Shot) |64.42|
|HellaSwag (10-Shot)               |83.93|
|MMLU (5-Shot)                     |63.82|
|TruthfulQA (0-shot)               |56.68|
|Winogrande (5-shot)               |77.74|
|GSM8k (5-shot)                    |14.94|

Note: another leaderboard run that also scored DROP (3-shot) at 9.57 reported an average of 53.01 with that benchmark included.