---
language:
- en
license: other
license_name: microsoft-research-license
license_link: LICENSE
model-index:
- name: PsyOrca2-13b-DARE
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 60.58
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=royallab/PsyOrca2-13b-DARE
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.83
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=royallab/PsyOrca2-13b-DARE
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 55.69
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=royallab/PsyOrca2-13b-DARE
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 53.27
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=royallab/PsyOrca2-13b-DARE
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 74.9
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=royallab/PsyOrca2-13b-DARE
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 2.12
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=royallab/PsyOrca2-13b-DARE
      name: Open LLM Leaderboard
---

# PsyOrca2-13b-DARE

This is a [Llama 2](https://huggingface.co/meta-llama/Llama-2-13b-hf)-based model consisting of a merge between:

- [KoboldAI/Psyfighter-2-13B](https://huggingface.co/KoboldAI/Psyfighter-2-13B) (the FP16 weights are not yet public, but the merge config is)
- [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) (with its vocab size fixed by merging on llama-2-13b)

The goal of this merge is to test out the DARE merge algorithm and see how it works with these two models.

Mergekit config (inspired by Charles Goddard):

```yml
models:
  - model: KoboldAI/Psyfighter-2-13B
    parameters:
      weight: 1
      density: 1
  - model: microsoft/Orca-2-13b
    parameters:
      weight: 0.05
      density: 0.30
merge_method: dare_ties
base_model: meta-llama/Llama-2-13b-hf
parameters:
  int8_mask: true
dtype: bfloat16
```
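For intuition, DARE keeps each fine-tuned delta (the difference from the base model) with probability `density`, zeroes the rest, and rescales the survivors by `1/density` so the expected update is unchanged; the rescaled deltas are then combined TIES-style with the listed weights. Below is a toy sketch of the drop-and-rescale step on a single tensor. It is my own illustration, not mergekit internals:

```python
# Toy illustration of DARE's drop-and-rescale step on one weight tensor.
# Simplified sketch for intuition, not mergekit's actual implementation.
import numpy as np

def dare_delta(finetuned: np.ndarray, base: np.ndarray,
               density: float, rng: np.random.Generator) -> np.ndarray:
    """Randomly keep `density` of the delta parameters, rescaled by 1/density."""
    delta = finetuned - base                  # task vector vs. the base model
    mask = rng.random(delta.shape) < density  # keep each entry with prob `density`
    return np.where(mask, delta / density, 0.0)

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
orca = base + rng.normal(scale=0.01, size=(4, 4))  # stand-in fine-tune

# density=0.30, weight=0.05 as in the config above: drop 70% of the deltas,
# rescale the rest by 1/0.30, then add a small weighted contribution.
merged = base + 0.05 * dare_delta(orca, base, density=0.30, rng=rng)
```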
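To reproduce the merge, the config above (saved to disk) can be fed to [mergekit](https://github.com/arcee-ai/mergekit). Here is a sketch using its Python API; the file name is illustrative, and the exact API may differ between mergekit versions:

```python
# Sketch of reproducing the merge with mergekit's Python API.
# CLI equivalent: mergekit-yaml psyorca2-dare.yml ./PsyOrca2-13b-DARE --cuda
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# The YAML config shown above, saved as psyorca2-dare.yml (illustrative name)
with open("psyorca2-dare.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./PsyOrca2-13b-DARE",  # output directory for the merged weights
    options=MergeOptions(
        cuda=True,            # use a GPU for the merge if available
        copy_tokenizer=True,  # copy the base tokenizer into the output
        lazy_unpickle=True,   # reduce peak RAM while reading checkpoints
    ),
)
```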
## Usage

This model will most likely follow the Alpaca instruct format. It can also follow ChatML, since Orca 2 is merged in. A minimal loading and prompting sketch appears at the end of this card.

Alpaca:

```
### Instruction:

### Response:
```

## Bias, Risks, and Limitations

In addition to the biases of the base model, this model will show biases similar to those observed in niche roleplaying forums on the Internet. It is not intended to supply factual information or advice in any form.

## Training Details

This model is a merge. Please refer to the linked repositories of the merged models for details.

## Donate?

All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/kingbri

You should not feel obligated to donate, but if you do, I'd appreciate it.

---

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_royallab__PsyOrca2-13b-DARE).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 55.07 |
| AI2 Reasoning Challenge (25-Shot) | 60.58 |
| HellaSwag (10-Shot)               | 83.83 |
| MMLU (5-Shot)                     | 55.69 |
| TruthfulQA (0-shot)               | 53.27 |
| Winogrande (5-shot)               | 74.90 |
| GSM8k (5-shot)                    |  2.12 |
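As a quick start, here is a minimal sketch of loading the model with Hugging Face `transformers` and prompting it in the Alpaca format described above. The prompt text and generation settings are illustrative, not tuned recommendations:

```python
# Minimal sketch: load the merge and prompt it in Alpaca format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "royallab/PsyOrca2-13b-DARE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = (
    "### Instruction:\n"
    "Write a short scene in which two rivals are forced to cooperate.\n\n"
    "### Response:\n"
)
# ChatML also works, since Orca 2 is merged in, e.g.:
# prompt = "<|im_start|>user\n...<|im_end|>\n<|im_start|>assistant\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)

# Print only the newly generated tokens, not the echoed prompt
new_tokens = output[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```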