
THEBLOKE HAS QUANTS!
https://huggingface.co/TheBloke/Mythical-Destroyer-L2-13B-GPTQ
https://huggingface.co/TheBloke/Mythical-Destroyer-L2-13B-GGUF


A merge done for @dampf

FULL FP16 Model


Base Model: TheBloke/Llama-2-13B-fp16

MERGED WITH:
- Gryphe/MythoMax-L2-13b
- totally-not-an-llm/PuddleJumper-13b
- TheBloke/Llama-2-13B-Chat-fp16
- rombodawg/LosslessMegaCoder-llama2-13b-mini
- The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16

using ties-merge.

Dampf's Rationale:
If you think about it, the merges kinda act as experts in my destroyer:
MythoMax and Chronos-Beluga for creativity,
Llama 2 13B Chat and PuddleJumper for instruct, and LosslessMegaCoder for logic/code.
If this works well... it should be really, really good.
---
Mythical Destroyer will be used for RP and instruct as well as coding tasks alike,
and it should be good at everything.
---


Script used to merge is here.
Thank you for the easy-to-set-up script, Chargoddard!

Command:

```
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./Mythical-Destroyer-13B \
  --merge Gryphe/MythoMax-L2-13b \
  --merge totally-not-an-llm/PuddleJumper-13b \
  --merge TheBloke/Llama-2-13B-Chat-fp16 \
  --merge rombodawg/LosslessMegaCoder-llama2-13b-mini \
  --merge The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16 \
  --cuda
```
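For reference, the core idea behind ties-merge (trim each task vector, elect a majority sign per parameter, then average only the agreeing deltas) can be sketched roughly like this. This is a toy illustration on plain parameter lists, not Chargoddard's actual ties_merge.py, and the function name and `density` parameter are assumptions for the sketch:

```python
# Toy sketch of TIES-merging on flat parameter vectors (plain Python lists).
# Real implementations operate per-tensor on model state dicts.

def ties_merge(base, finetuned_models, density=0.5):
    """Merge task vectors (finetuned - base) with the TIES recipe:
    trim small deltas, elect a majority sign, average agreeing deltas."""
    n = len(base)
    # 1. Task vectors: each finetuned model's difference from the base.
    deltas = [[ft[i] - base[i] for i in range(n)] for ft in finetuned_models]
    # 2. Trim: keep only the top `density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * n))
        threshold = sorted((abs(x) for x in d), reverse=True)[k - 1]
        trimmed.append([x if abs(x) >= threshold else 0.0 for x in d])
    merged = list(base)
    for i in range(n):
        # 3. Elect sign: the sign of the summed trimmed deltas at this position.
        total = sum(d[i] for d in trimmed)
        sign = 1.0 if total >= 0 else -1.0
        # 4. Disjoint merge: average only deltas that agree with the elected sign.
        agreeing = [d[i] for d in trimmed if d[i] != 0.0 and d[i] * sign > 0]
        if agreeing:
            merged[i] += sum(agreeing) / len(agreeing)
    return merged
```

The sign election is what lets conflicting models (e.g. a chat tune and a code tune pulling a weight in opposite directions) merge without simply cancelling each other out.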

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 50.13 |
| ARC (25-shot) | 58.7 |
| HellaSwag (10-shot) | 82.0 |
| MMLU (5-shot) | 57.66 |
| TruthfulQA (0-shot) | 56.35 |
| Winogrande (5-shot) | 74.66 |
| GSM8K (5-shot) | 8.95 |
| DROP (3-shot) | 12.56 |
