---
base_model: mlabonne/Beagle14-7B
inference: false
language:
- en
license: apache-2.0
model_creator: mlabonne
model_name: Beagle14-7B
model_type: mistral
pipeline_tag: text-generation
prompt_template: "<|system|>
<|user|>
{prompt}
<|assistant|>
"
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
quantized_by: brittlewis12
---

# Beagle14-7B GGUF

Original model: [Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B)

Model creator: [Maxime Labonne](https://huggingface.co/mlabonne)

This repo contains GGUF format model files for Maxime Labonne’s Beagle14-7B.

Beagle14-7B is a merge of the following models using LazyMergekit:
- [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
- [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)

### What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Converted using llama.cpp build 1879 (revision [3e5ca79](https://github.com/ggerganov/llama.cpp/commit/3e5ca7931c68152e4ec18d126e9c832dd84914c8)).

### Prompt template: Zephyr

The Zephyr-style template appears to work well! (A minimal usage sketch appears at the end of this card.)

```
<|system|>
{{system_message}}
<|user|>
{{prompt}}
<|assistant|>
```

---

## Download & run with [cnvrs](https://testflight.apple.com/join/sFWReS7K) on iPhone, iPad, and Mac!

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [TestFlight](https://testflight.apple.com/join/sFWReS7K)!

---

## Original Model Evaluations:

The evaluation was performed by the model’s creator using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on the Nous suite, as reported on mlabonne’s alternative leaderboard, YALL: [Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[**Beagle14-7B**](https://huggingface.co/mlabonne/Beagle14-7B)| 44.38| **76.53**| **69.44**| 47.25| **59.4**|
|[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)| 42.75| 72.99| 52.99| 40.94| 52.42|
|[NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)| 43.67| 73.24| 55.37| 41.76| 53.51|
|[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)| **47.79**| 74.69| 55.92| 44.84| 55.81|
|[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)| 44.66| 76.24| 64.15| 45.64| 57.67|
|[CatMarcoro14-7B-slerp](https://huggingface.co/occultml/CatMarcoro14-7B-slerp)| 45.21| 75.91| 63.81| **47.31**| 58.06|
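
For reference, here is a minimal sketch of running one of these GGUF files locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), using the Zephyr-style template above. The quantization filename, system prompt, and sampling settings below are assumptions for illustration; substitute whichever file and settings you actually use.

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and a GGUF file from this
# repo downloaded locally. The filename below is an assumption — adjust to the
# quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="beagle14-7b.Q4_K_M.gguf",  # assumed filename
    n_ctx=2048,                            # context window size
)

# Build a Zephyr-style prompt matching the template shown above.
prompt = (
    "<|system|>\n"
    "You are a helpful assistant.\n"
    "<|user|>\n"
    "Explain what a model merge is in one paragraph.\n"
    "<|assistant|>\n"
)

# Generate a completion, stopping if the model starts a new user turn.
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["<|user|>"])
print(output["choices"][0]["text"])
```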