---
tags:
- quantized
- chatml
datasets:
- allenai/ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
base_model: 01-ai/yi-34b-200k
model_type: mistral
pipeline_tag: text-generation
inference: false
license: apache-2.0
---

# jondurbin/bagel-34b-v0.5 Exl2

- Model creator: [jondurbin](https://huggingface.co/jondurbin)
- Original model: [bagel-34b-v0.5](https://huggingface.co/jondurbin/bagel-34b-v0.5)

![bagel](bagel.png)

## Model Summary

This is a fine-tune of the updated yi-34b-200k with better long-context support. See [bagel](https://github.com/jondurbin/bagel) for additional details on the datasets.

The DPO version is available [here](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.5).

## How to Use

Quantized using turboderp's ExLlamaV2 v0.0.14. Each branch contains a single bits-per-weight variant; the `main` branch holds only the `measurement.json` used for further conversions, so download one of the other branches for the actual model weights (see the table below).

Original model: https://huggingface.co/jondurbin/bagel-34b-v0.5

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
| [6_5](https://huggingface.co/suparious/bagel-34b-v0.5-exl2/tree/6_5) | 6.5 | 8.0 | 28.9 GB | 31.6 GB | 35.6 GB | Near-unquantized performance at a vastly reduced size, **recommended**. |
| [4_25](https://huggingface.co/suparious/bagel-34b-v0.5-exl2/tree/4_25) | 4.25 | 6.0 | 19.5 GB | 22.2 GB | 26.2 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/suparious/bagel-34b-v0.5-exl2/tree/3_5) | 3.5 | 6.0 | 16.5 GB | 19.2 GB | 23.2 GB | Lower quality, only use if you have to. |
| [3_0](https://huggingface.co/suparious/bagel-34b-v0.5-exl2/tree/3_0) | 3.0 | 6.0 | 14.3 GB | 17.0 GB | 21.0 GB | Very low quality, usable with 16 GB of VRAM. |
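To run the quantized weights from Python, ExLlamaV2 ships its own loader and generator. Below is a minimal sketch against the v0.0.14 API used for these quants; the local path matches the download instructions further down, the sampling settings are illustrative, and the ChatML-style prompt follows this card's `chatml` tag (verify the exact format against the original model card):

```python
# Minimal sketch: load an exl2 branch and generate with ExLlamaV2 v0.0.14.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "bagel-34b-v0.5-exl2-6_5"  # folder from the download step below
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # cache is allocated as layers load
model.load_autosplit(cache)               # auto-split across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()    # illustrative sampling settings
settings.temperature = 0.8
settings.top_p = 0.9

prompt = (
    "<|im_start|>user\n"
    "Write a haiku about bagels.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
# Returns the prompt plus up to 200 newly generated tokens.
print(generator.generate_simple(prompt, settings, 200))
```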
## Download instructions

With git (make sure `git-lfs` is installed, since the weights are stored with Git LFS):

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/suparious/bagel-34b-v0.5-exl2 bagel-34b-v0.5-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you only care about `measurement.json`) to a folder called `bagel-34b-v0.5-exl2`:

```shell
mkdir bagel-34b-v0.5-exl2
huggingface-cli download suparious/bagel-34b-v0.5-exl2 --local-dir bagel-34b-v0.5-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter.

Linux:

```shell
mkdir bagel-34b-v0.5-exl2-6_5
huggingface-cli download suparious/bagel-34b-v0.5-exl2 --revision 6_5 --local-dir bagel-34b-v0.5-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't always like `_` in folder names):

```shell
mkdir bagel-34b-v0.5-exl2-6.5
huggingface-cli download suparious/bagel-34b-v0.5-exl2 --revision 6_5 --local-dir bagel-34b-v0.5-exl2-6.5 --local-dir-use-symlinks False
```
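The same download can also be scripted from Python with `huggingface_hub`'s `snapshot_download` (a sketch using the same repo and branch names as the CLI examples above):

```python
# Python equivalent of the huggingface-cli commands above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="suparious/bagel-34b-v0.5-exl2",
    revision="6_5",  # pick the branch with the desired bits per weight
    local_dir="bagel-34b-v0.5-exl2-6_5",
    local_dir_use_symlinks=False,
)
```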