---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of TinyLlama-1.1B-intermediate-step-955k-token-2T

Using turboderp's ExLlamaV2 v0.0.8 for quantization.

Each branch contains an individual bits per weight, with the `main` branch containing only the measurement.json for further conversions.

Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.

Original model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T

- 4.0 bits per weight
- 6.0 bits per weight
- 8.0 bits per weight

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/TinyLlama-1.1B-intermediate-step-955k-token-2T-exl2
```

With huggingface hub (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you only care about measurement.json) to a folder called `TinyLlama-1.1B-intermediate-step-955k-token-2T-exl2`:

```shell
mkdir TinyLlama-1.1B-intermediate-step-955k-token-2T-exl2
huggingface-cli download bartowski/TinyLlama-1.1B-intermediate-step-955k-token-2T-exl2 --local-dir TinyLlama-1.1B-intermediate-step-955k-token-2T-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir TinyLlama-1.1B-intermediate-step-955k-token-2T-exl2
huggingface-cli download bartowski/TinyLlama-1.1B-intermediate-step-955k-token-2T-exl2 --revision 4_0 --local-dir TinyLlama-1.1B-intermediate-step-955k-token-2T-exl2 --local-dir-use-symlinks False
```
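
For reference, quantizations like the ones described above are produced with exllamav2's `convert.py` script. The exact invocation used for this repo isn't recorded, so treat the following as a sketch: the paths and output directory are placeholders, and the flag values mirror the description above (calibration parquet, target bits per weight, and lm_head raised to 8 bits for the above-6.0 bpw branches).

```shell
# Sketch of an exllamav2 quantization run (assumed invocation, not the exact
# command used for this repo). convert.py lives in the exllamav2 repo:
#   -i   input model directory (original fp16 weights)
#   -o   working/output directory (placeholder name here)
#   -c   calibration dataset in parquet format
#   -b   target bits per weight
#   -hb  lm_head bits per weight (raised to 8 for the >6.0 bpw branches)
python convert.py \
  -i ./TinyLlama-1.1B-intermediate-step-955k-token-2T \
  -o ./working_dir \
  -c ./wikitext-103-raw-v1-test.parquet \
  -b 8.0 \
  -hb 8
```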
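
Once a branch is downloaded, a quick way to sanity-check the quant is exllamav2's bundled `test_inference.py`. This assumes the exllamav2 repo is cloned and its requirements installed; the prompt is just an example.

```shell
# Run from the exllamav2 repo checkout:
#   -m  path to the downloaded quantized model directory
#   -p  prompt to generate from
python test_inference.py \
  -m ./TinyLlama-1.1B-intermediate-step-955k-token-2T-exl2 \
  -p "Once upon a time,"
```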