Model Overview

Quantized with exllamav2: https://github.com/turboderp/exllamav2/

Original model: https://huggingface.co/Nondzu/zephyr-7b-beta-pl

Available quantizations (each stored in its own branch):

  • 2.5 bits per weight
  • 3.0 bits per weight
  • 3.5 bits per weight
  • 4.0 bits per weight
  • 5.0 bits per weight
  • 6.0 bits per weight
  • 7.0 bits per weight
  • 8.0 bits per weight

Download instructions

With git:

git clone --single-branch --branch 4.0 https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2

With huggingface hub (credit to TheBloke for instructions):

pip3 install huggingface-hub

To download the main branch (only useful if you need measurement.json) to a folder called zephyr-7b-beta-pl-exl2:

mkdir zephyr-7b-beta-pl-exl2
huggingface-cli download Nondzu/zephyr-7b-beta-pl-exl2 --local-dir zephyr-7b-beta-pl-exl2 --local-dir-use-symlinks False

To download from a different branch, add the --revision parameter:

mkdir zephyr-7b-beta-pl-exl2
huggingface-cli download Nondzu/zephyr-7b-beta-pl-exl2 --revision 8.0 --local-dir zephyr-7b-beta-pl-exl2 --local-dir-use-symlinks False
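The same download can also be scripted from Python with huggingface_hub's snapshot_download (a sketch; the allow_patterns filter is only there to keep this example download small — drop it to fetch the full branch):

```python
from huggingface_hub import snapshot_download

# Download a chosen branch of the repo.
# revision selects the quantization branch, e.g. "4.0" or "8.0";
# "main" holds measurement.json.
local_dir = snapshot_download(
    repo_id="Nondzu/zephyr-7b-beta-pl-exl2",
    revision="main",
    local_dir="zephyr-7b-beta-pl-exl2",
    allow_patterns=["*.json"],  # small files only, for this example
)
print(local_dir)
```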

Current Status: Alpha

  • Stage: Alpha-Alpaca

Training Details

I trained the model on 3x RTX 3090 GPUs for 163 hours, using Axolotl.
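For orientation, a QLoRA fine-tune of this kind can be described to Axolotl with a config along these lines. This is an illustrative sketch, not the actual training configuration; the dataset path and all hyperparameter values are placeholders:

```yaml
base_model: HuggingFaceH4/zephyr-7b-beta

load_in_4bit: true          # QLoRA: 4-bit quantized base weights
adapter: qlora
lora_r: 32                  # placeholder hyperparameters
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj

datasets:
  - path: ./data/polish_instructions.jsonl   # placeholder dataset path
    type: alpaca                             # matches the Alpaca prompt template below

sequence_len: 4096
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
```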

Quantised Model Links:

  1. https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2
  2. https://huggingface.co/TheBloke/zephyr-7B-beta-pl-GGUF
  3. https://huggingface.co/TheBloke/zephyr-7B-beta-pl-AWQ
  4. https://huggingface.co/TheBloke/zephyr-7B-beta-pl-GPTQ

Model Specifics

  • Base Model: HuggingFaceH4/zephyr-7b-beta
  • Fine-Tuning Method: QLoRA
  • Primary Focus: Polish language datasets

Datasets:

Usage Warning

As this is an experimental model, users should be aware of the following:

  • Reliability: The model has not been fully tested and may exhibit unexpected behaviors or performance issues.
  • Updates: The model is subject to change based on ongoing testing and feedback.
  • Data Sensitivity: Users should exercise caution when using sensitive or private data, as the model's output and behavior are not fully predictable at this stage.

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
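In code, the template above can be filled in with a small helper (a sketch; the function name is mine):

```python
# Alpaca prompt template used by this model.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt template."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Translate to Polish: Good morning."))
```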

Example

(example screenshot)

Feedback and Contribution

User feedback is crucial during this testing phase. We encourage users to provide feedback on model performance, issues encountered, and any suggestions for improvements. Contributions in terms of shared test results, datasets, or code improvements are also welcome.


Disclaimer: This experimental model is provided 'as is', without warranty of any kind. Users should use the model at their own risk. The creators or maintainers of the model are not responsible for any consequences arising from its use.

