Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

TinyAlpaca-1.1B - GGUF
- Model creator: https://huggingface.co/luckychao/
- Original model: https://huggingface.co/luckychao/TinyAlpaca-1.1B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyAlpaca-1.1B.Q2_K.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyAlpaca-1.1B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyAlpaca-1.1B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyAlpaca-1.1B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyAlpaca-1.1B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyAlpaca-1.1B.Q3_K.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyAlpaca-1.1B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyAlpaca-1.1B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyAlpaca-1.1B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyAlpaca-1.1B.Q4_0.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyAlpaca-1.1B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyAlpaca-1.1B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyAlpaca-1.1B.Q4_K.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyAlpaca-1.1B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyAlpaca-1.1B.Q4_1.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyAlpaca-1.1B.Q5_0.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyAlpaca-1.1B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyAlpaca-1.1B.Q5_K.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyAlpaca-1.1B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyAlpaca-1.1B.Q5_1.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyAlpaca-1.1B.Q6_K.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyAlpaca-1.1B.Q8_0.gguf](https://huggingface.co/RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf/blob/main/TinyAlpaca-1.1B.Q8_0.gguf) | Q8_0 | 1.09GB |

Original model description:
---
language:
- en
datasets:
- tatsu-lab/alpaca
---

# Model Card for Model ID

This model checkpoint is TinyLlama-1.1B fine-tuned on the [alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca).

## Model Details

### Model Sources

- **Repository:** https://github.com/jzhang38/TinyLlama
- **Paper:** https://arxiv.org/abs/2404.02406

## Uses

Use of this model should comply with the restrictions of [TinyLlama-1.1B](https://github.com/jzhang38/TinyLlama) and [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).

## How to Get Started with the Model

Use the code below to get started with the model.

```python
# Load the tokenizer and model directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("luckychao/TinyAlpaca-1.1B")
model = AutoModelForCausalLM.from_pretrained("luckychao/TinyAlpaca-1.1B")
```

## Training Details

### Training Data

We use the [alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca), which was created by researchers at Stanford University.

### Training Procedure

We follow the same training procedure, with mostly the same hyperparameters, that was used to fine-tune the original Alpaca model on LLaMA. The procedure is described in the [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca).

#### Training Hyperparameters

```
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 4 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 1000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--bf16 True \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--model_max_length 2048
```

Note that with a per-device batch size of 2 and 4 gradient-accumulation steps, the effective batch size is 2 × 4 = 8 examples per device.

## Citation

This model was developed primarily for the paper below. Please cite it if you find the repository helpful.

**BibTeX:**

```
@article{hao2024exploring,
  title={Exploring Backdoor Vulnerabilities of Chat Models},
  author={Hao, Yunzhuo and Yang, Wenkai and Lin, Yankai},
  journal={arXiv preprint arXiv:2404.02406},
  year={2024}
}
```
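The original card loads the model but does not show generation or state a prompt format. Since the model follows the stanford_alpaca fine-tuning recipe, the standard Alpaca instruction template is a reasonable assumption, though not confirmed by the upstream card. The sketch below extends the loading snippet with greedy generation; the instruction text is only an illustration.

```python
# Minimal generation sketch. Assumption: the model expects the standard
# Alpaca instruction template from the stanford_alpaca project.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("luckychao/TinyAlpaca-1.1B")
model = AutoModelForCausalLM.from_pretrained("luckychao/TinyAlpaca-1.1B")

# Assumed Alpaca prompt template (no-input variant).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the echoed prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```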
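Running the quantized files from the table at the top requires a GGUF-capable runtime such as llama.cpp; this repository does not prescribe one. Below is a minimal sketch using llama-cpp-python, assuming `pip install huggingface_hub llama-cpp-python`. The choice of Q4_K_M here is just a common size/quality middle ground, not a recommendation from the author.

```python
# Sketch: download one quant from this repo and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M (0.62GB) from the table above; any other listed file works the same way.
model_path = hf_hub_download(
    repo_id="RichardErkhov/luckychao_-_TinyAlpaca-1.1B-gguf",
    filename="TinyAlpaca-1.1B.Q4_K_M.gguf",
)

# model_max_length was 2048 during fine-tuning, so a 2048-token context fits.
llm = Llama(model_path=model_path, n_ctx=2048)

# Same assumed Alpaca template as in the transformers sketch above.
out = llm(
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n"
    "### Response:\n",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```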