---
license: apache-2.0
datasets:
- iamtarun/python_code_instructions_18k_alpaca
language:
- en
pipeline_tag: text-generation
tags:
- code
---

# rahuldshetty/tinyllama-python-gguf

Quantized GGUF model files for [tinyllama-python](https://huggingface.co/rahuldshetty/tinyllama-python).

- Base model: [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit)
- Dataset: [iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca)
- Training Script: [unslothai: Alpaca + TinyLlama + RoPE Scaling full example.ipynb](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-python-unsloth.Q2_K.gguf](https://huggingface.co/rahuldshetty/tinyllama-python-gguf/resolve/main/tinyllama-python-unsloth.Q2_K.gguf) | Q2_K | 432 MB |

## Prompt Format

```
### Instruction:
{instruction}

### Response:
```

## Example

```
### Instruction:
Write a function to find cube of a number.

### Response:
```
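
## Usage Sketch

A minimal inference sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is shown below. The runtime choice, the local file path, and the sampling parameters (`max_tokens`, `temperature`, `stop`) are assumptions for illustration, not values published with this model; only the prompt format comes from the card above.

```python
# Minimal sketch, assuming llama-cpp-python is installed and the Q2_K GGUF
# file from this repository has been downloaded locally.
from llama_cpp import Llama

# Path to the downloaded GGUF file (adjust to your local location).
llm = Llama(model_path="tinyllama-python-unsloth.Q2_K.gguf")

# Build the prompt in the format shown in "Prompt Format" above.
prompt = (
    "### Instruction:\n"
    "Write a function to find cube of a number.\n\n"
    "### Response:\n"
)

# Sampling parameters here are illustrative defaults, not tuned values.
output = llm(prompt, max_tokens=256, temperature=0.2, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```

Stopping on `### Instruction:` keeps the model from generating a follow-up instruction block after its answer; adjust or remove the stop sequence as needed for your setup.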