---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
- TensorBlock
- GGUF
datasets:
- nicholasKluge/Pt-Corpus-Instruct
metrics:
- perplexity
pipeline_tag: text-generation
widget:
- text: 'A PUCRS é uma universidade '
  example_title: Exemplo
- text: A muitos anos atrás, em uma galáxia muito distante, vivia uma raça de
  example_title: Exemplo
- text: Em meio a um escândalo, a frente parlamentar pediu ao Senador Silva para
  example_title: Exemplo
inference:
  parameters:
    repetition_penalty: 1.2
    temperature: 0.2
    top_k: 20
    top_p: 0.2
    max_new_tokens: 150
co2_eq_emissions:
  emissions: 5600
  source: CodeCarbon
  training_type: pre-training
  geographical_location: Germany
  hardware_used: NVIDIA A100-SXM4-40GB
base_model: nicholasKluge/TeenyTinyLlama-160m
model-index:
- name: TeenyTinyLlama-160m
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 19.24
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 23.09
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 22.37
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 53.97
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 0.24
      name: pearson
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 43.97
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 36.92
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 42.63
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia-temp/tweetsentbr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 11.39
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
      name: Open Portuguese LLM Leaderboard
---

Feedback and support: TensorBlock's Twitter/X, Telegram Group and Discord server

## nicholasKluge/TeenyTinyLlama-160m - GGUF

This repo contains GGUF format model files for [nicholasKluge/TeenyTinyLlama-160m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-160m). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
Run them on the TensorBlock client using your local machine ↗
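Once one of the GGUF files listed below has been downloaded (see the downloading instructions), it can also be run locally with a llama.cpp build at or after commit b4242. The snippet below is a minimal sketch rather than an official TensorBlock recipe: the prompt is taken from the widget examples in the model card metadata, and the sampling flags mirror the suggested inference parameters (temperature 0.2, top_k 20, top_p 0.2, repetition penalty 1.2, 150 new tokens).

```shell
# Minimal sketch: run a quantized file locally with llama.cpp's llama-cli.
# Assumes llama.cpp (commit b4242 or newer) is already built and the Q4_K_M
# file has been downloaded into the current directory.
./llama-cli \
  -m TeenyTinyLlama-160m-Q4_K_M.gguf \
  -p "A PUCRS é uma universidade " \
  -n 150 \
  --temp 0.2 --top-k 20 --top-p 0.2 --repeat-penalty 1.2
```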
## Prompt template

This base model does not define a chat/prompt template; prompts are passed as plain text.

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [TeenyTinyLlama-160m-Q2_K.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q2_K.gguf) | Q2_K | 0.071 GB | smallest, significant quality loss - not recommended for most purposes |
| [TeenyTinyLlama-160m-Q3_K_S.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q3_K_S.gguf) | Q3_K_S | 0.080 GB | very small, high quality loss |
| [TeenyTinyLlama-160m-Q3_K_M.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q3_K_M.gguf) | Q3_K_M | 0.086 GB | very small, high quality loss |
| [TeenyTinyLlama-160m-Q3_K_L.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q3_K_L.gguf) | Q3_K_L | 0.091 GB | small, substantial quality loss |
| [TeenyTinyLlama-160m-Q4_0.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q4_0.gguf) | Q4_0 | 0.099 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [TeenyTinyLlama-160m-Q4_K_S.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q4_K_S.gguf) | Q4_K_S | 0.099 GB | small, greater quality loss |
| [TeenyTinyLlama-160m-Q4_K_M.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q4_K_M.gguf) | Q4_K_M | 0.103 GB | medium, balanced quality - recommended |
| [TeenyTinyLlama-160m-Q5_0.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q5_0.gguf) | Q5_0 | 0.116 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [TeenyTinyLlama-160m-Q5_K_S.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q5_K_S.gguf) | Q5_K_S | 0.116 GB | large, low quality loss - recommended |
| [TeenyTinyLlama-160m-Q5_K_M.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q5_K_M.gguf) | Q5_K_M | 0.118 GB | large, very low quality loss - recommended |
| [TeenyTinyLlama-160m-Q6_K.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q6_K.gguf) | Q6_K | 0.134 GB | very large, extremely low quality loss |
| [TeenyTinyLlama-160m-Q8_0.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q8_0.gguf) | Q8_0 | 0.173 GB | very large, extremely low quality loss - not recommended |

## Downloading instruction

### Command line

First, install the Hugging Face Hub CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/TeenyTinyLlama-160m-GGUF --include "TeenyTinyLlama-160m-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files that match a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/TeenyTinyLlama-160m-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
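After downloading, a quick way to check a file is to serve it with llama.cpp's `llama-server` and query its native completion endpoint. This is a rough sketch under the same assumption of a local llama.cpp build; the default port (8080) and the `/completion` endpoint are llama.cpp defaults, not something this repo configures.

```shell
# Sketch: serve the downloaded file with llama.cpp's built-in HTTP server
# (assumes a local llama.cpp build; 8080 is the server's default port).
./llama-server -m MY_LOCAL_DIR/TeenyTinyLlama-160m-Q4_K_M.gguf --port 8080

# In another shell, query the server's native completion endpoint.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A PUCRS é uma universidade ", "n_predict": 150, "temperature": 0.2}'
```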