nicholasKluge committed on
Commit
409479a
1 Parent(s): f643bcf

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
```diff
@@ -37,7 +37,7 @@ co2_eq_emissions:
 
 ## Model Summary
 
-**Note: This model is a quantized version of [TeenyTinyLlama-460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m). Quantization was performed using [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), allowing this version to be 80% lighter, with almost no performance loss.**
+**Note: This model is a quantized version of [TeenyTinyLlama-460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m). Quantization was performed using [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), allowing this version to be 80% lighter with almost no performance loss. A GPU is required to run the AWQ-quantized models.**
 
 Given the lack of available monolingual foundational models in non-English languages and the fact that some of the most used and downloaded models by the community are those small enough to allow individual researchers and hobbyists to use them in low-resource environments, we developed the TeenyTinyLlama: _a pair of small foundational models trained in Brazilian Portuguese._
 
@@ -114,7 +114,7 @@ The primary intended use of TeenyTinyLlama is to research the behavior, function
 
 ## Basic usage
 
-**Note: The use of quantized models required the installation of `autoawq==0.1.7`.**
+**Note: The use of quantized models required the installation of `autoawq==0.1.7`. A GPU is required to run the AWQ-quantized models.**
 
 Using the `pipeline`:
```
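The "Basic usage" note touched by this commit can be exercised roughly as follows. This is a minimal sketch, assuming `autoawq==0.1.7` and `transformers` are installed, a CUDA GPU is available (as the new note requires), and that the quantized checkpoint is published under a Hugging Face id like `nicholasKluge/TeenyTinyLlama-460m-awq` — the exact repository id is an assumption, not stated in this diff.

```python
def build_generator(model_id: str = "nicholasKluge/TeenyTinyLlama-460m-awq"):
    """Sketch: load an AWQ-quantized TeenyTinyLlama as a text-generation pipeline.

    Requires a CUDA GPU and autoawq==0.1.7; the default model_id above is an
    assumption for illustration, not taken from the diff itself.
    """
    # Imported lazily so merely defining this helper does not require
    # transformers to be installed.
    from transformers import pipeline

    # "text-generation" is the standard pipeline task for causal LMs;
    # device=0 places the model on the first CUDA GPU, which AWQ needs.
    return pipeline("text-generation", model=model_id, device=0)
```

Calling `build_generator()` downloads the checkpoint and returns a callable; for example, `build_generator()("Astronomia é a ciência", max_new_tokens=30)` returns a list of dicts whose `generated_text` field holds the completion.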