nicholasKluge committed
Commit c7678b0
1 parent: 409479a

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -63,7 +63,7 @@ TeenyTinyLlama is a compact language model based on the Llama 2 architecture ([T
  - `version`: "gemm"
  - `zero_point`: True
 
- This repository has the [source code](https://github.com/Nkluge-correa/Aira) used to train this model. The main libraries used are:
+ This repository has the [source code](https://github.com/Nkluge-correa/TeenyTinyLlama) used to train this model. The main libraries used are:
 
  - [Transformers](https://github.com/huggingface/transformers)
  - [PyTorch](https://github.com/pytorch/pytorch)
@@ -112,9 +112,9 @@ These are the main arguments used in the training of this model:
 
  The primary intended use of TeenyTinyLlama is to research the behavior, functionality, and limitations of large language models. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama-460m for deployment, as long as your use is in accordance with the Apache 2.0 license. If you decide to use pre-trained TeenyTinyLlama-460m as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
 
- ## Basic usage
+ ## Basic Usage
 
- **Note: The use of quantized models required the installation of `autoawq==0.1.7`. A GPU is required to run the AWQ-quantized models.**
+ **Note: Using quantized models requires the installation of `autoawq==0.1.7`. A GPU is required to run the AWQ-quantized models.**
 
  Using the `pipeline`:
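
The hunk cuts off just before the usage snippet itself (the render ends at line 120). For orientation, here is a minimal sketch of what `pipeline`-based generation for this model typically looks like; the model ID is assumed from the committer's namespace, and the prompt and generation arguments are illustrative, not the README's exact snippet:

```python
# Minimal sketch of text generation with the Hugging Face pipeline.
# Assumptions: the model ID is inferred from the committer's namespace;
# the prompt and generation arguments are illustrative only.
# Per the note in the diff, the AWQ-quantized variant additionally
# needs `pip install autoawq==0.1.7` and a GPU.
from transformers import pipeline

generator = pipeline("text-generation", model="nicholasKluge/TeenyTinyLlama-460m")

completions = generator(
    "Astronomy is the science of",
    max_new_tokens=50,
    num_return_sequences=1,
)
print(completions[0]["generated_text"])
```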