Commit 1dc2e33
Parent(s): cc44bcc

Update README.md
README.md CHANGED
@@ -74,8 +74,6 @@ This repository has the [source code](https://github.com/Nkluge-correa/TeenyTiny
 - [Codecarbon](https://github.com/mlco2/codecarbon)
 - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
 
-Check out the training logs in [Weights and Biases](https://api.wandb.ai/links/nkluge-correa/vws4g032).
-
 ## Intended Uses
 
 The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use is following the Apache 2.0 license. If you decide to use pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment.