nicholasKluge committed
Commit
3c14459
1 Parent(s): 4c1bc0e

Update README.md

Files changed (1): README.md (+1 −3)
README.md CHANGED
@@ -41,15 +41,13 @@ Teeny-tiny-llama-162m is a compact language model based on the Llama 2 architect
 
 Teeny-tiny-llama has been trained by leveraging scaling laws to determine the optimal number of tokens per parameter while incorporating preference pre-training.
 
-## Features
-
 - **Compact Design:** Teeny-tiny-llama is a downsized version of the Llama 2 architecture, making it suitable for applications with limited computational resources.
 
 - **Optimized Scaling:** The model has been pre-trained using scaling laws to identify the ideal token-to-parameter ratio.
 
 - **Custom Portuguese Dataset:** Teeny-tiny-llama has been trained on a custom Portuguese dataset. This dataset includes diverse linguistic contexts and preference pre-training, allowing the model to better cater to Portuguese language nuances and be better suited for fine-tuning tasks like instruction-tuning.
 
-- ## Details
+## Details
 
 - **Size:** 162 million parameters
 - **Dataset:** [Portuguese-Corpus-v3](https://huggingface.co/datasets/nicholasKluge/portuguese-corpus-v3)
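The "optimal number of tokens per parameter" point in the README can be illustrated with a quick back-of-the-envelope calculation. Note the ~20 tokens-per-parameter ratio used below is the commonly cited Chinchilla scaling-law heuristic, assumed here for illustration; it is not a figure stated in the model card itself.

```python
# Sketch: compute-optimal training-token budget for a 162M-parameter model,
# assuming the Chinchilla heuristic of ~20 training tokens per parameter
# (an illustrative assumption, not a number from the model card).

def optimal_token_budget(n_params: int, tokens_per_param: float = 20.0) -> int:
    """Return an approximate compute-optimal number of training tokens."""
    return int(n_params * tokens_per_param)

params = 162_000_000  # Teeny-tiny-llama's 162 million parameters
print(f"~{optimal_token_budget(params) / 1e9:.2f}B tokens")  # ~3.24B tokens
```

Under this assumption, a 162M-parameter model would target a training corpus on the order of a few billion tokens, which is the kind of budget the scaling-law analysis mentioned in the README would produce.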