---
license: cc-by-nc-4.0
datasets:
- chrisociepa/wikipedia-pl-20230401
language:
- pl
library_name: transformers
tags:
- llama
- ALLaMo
inference: false
---

# APT3-275M-Base

## Introduction

At [Azurro](https://azurro.pl), we consistently attach importance to Open Source technologies, both in our projects and in our everyday work. We have decided to share a base language model that we trained ourselves. We are confident that smaller language models have great potential, and that giving everyone interested direct access to them further democratizes this significant and rapidly changing field.

## Statements

Training large language models requires a lot of computing power, and it is usually reserved for the major players on the market. However, does this mean that individuals or small companies cannot train language models capable of performing specific tasks? We decided to answer this question and train our own language model from scratch.
We have made the following statements:

* we use a single consumer graphics card
* we train the model only on a Polish corpus
* we use manually selected, high-quality texts for training the model.

Why have we made such statements?
It is worth noting that training a model requires several times more resources than using it; as a rule of thumb, about 3-4 times more. Therefore, if a model can be run on a graphics card with 6 GB of VRAM, then training that same model requires about 24 GB of VRAM (as a minimum).
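
A quick back-of-the-envelope check of this rule of thumb (the 4x multiplier is simply the upper end of the 3-4x range stated above, not a measured value):

```python
# Rough illustration of the 3-4x rule of thumb: training VRAM vs. inference VRAM.
inference_vram_gb = 6        # example: a model that can be run on a 6 GB card
training_multiplier = 4      # upper end of the 3-4x rule of thumb
print(f"Estimated training VRAM: ~{inference_vram_gb * training_multiplier} GB")  # ~24 GB
```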

Many consumer computers are equipped with good-quality graphics cards that can be used to train a model at home. This is why we have decided to use a top consumer graphics card: Nvidia's RTX 4090 with 24 GB of VRAM.

Most currently available language models have been trained mainly on English corpora, with only a small share of other languages, including Polish. As a result, these models are not particularly good at handling Polish text; even the popular GPT-3.5 model from OpenAI often has issues with correct Polish word forms. Therefore, we have decided to prepare a model based only on a Polish corpus. An additional advantage of using only the Polish corpus is model size: in the case of smaller models, it is better to focus on a single language.

It is important to remember that a model is only as good as the data it is trained on. Given the small size of the model, we trained it on carefully selected texts. This is why we have not used corpora such as Common Crawl, which contain a lot of poor-quality data. Our team prepared a set of sources that were then processed and used to train the model.

## Model

APT3-275M-Base has been trained with the use of an original open-source framework called [ALLaMo](https://github.com/chrisociepa/allamo). This framework allows the user to quickly and efficiently train language models similar to Meta AI's LLaMA models.

APT3-275M-Base is an autoregressive language model based on the transformer architecture. It has been trained on data collected before November 2023.

The model was trained on 21 billion tokens of the Polish corpus for one epoch.

A specialized tokenizer has been prepared and trained specifically for the models in the APT3 series.

### Model description:

* **Developed by:** [Azurro](https://azurro.pl)
* **Language:** Polish
* **Model type:** causal decoder-only
* **License:** CC BY NC 4.0 (non-commercial use)


### Model details:

| **Hyperparameter** | **Value**   |
|--------------------|-------------|
| Model Parameters   | 275M        |
| Sequence Length    | 1024        |
| Vocabulary Size    | 31980       |
| Layers             | 32          |
| Heads              | 16          |
| d_head             | 64          |
| d_model            | 768         |
| Dropout            | 0.0         |
| Bias               | No          |
| Positional Encoding | RoPE       |
| Activation Function | SwiGLU     |
| Normalizing Function | RMSNorm   |
| Intermediate Size  | 2048        |
| Norm Epsilon       | 1e-06       |
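
If the repository ships a standard LLaMA-style `config.json` (the model is tagged `llama`), these values can be cross-checked programmatically. The attribute names below follow `LlamaConfig` and are an assumption, not something stated in this card:

```python
from transformers import AutoConfig

# Load the published configuration and print the main hyperparameters.
config = AutoConfig.from_pretrained("Azurro/APT3-275M-Base")

print(config.num_hidden_layers)        # layers
print(config.num_attention_heads)      # heads
print(config.hidden_size)              # d_model
print(config.intermediate_size)        # intermediate size
print(config.vocab_size)               # vocabulary size
print(config.max_position_embeddings)  # sequence length
print(config.rms_norm_eps)             # norm epsilon
```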

### Tokenizer details:

* type: BPE
* special tokens: 8 (`<unk>`, `<s>`, `</s>`, `<pad>`, `[INST]`, `[/INST]`, `<<SYS>>`, `<</SYS>>`)
* alphabet size: 113 
* vocabulary size: 31980
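
A minimal sketch for verifying these figures with `transformers` (the exact special-token list reported depends on the tokenizer files shipped in the repository):

```python
from transformers import AutoTokenizer

# Load the APT3 tokenizer and inspect its vocabulary and special tokens.
tokenizer = AutoTokenizer.from_pretrained("Azurro/APT3-275M-Base")

print(len(tokenizer))                      # vocabulary size, expected 31980
print(tokenizer.all_special_tokens)        # declared special tokens
print(tokenizer.tokenize("Dzień dobry!"))  # example tokenization of Polish text
```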

## Training

* Framework: [ALLaMo](https://github.com/chrisociepa/allamo)
* Visualizations: [W&B](https://wandb.ai)

<p align="center">
  <img src="https://huggingface.co/Azurro/APT3-275M-Base/raw/main/apt3-275m-base.png">
</p>

### Training hyperparameters:

| **Hyperparameter**          | **Value**        |
|-----------------------------|------------------|
| Micro Batch Size            | 13               |
| Gradient Accumulation Steps | 40               |
| Batch Size                  | 532480           |
| Learning Rate (cosine)      | 4e-04 -> 2e-05   |
| Warmup Iterations           | 1000             |
| All Iterations              | 40000            |
| Optimizer                   | AdamW            |
| β1, β2                      | 0.9, 0.95        |
| Adam_eps                    | 1e-8             |
| Weight Decay                | 0.1              |
| Grad Clip                   | 1.0              |
| Precision                   | bfloat16         |
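
The batch size in the table is expressed in tokens. Assuming it is the product of the micro batch size, the gradient accumulation steps and the sequence length (an interpretation, not something stated explicitly above), the numbers are self-consistent and also match the roughly 21 billion training tokens mentioned earlier:

```python
# Effective batch size in tokens and total training tokens (illustrative arithmetic).
micro_batch_size = 13
grad_accum_steps = 40
sequence_length = 1024
iterations = 40_000

tokens_per_iteration = micro_batch_size * grad_accum_steps * sequence_length
print(tokens_per_iteration)               # 532480, matches "Batch Size" above
print(tokens_per_iteration * iterations)  # 21299200000, roughly 21 billion tokens
```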

### Dataset

Collecting a large amount of high-quality training data is a great challenge. Over the past years at Azurro, we have carried out many projects involving Big Data processing. With this extensive experience, we were able to prepare a carefully selected training dataset quickly and efficiently.

Our training dataset contains:

* ebooks 8%
* Polish Wikipedia 4%
* web crawl data 88%

### Quickstart

This model can be easily loaded using the `AutoModelForCausalLM` class.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Azurro/APT3-275M-Base"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```

To reduce memory usage, you can load the model in lower precision (`bfloat16`).

```python
import torch

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
```

You can then use Hugging Face Pipelines to generate text.

```python
import transformers

text = "Najważniejszym celem człowieka na ziemi jest"

pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
sequences = pipeline(text_inputs=text, max_new_tokens=100, do_sample=True, top_k=200, eos_token_id=tokenizer.eos_token_id)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
Generated output:
> Najważniejszym celem człowieka na ziemi jest osiągnięcie pokoju. W przeszłości była to również kwestia honoru. Jednak o ile po dziś dzień nie budzi to żadnych wątpliwości, to życie ludzkie w XXI wieku jest częścią większej całości, która składa się z wielu elementów. Nie ma tu żadnej jedności, więc tak naprawdę każdy człowiek ma pewne zasady, które decydują o jego życiu i to właśnie one determinują jego przyszłość. Myślę, że nie sposób pominąć tutaj również religii. Religia w starożytności traktowana była jako jedna z naczelnych zasad chrześcijaństwa.

## Limitations and Biases

APT3-275M-Base is not intended for deployment without fine-tuning. It should not be used for human-facing interactions without further guardrails and user consent.

APT3-275M-Base can produce factually incorrect output, and should not be relied on to produce factually accurate information. APT3-275M-Base was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## License

Because of an unclear legal situation, we have decided to publish the model under the CC BY NC 4.0 license, which allows only for non-commercial use. The model can be used for scientific purposes and private use, as long as the license conditions are met.

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.

## Citation
Please cite this model using the following format:

```
@online{AzurroAPT3Base275M,
    author    = {Ociepa, Krzysztof and {Azurro Team}},
    title     = {Introducing APT3-275M-Base: Polish Language Model},
    year      = {2023},
    url       = {https://azurro.pl/apt3-275m-base-en},
    note      = {Accessed: 2023-11-20}, % change this date
    urldate   = {2023-11-20} % change this date
}
```