### Introduction:

This GPT-2 model has been fine-tuned on an extensive Serbian corpus of 750 million tokens. It is designed to generate high-quality text in Serbian, capturing the nuances and intricacies of the language.

### Dataset Details: