edukom committed
Commit 4193aa5 · 1 Parent(s): 122ffcf

Upload README.md

Files changed (1)
  1. README.md +2 -4
README.md CHANGED
@@ -20,11 +20,9 @@ By sharing this model, we aim to foster further research and applications in Ser
 
 This GPT-2 model has been fine-tuned on an extensive Serbian corpus, boasting a richness of 43 million tokens. It is designed to generate high-quality text in Serbian, capturing the nuances and intricacies of the language.
 
-### Dataset Details:
+### Dataset Details:
 
-Size: 43 million tokens.
-
-Nature: The dataset encompasses a diverse range of topics, representing various aspects of the Serbian language and culture.
+The dataset encompasses a diverse range of topics, representing various aspects of the Serbian language and culture. Size: 43 million tokens.
 
 ### Model Usage: