edukom committed on
Commit 2cda0fa · verified · 1 Parent(s): c538d3e

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -21,7 +21,7 @@ By sharing this model, we aim to foster further research and applications in Ser
 
 ### Introduction:
 
-This GPT-2 model has been tuned on an extensive Serbian corpus, boasting a richness of 500 million tokens. It is designed to generate high-quality text in Serbian, capturing the nuances and intricacies of the language.
+This GPT-2 model has been tuned on an extensive Serbian corpus, boasting a richness of 750 million tokens. It is designed to generate high-quality text in Serbian, capturing the nuances and intricacies of the language.
 
 ### Dataset Details: