Eksperimentalni modeli (Experimental Models)
This model was developed in support of the University of Belgrade doctoral dissertation "Composite pseudogrammars based on parallel language models of Serbian" by Mihailo Škorić.
This small GPT-2 model was trained on several corpora of Serbian, including the Corpus of Contemporary Serbian, SrpELTeC, and WikiKorpus by JeRTeh (Society for Language Resources and Technologies).
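As a minimal sketch, the model can be loaded like any GPT-2 checkpoint from the Hugging Face Hub via the `transformers` library. The repo id, prompt, and generation parameters below are placeholders, not values taken from this card:

```python
# Minimal sketch: loading a GPT-2 checkpoint and sampling Serbian text.
# The repo id is a hypothetical placeholder -- substitute the actual
# Hub id of this experimental model.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "jerteh/experimental-gpt2-sr"  # placeholder, not a real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a short Serbian prompt and sample a continuation.
inputs = tokenizer("Beograd je", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```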
This model is purely experimental! For production-ready models for Serbian, see GPT2-ORAO and GPT2-VRABAC.
If you use this model in your research, please cite: https://doi.org/10.3390/math11224660