w11wo committed
Commit 629d5a8
1 Parent(s): ef37245

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -14,7 +14,7 @@ Javanese RoBERTa Small is a masked language model based on the [RoBERTa model](h
 
 The model was originally HuggingFace's pretrained [English RoBERTa model](https://huggingface.co/roberta-base) and is later fine-tuned on the Javanese dataset. It achieved a perplexity of 33.30 on the validation dataset (20% of the articles). Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), and [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou).
 
-Hugging Face's [Transformers]((https://huggingface.co/transformers)) library was used to train the model -- utilizing the base RoBERTa model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
+Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base RoBERTa model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
 
 ## Model
 | Model | #params | Arch. | Training/Validation data (text) |
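The paragraph changed by this diff describes the training setup: the pretrained RoBERTa base checkpoint fine-tuned on Javanese text for masked language modeling with the `Trainer` class, using PyTorch as the backend. A minimal sketch of that kind of run is below; the data file, column name, split, and hyperparameters are assumptions for illustration, not the author's actual training configuration.

```python
# Minimal masked-LM fine-tuning sketch with Hugging Face Transformers.
# Illustrative only: the data file, 80/20 split, and hyperparameters are
# assumptions, not the actual configuration used for this model.
from datasets import load_dataset
from transformers import (
    RobertaTokenizerFast,
    RobertaForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

# Hypothetical Javanese corpus, one article per line in a "text" column.
dataset = load_dataset("text", data_files={"train": "javanese_articles.txt"})["train"]
dataset = dataset.train_test_split(test_size=0.2)  # hold out 20% for validation

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Randomly masks 15% of tokens, the standard RoBERTa MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./javanese-roberta-small", num_train_epochs=3),
    data_collator=collator,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()
```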
 
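The README text also reports a validation perplexity of 33.30 on the 20% held-out articles. For masked language models this figure is conventionally the exponential of the mean cross-entropy loss; continuing the sketch above (and assuming its `trainer` and `tokenized` objects), it could be computed as:

```python
import math

# Perplexity = exp(mean cross-entropy loss) on the validation split.
# Assumes `trainer` and `tokenized` from the fine-tuning sketch above.
eval_metrics = trainer.evaluate(eval_dataset=tokenized["test"])
print(f"Validation perplexity: {math.exp(eval_metrics['eval_loss']):.2f}")
```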