---
language:
  - ne
thumbnail: null
tags:
  - roberta
  - nepali-language-model
license: mit
datasets:
  - cc100
widget:
  - text: तिमीलाई कस्तो <mask>?
---

# nepbert

## Model description

A RoBERTa model trained from scratch on the Nepali subset of the CC-100 dataset, which contains about 12 million sentences.

## Intended uses & limitations

### How to use

```python
from transformers import pipeline

pipe = pipeline(
    "fill-mask",
    model="amitness/nepbert",
    tokenizer="amitness/nepbert",
)
print(pipe("तिमीलाई कस्तो <mask>?"))
```

## Training data

The data was taken from the Nepali-language subset of the CC-100 dataset.
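As a rough illustration of how such a corpus might be inspected before training, the sketch below counts sentences and whitespace-separated tokens in a local plain-text dump (one sentence per line). The file name `cc100-ne.txt` is an assumption for the example; the card does not describe the exact preprocessing.

```python
# Sketch: basic corpus statistics for a local CC-100 Nepali dump.
# Assumes a plain-text file with one sentence per line; the path
# is illustrative, not part of the original card.

def corpus_stats(path):
    """Return (sentence_count, whitespace_token_count) for a text file."""
    sentences = 0
    tokens = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # skip blank lines
                continue
            sentences += 1
            tokens += len(line.split())
    return sentences, tokens

# Example: corpus_stats("cc100-ne.txt")
```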

## Training procedure

The model was trained on Google Colab using a single Tesla V100 GPU.
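Training from scratch starts from a randomly initialized model rather than pretrained weights. A minimal sketch of that initialization is shown below; the hyperparameters mirror the standard `roberta-base` configuration and the vocabulary size is an assumption, since the card does not list the exact settings used.

```python
# Sketch: initializing a RoBERTa masked-LM model from scratch.
# The config values below are assumptions mirroring roberta-base;
# the original card does not specify them.
from transformers import RobertaConfig, RobertaForMaskedLM

config = RobertaConfig(
    vocab_size=52_000,           # assumed tokenizer vocabulary size
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    max_position_embeddings=514,
)
model = RobertaForMaskedLM(config)  # randomly initialized weights
print(f"{model.num_parameters():,} parameters")
```

A model built this way would then be pretrained on the CC-100 Nepali corpus with the fill-mask objective.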