
Zabantu - Tshivenda

This is a variant of Zabantu pre-trained on a monolingual dataset of Tshivenda (ven) sentences, using a transformer network with 120 million trainable parameters.

Usage Example(s)

from transformers import pipeline
# Initialize the pipeline for masked language model
unmasker = pipeline('fill-mask', model='dsfsi/zabantu-ven-120m')

sample_sentences = ["Rabulasi wa <mask> u khou bvelela nga u lima",
                    "Vhana vhane vha kha ḓi bva u bebwa vha kha khombo ya u <mask> nga Listeriosis"]

# Perform the fill-mask task on each sample sentence
for sentence in sample_sentences:
    results = unmasker(sentence)
    # Display the top predictions for the masked token
    for result in results:
        print(f"Predicted word: {result['token_str']} - Score: {result['score']}")
        print(f"Full sentence: {result['sequence']}\n")
        print("=" * 80)