
Model Card for leks-forever/nllb-200-distilled-600M

This version of the No Language Left Behind (NLLB) model has been fine-tuned on a bilingual dataset of Russian and Lezgian sentences to improve translation quality in both directions (from Russian to Lezgian and from Lezgian to Russian). The model is designed to provide accurate and high-quality translations between these two languages.

  • Architecture: Sequence-to-Sequence Transformer.
  • Languages Supported: Russian and Lezgian. The fine-tuning focuses on improving translation accuracy in both directions.
  • Use Cases: The model is suitable for machine translation tasks between Russian and Lezgian, as well as for applications requiring automated translations in these language pairs, such as support systems, chatbots, or content localization.

How to Get Started with the Model

from transformers import AutoModelForSeq2SeqLM, NllbTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("leks-forever/nllb-200-distilled-600M")
tokenizer = NllbTokenizer.from_pretrained("leks-forever/nllb-200-distilled-600M")

def predict(
    text, 
    src_lang='lez_Cyrl', 
    tgt_lang='rus_Cyrl', 
    a=32, b=3, 
    max_input_length=1024, 
    num_beams=1, 
    **kwargs
):
    tokenizer.src_lang = src_lang
    tokenizer.tgt_lang = tgt_lang
    inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=max_input_length)
    result = model.generate(
        **inputs.to(model.device),
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
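        # cap generated length at a + b * (number of input tokens)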
        max_new_tokens=int(a + b * inputs.input_ids.shape[1]),
        num_beams=num_beams,
        **kwargs
    )
    return tokenizer.batch_decode(result, skip_special_tokens=True)

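# Russian source, roughly: "I love walking in the park early in the morning, when the air is fresh and it is quiet all around."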
sentence: str = "Я люблю гулять по парку ранним утром, когда воздух свежий и тишина вокруг."

translation = predict(sentence, src_lang='rus_Cyrl', tgt_lang='lez_Cyrl')

print(translation)

# ['Заз пакамахъ, хъсан гар алаз, сагъ-саламатдиз къекъвез кӀанзава.']
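
The same predict helper works in the reverse direction. As a quick, purely illustrative round-trip check (not part of the original workflow), the Lezgian output above can be fed back in:

back_translation = predict(translation, src_lang='lez_Cyrl', tgt_lang='rus_Cyrl')
print(back_translation)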

Training Details

Training Data

The model was fine-tuned on the bible-lezghian-russian dataset, which contains 13,800 parallel sentences in Russian and Lezgian. The dataset was split into three parts: 90% for training, 5% for validation, and 5% for testing.
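
A minimal sketch of how a 90/5/5 split like this could be reproduced with the datasets library; the dataset repository id and the random seed are assumptions, not details taken from this card:

from datasets import load_dataset

ds = load_dataset("leks-forever/bible-lezghian-russian", split="train")    # assumed repo id
split = ds.train_test_split(test_size=0.1, seed=42)                        # 90% train / 10% held out
heldout = split["test"].train_test_split(test_size=0.5, seed=42)           # 5% validation / 5% test
train_ds, val_ds, test_ds = split["train"], heldout["train"], heldout["test"]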

Preprocessing

Preprocessing consisted of tokenization with a custom SentencePiece tokenizer (based on the NLLB tokenizer) trained on the Russian-Lezgian corpus.
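
As an illustration, a parallel pair (here, the example sentence and translation shown earlier, with the tokenizer loaded in the getting-started section) can be tokenized for seq2seq training as follows; the text/text_target pattern is an assumption about the training setup, not a detail from the card:

tokenizer.src_lang = "rus_Cyrl"
tokenizer.tgt_lang = "lez_Cyrl"
batch = tokenizer(
    "Я люблю гулять по парку ранним утром, когда воздух свежий и тишина вокруг.",
    text_target="Заз пакамахъ, хъсан гар алаз, сагъ-саламатдиз къекъвез кӀанзава.",
    padding=True, truncation=True, max_length=1024, return_tensors="pt",
)
# batch now holds input_ids, attention_mask, and labels for the decoder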

Training Hyperparameters

  • Training regime: fp32
  • Batch size: 16
  • Training steps: the model converged after roughly 14,000 of the 110,000 planned steps
  • Optimizer: Adafactor with the following settings (see the configuration sketch after this list):
    • lr: 1e-4
    • scale_parameter: False
    • relative_step: False
    • clip_threshold: 1.0
    • weight_decay: 1e-3
  • Scheduler: Cosine scheduler with a warmup of 1,000 steps
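
A sketch of how these settings map onto the Adafactor optimizer and cosine schedule shipped with transformers, using the model loaded in the getting-started section; the actual training loop is not described in this card, so treat this as an approximation:

from transformers import Adafactor, get_cosine_schedule_with_warmup

optimizer = Adafactor(
    model.parameters(),
    lr=1e-4,
    scale_parameter=False,
    relative_step=False,
    clip_threshold=1.0,
    weight_decay=1e-3,
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=1_000,
    num_training_steps=110_000,  # planned steps; training was stopped around step 14,000
)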

Speeds, Sizes, Times

  • Training time: 2 hours on a single NVIDIA RTX 5000 (24 GB).

Evaluation

The evaluation was conducted on the validation set of the bible-lezghian-russian dataset, comprising 5% of its 13,800 parallel sentences.

Factors

The evaluation considered translations in both directions:

  • Lezgian to Russian
  • Russian to Lezgian

Metrics

The following metrics were used to evaluate the model’s performance:

  • BLEU (n-grams = 4): This metric measures the accuracy of the machine translation output by comparing it to human translations. A higher score indicates better performance.
  • chrF: A character-level metric that evaluates translation quality by comparing the overlap of character n-grams between the hypothesis and the reference. It is particularly effective for morphologically rich languages (see the sketch after this list).
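
For reference, both metrics can be computed with sacrebleu; the card does not state which implementation was used, so this is only a sketch, shown here on a single toy pair via the predict helper from the getting-started section (the reported scores use the full validation split):

import sacrebleu

hypotheses = predict(["Заз пакамахъ, хъсан гар алаз, сагъ-саламатдиз къекъвез кӀанзава."],
                     src_lang='lez_Cyrl', tgt_lang='rus_Cyrl')
references = [["Я люблю гулять по парку ранним утром, когда воздух свежий и тишина вокруг."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)   # BLEU with up to 4-gram precision
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(round(bleu.score, 1), round(chrf.score, 1))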

Results

  • Lezgian to Russian: BLEU = 27, chrF = 70
  • Russian to Lezgian: BLEU = 27, chrF = 67

Summary

These results indicate that the model produces reasonably accurate translations in both directions. Further improvements are planned: aligning the parallel corpora to refine sentence-pair matching, and collecting additional training data to improve performance on more diverse and complex linguistic structures.
