---
datasets:
  - DEplain/DEplain-APA-sent
language:
  - de
metrics:
  - sari
  - bleu
  - bertscore
library_name: transformers
base_model: google/mt5-small
pipeline_tag: text2text-generation
---

# Model Card for mT5-small-VT-span-mlm_deplain-apa

Fine-tuned mT5 model for German sentence-level text simplification.

## Model Details

### Model Description

- **Model type:** Encoder-decoder transformer
- **Language(s) (NLP):** German
- **Finetuned from model:** google/mt5-small
- **Task:** Text simplification
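The model can be used through the standard `text2text-generation` pipeline. A minimal sketch, assuming the model is published on the Hub under the repository id matching this card's name (adjust `MODEL_ID` if the actual path differs):

```python
from transformers import pipeline

# Assumed Hub repository id (hypothetical; check the actual model page).
MODEL_ID = "vera-8/mT5-small-VT-span-mlm_deplain-apa"


def simplify(sentence: str) -> str:
    """Simplify a German sentence with the fine-tuned mT5 model."""
    simplifier = pipeline("text2text-generation", model=MODEL_ID)
    result = simplifier(sentence, max_length=128)
    return result[0]["generated_text"]
```

For repeated calls, build the pipeline once outside the function instead of per call.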

## Training Details

### Training Data

DEplain/DEplain-APA-sent (Stodden et al., 2023, arXiv:2305.18939)

### Training Procedure

Parameter-efficient fine-tuning with LoRA.

Vocabulary adjusted through Vocabulary Transfer (Mosin et al.).
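The core LoRA idea can be illustrated numerically: instead of updating a full weight matrix `W`, two low-rank factors `B` and `A` are trained and their scaled product is added to the frozen weight. A minimal sketch with assumed toy dimensions (the rank and alpha match the hyperparameters below; the layer size is illustrative):

```python
import numpy as np

d_in, d_out = 512, 512   # illustrative layer size, not from the card
r, alpha = 32, 64        # rank and alpha as listed in this card

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable low-rank factor
B = np.zeros((d_out, r))                # trainable, zero-initialized

# Effective weight used at inference: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * (B @ A)

# With B initialized to zero, the adapted model starts out identical
# to the pretrained one.
assert np.allclose(W_eff, W)

# Trainable parameters per layer drop from d_out*d_in to r*(d_in+d_out).
print(d_out * d_in, "->", r * (d_in + d_out))
```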

### Training Hyperparameters

- **Batch size:** 16
- **Epochs:** 1
- **Learning rate:** 0.001
- **Optimizer:** Adafactor
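A sketch of the optimizer setup implied by these hyperparameters, using the `Adafactor` implementation from `transformers` (a toy linear layer stands in for the fine-tuned model; this is an assumption about the setup, not the card's actual training script):

```python
import torch
from transformers.optimization import Adafactor

model = torch.nn.Linear(8, 8)  # stand-in for the LoRA-adapted mT5 model

optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,                 # learning rate from the card
    scale_parameter=False,   # required when a fixed lr is supplied
    relative_step=False,     # disable Adafactor's internal lr schedule
)
```

With a fixed learning rate, `relative_step=False` is required, otherwise Adafactor refuses to combine a manual `lr` with its relative-step schedule.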

### LoRA Hyperparameters

- **r:** 32
- **Alpha:** 64
- **Dropout:** 0.1
- **Target modules:** all linear layers
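These hyperparameters map directly onto a `LoraConfig` from the `peft` library (the argument values match this card; the use of `peft` itself is an assumption about the training setup):

```python
from peft import LoraConfig, TaskType

# Config fragment mirroring the LoRA hyperparameters listed above.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=32,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules="all-linear",  # apply LoRA to all linear layers
)
```

This config would be applied to the base model with `peft.get_peft_model(model, lora_config)`.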