
Encoder-Decoder model with DeBERTa encoder

Pre-trained models

  • deliciouscat/deberta-v3-base-encoder-decoder-v0.2

-> 297,511,524 (~298M) params
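The count can be reproduced from the base checkpoint itself:

from transformers import EncoderDecoderModel

# load the base checkpoint listed above and sum its parameter tensors
model = EncoderDecoderModel.from_pretrained("deliciouscat/deberta-v3-base-encoder-decoder-v0.2")
print(sum(p.numel() for p in model.parameters()))  # 297511524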

Data used

  • HuggingFaceFW/fineweb

  • AI Hub Korean-English translation corpus (English side only)

  • Some papers from my personal collection
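For reference, the fineweb corpus can be streamed as below; which subset or sample was actually used for training is not stated on this card:

from datasets import load_dataset

# stream fineweb instead of downloading the full corpus
fineweb = load_dataset("HuggingFaceFW/fineweb", streaming=True, split="train")
for example in fineweb.take(3):
    print(example["text"][:100])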

Training hparams

  • optimizer: AdamW, lr=3e-5, betas=(0.875, 0.997)

  • batch size: 12

-> trained with a BART-style denoising objective for 29,523 steps (see the sketch below)
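The card does not include the training loop itself; this is a minimal sketch of one step, assuming PyTorch AdamW with the hyperparameters above and a simplified token-masking corruption (the real BART recipe also uses span infilling and deletion). The add_noise helper is hypothetical:

import random

import torch
from transformers import AutoTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_pretrained("deliciouscat/deberta-v3-base-encoder-decoder-v0.2")
tokenizer = AutoTokenizer.from_pretrained("deliciouscat/deberta-v3-base-encoder-decoder-v0.2")

def add_noise(ids, mask_token_id, mask_prob=0.15):
    # simplified corruption: randomly replace tokens with [MASK]
    return [mask_token_id if random.random() < mask_prob else t for t in ids]

ids = tokenizer("An example sentence to reconstruct.")["input_ids"]
input_ids = torch.tensor([add_noise(ids, tokenizer.mask_token_id)])
labels = torch.tensor([ids])  # reconstruction target = the clean sequence

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5, betas=(0.875, 0.997))

# labels are shifted into decoder inputs internally; this assumes
# decoder_start_token_id and pad_token_id are set in the model config
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()
optimizer.step()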

How to use

from transformers import AutoTokenizer, EncoderDecoderModel

# load the released checkpoint and its tokenizer
model = EncoderDecoderModel.from_pretrained("deliciouscat/deberta-v3-base-encoder-decoder-v0.3")
tokenizer = AutoTokenizer.from_pretrained("deliciouscat/deberta-v3-base-encoder-decoder-v0.3")
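Since the model was trained on denoising, a plausible smoke test is to let it reconstruct a corrupted sentence. The example text and generation settings below are assumptions, not values from the card:

# hypothetical usage: reconstruct a corrupted input with the denoising model
text = "The DeBERTa encoder produces [MASK] representations of the input."
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    decoder_start_token_id=tokenizer.cls_token_id,  # assumption; may already be set in the config
    max_new_tokens=48,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))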

Future work!

  • train on more scientific data

  • fine-tune on a keyword extraction task (a sketch follows below)
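One way the keyword extraction fine-tune could be framed is document in, comma-separated keywords out. The document, target keywords, and training step below are all hypothetical:

# hypothetical framing: document -> comma-separated keywords
document = "DeBERTa improves BERT with disentangled attention ..."  # made-up example
keywords = "DeBERTa, disentangled attention"                        # made-up target

batch = tokenizer(document, return_tensors="pt", truncation=True)
labels = tokenizer(text_target=keywords, return_tensors="pt").input_ids

# assumes decoder_start_token_id and pad_token_id are set in the model config
loss = model(input_ids=batch.input_ids, attention_mask=batch.attention_mask, labels=labels).loss
loss.backward()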
