---
license: apache-2.0
datasets:
  - duongttr/vi-dataset-for-pretrain
language:
  - vi
metrics:
  - perplexity
pipeline_tag: text-generation
widget:
  - text: Hôm nay tôi rất vui
  - text: Hoàng Sa, Trường Sa của Việt
model-index:
  - name: chronopt-research/vietnamese-gpt2-base
    results:
      - task:
          type: text-generation
        metrics:
          - type: perplexity
            value: 51.35
            verified: true
---

# Vietnamese gpt2-base

This is a pretrained gpt2-base for the Vietnamese language, trained with a causal language modeling (CLM) objective. It was introduced in this paper and first released at this page.

## Model Description

GPT-2 is (originally) a Transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on raw texts only, with no human labelling of any kind (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
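
As a concrete illustration of this objective, here is a minimal sketch of how inputs and labels are derived automatically for CLM, using the Hub id `chronopt-research/vietnamese-gpt2-base` from the model-index above and one of the widget prompts ("Today I am very happy"):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hub id taken from the model-index in the metadata above.
repo = "chronopt-research/vietnamese-gpt2-base"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

enc = tokenizer("Hôm nay tôi rất vui", return_tensors="pt")

with torch.no_grad():
    # For CLM, the labels are the input ids themselves; the model shifts
    # them internally so position t is scored on predicting token t+1.
    out = model(**enc, labels=enc["input_ids"])

print(out.loss)  # next-token cross-entropy on this sentence
```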

This is the base version of GPT-2, with 137M parameters.

You can find the other pretrained versions here: gpt2-medium, gpt2-large
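
To try the checkpoint quickly, a minimal sketch using the 🤗 Transformers text-generation pipeline (the prompt is one of the widget examples above):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="chronopt-research/vietnamese-gpt2-base",  # Hub id from the model-index
)

outputs = generator("Hôm nay tôi rất vui", max_new_tokens=40, do_sample=True)
print(outputs[0]["generated_text"])
```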

## Dataset used for pretraining

This is a combination of multiple Vietnamese datasets for pretraining CLMs such as GPT, GPT-2, etc.

You can find the combined version here: duongttr/vi-dataset-for-pretrain
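
A minimal sketch of pulling the combined corpus, assuming it loads with the standard 🤗 Datasets API (the available splits and columns are whatever the dataset repo defines):

```python
from datasets import load_dataset

# Dataset repo id taken from the metadata above.
ds = load_dataset("duongttr/vi-dataset-for-pretrain")
print(ds)
```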

## Hyperparameters & Results

We trained the model for ~100k steps with lr=1e-4, bs=2560 (single_batch_size=32 × num_cores=8 × grad_accum=10) and the AdamW optimizer, on a TPU VM v3-8 from the TRC Program. Training took around one day.
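
The actual training script isn't included here; as a rough sketch, the reported hyperparameters map onto 🤗 `TrainingArguments` as follows (everything beyond the values above, including the output path, is an assumption):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported setup, not the actual script.
# Effective batch size: 32 per device × 8 TPU cores × 10 accumulation steps = 2560.
args = TrainingArguments(
    output_dir="vietnamese-gpt2-base",  # hypothetical path
    max_steps=100_000,
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=10,
    optim="adamw_torch",  # AdamW, as reported
)
```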

| Model | Eval Loss | Eval Perplexity |
|:------|----------:|----------------:|
| gpt2-base | 3.939 | 51.35 |
| gpt2-medium | 2.8676 | 17.5948 |
| gpt2-large | - | - |
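
As a sanity check, the eval perplexity here is just the exponential of the eval cross-entropy loss:

```python
import math

# perplexity = exp(eval loss); small gaps come from rounding the loss.
print(math.exp(3.939))   # ≈ 51.37 -> reported 51.35  (gpt2-base)
print(math.exp(2.8676))  # ≈ 17.59 -> reported 17.5948 (gpt2-medium)
```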

## Contacts

Feel free to contact us via email.