electra-hongkongese-small-hkt-ws

This model is a fine-tuned version of toastynews/electra-hongkongese-small-discriminator on HKCanCor, CityU and AS for word segmentation.

Model description

Performs word segmentation on text from Hong Kong. There are two versions: hk, trained only on text from Hong Kong, and hkt, trained on text from Hong Kong and Taiwan. Each version is available in base and small model sizes.

Intended uses & limitations

Trained to handle both Hongkongese/Cantonese and Standard Chinese as written in Hong Kong. Text from other regions, and English text, does not work as well. The easiest way to use the model is with the CKIP Transformers library.

Training and evaluation data

HKCanCor, CityU and AS are converted to a BI-encoded word segmentation dataset in Hugging Face format using code from finetune-ckip-transformers.
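For illustration, BI encoding tags each character as B (begins a word) or I (continues the current word). A minimal sketch of decoding such tags back into words (the function name is mine, not from the conversion code):

```python
def decode_bi(chars, tags):
    """Group characters into words from BI tags: B starts a new word, I extends it."""
    words = []
    for ch, tag in zip(chars, tags):
        if tag == "B" or not words:
            words.append(ch)   # start a new word (treat a stray leading I as B)
        else:
            words[-1] += ch    # continue the current word
    return words

print(decode_bi(list("我哋去食飯"), ["B", "I", "B", "B", "I"]))
# → ['我哋', '去', '食飯']
```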

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0

Training results

dataset     token_f   token_p   token_r
ud yue_hk   0.9389    0.9429    0.9350
ud zh_hk    0.9314    0.9398    0.9231
hkcancor    0.9807    0.9798    0.9816
cityu       0.9712    0.9705    0.9718
as          0.9644    0.9611    0.9678

The model was trained on hkcancor, so that score is reported for reference only.

Framework versions

  • Transformers 4.27.0.dev0
  • Pytorch 1.10.0
  • Datasets 2.10.1
  • Tokenizers 0.13.2