# electra-hongkongese-small-hk-ws
This model is a fine-tuned version of toastynews/electra-hongkongese-small-discriminator on HKCanCor and CityU for word segmentation.
## Model description
Performs word segmentation on text from Hong Kong. There are two versions: hk, trained only on text from Hong Kong, and hkt, trained on text from both Hong Kong and Taiwan. Each version has base and small model sizes.
## Intended uses & limitations
Trained to handle both Hongkongese/Cantonese and Standard Chinese from Hong Kong. Text from other regions, and English text, does not segment as well. The easiest way to use the model is with the CKIP Transformers library.
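A minimal usage sketch with CKIP Transformers is shown below. It assumes `CkipWordSegmenter` accepts a Hugging Face model identifier via its `model_name` parameter and that the example sentence is illustrative only; adjust to the installed version of the library.

```python
# Sketch: word segmentation via CKIP Transformers (assumes the
# `model_name` parameter loads a custom model from the Hugging Face Hub).
from ckip_transformers.nlp import CkipWordSegmenter

# Load this model as the segmentation driver.
ws_driver = CkipWordSegmenter(
    model_name="toastynews/electra-hongkongese-small-hk-ws"
)

# The driver takes a list of sentences and returns a list of word lists.
sentences = ["你喺邊度返工？"]
results = ws_driver(sentences)
print(results[0])  # segmented words for the first sentence
```

Installation is `pip install ckip-transformers`; the model weights are downloaded from the Hub on first use.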
## Training and evaluation data
HKCanCor and CityU are converted to a BI-encoded word-segmentation dataset in Hugging Face format using code from finetune-ckip-transformers.
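The BI encoding above can be sketched as follows, assuming the usual convention that `B` tags the first character of a word and `I` tags every subsequent character (the helper names here are illustrative, not taken from finetune-ckip-transformers):

```python
# Sketch of BI-encoded word segmentation: each character gets a tag,
# B = begins a word, I = inside (continues) a word.

def words_to_bi(words):
    """Convert a list of segmented words into per-character BI tags."""
    chars, tags = [], []
    for word in words:
        for i, ch in enumerate(word):
            chars.append(ch)
            tags.append("B" if i == 0 else "I")
    return chars, tags

def bi_to_words(chars, tags):
    """Reconstruct words from characters and their BI tags."""
    words = []
    for ch, tag in zip(chars, tags):
        if tag == "B" or not words:
            words.append(ch)       # start a new word
        else:
            words[-1] += ch        # extend the current word
    return words

# Example: 香港 / 嘅 / 天氣 -> B I B B I
chars, tags = words_to_bi(["香港", "嘅", "天氣"])
assert tags == ["B", "I", "B", "B", "I"]
assert bi_to_words(chars, tags) == ["香港", "嘅", "天氣"]
```

The model is then fine-tuned as a token classifier that predicts one of these two tags per character.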
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| dataset   | token_f | token_p | token_r |
|-----------|---------|---------|---------|
| ud yue_hk | 0.9468  | 0.9484  | 0.9453  |
| ud zh_hk  | 0.9277  | 0.9350  | 0.9205  |
| hkcancor  | 0.9769  | 0.9742  | 0.9795  |
| cityu     | 0.9750  | 0.9741  | 0.9760  |
| as        | 0.9187  | 0.9154  | 0.9219  |
The model was trained on hkcancor, so its score on that dataset is reported for reference only.
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.10.0
- Datasets 2.10.0
- Tokenizers 0.13.2