Misinformation-Covid-LowLearningRatebert-base-chinese

This model is a fine-tuned version of bert-base-chinese on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5999
  • F1: 0.2128

Model description

More information needed

Intended uses & limitations

More information needed
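
Although the card does not document intended uses, the model loads as a standard transformers sequence classifier. Below is a minimal inference sketch; the label mapping (which index corresponds to "misinformation") is not documented here and must be verified by the caller.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Ghunghru/Misinformation-Covid-LowLearningRatebert-base-chinese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Example claim: "Drinking hot water prevents COVID-19."
text = "喝热水可以预防新冠病毒。"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze(0)

# The label semantics are not documented on this card; which index
# means "misinformation" is an assumption to check before use.
for i, p in enumerate(probs.tolist()):
    print(f"label {i}: {p:.4f}")
```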

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after the list):

  • learning_rate: 2e-07
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
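
These values map directly onto transformers TrainingArguments. A minimal sketch of a matching configuration, assuming the usual Trainer setup; output_dir and evaluation_strategy are assumptions, everything else is taken from the list above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="misinformation-covid-bert",  # hypothetical path
    learning_rate=2e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed: the results table logs one row per epoch
)
```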

Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6765        | 1.0   | 189  | 0.6464          | 0.0    |
| 0.6809        | 2.0   | 378  | 0.6449          | 0.0    |
| 0.6734        | 3.0   | 567  | 0.6651          | 0.0    |
| 0.6827        | 4.0   | 756  | 0.6684          | 0.0    |
| 0.7095        | 5.0   | 945  | 0.6532          | 0.0    |
| 0.7           | 6.0   | 1134 | 0.6646          | 0.0    |
| 0.7192        | 7.0   | 1323 | 0.6497          | 0.0    |
| 0.6877        | 8.0   | 1512 | 0.6446          | 0.0    |
| 0.6831        | 9.0   | 1701 | 0.6305          | 0.0571 |
| 0.6633        | 10.0  | 1890 | 0.6203          | 0.1622 |
| 0.6668        | 11.0  | 2079 | 0.6219          | 0.1622 |
| 0.6482        | 12.0  | 2268 | 0.6242          | 0.1111 |
| 0.6543        | 13.0  | 2457 | 0.6117          | 0.15   |
| 0.6492        | 14.0  | 2646 | 0.6236          | 0.1622 |
| 0.6624        | 15.0  | 2835 | 0.6233          | 0.1622 |
| 0.6525        | 16.0  | 3024 | 0.6134          | 0.15   |
| 0.6466        | 17.0  | 3213 | 0.6118          | 0.1905 |
| 0.6406        | 18.0  | 3402 | 0.6191          | 0.15   |
| 0.6479        | 19.0  | 3591 | 0.6216          | 0.1538 |
| 0.6488        | 20.0  | 3780 | 0.6076          | 0.2128 |
| 0.6352        | 21.0  | 3969 | 0.6062          | 0.2174 |
| 0.6213        | 22.0  | 4158 | 0.6042          | 0.2174 |
| 0.6285        | 23.0  | 4347 | 0.6100          | 0.2326 |
| 0.6298        | 24.0  | 4536 | 0.6076          | 0.2128 |
| 0.6473        | 25.0  | 4725 | 0.6058          | 0.2128 |
| 0.5972        | 26.0  | 4914 | 0.6065          | 0.2222 |
| 0.6118        | 27.0  | 5103 | 0.6001          | 0.25   |
| 0.6116        | 28.0  | 5292 | 0.6059          | 0.2128 |
| 0.6289        | 29.0  | 5481 | 0.5992          | 0.25   |
| 0.5932        | 30.0  | 5670 | 0.6006          | 0.25   |
| 0.6076        | 31.0  | 5859 | 0.6009          | 0.2128 |
| 0.6033        | 32.0  | 6048 | 0.6082          | 0.2128 |
| 0.6235        | 33.0  | 6237 | 0.6023          | 0.2128 |
| 0.6237        | 34.0  | 6426 | 0.6079          | 0.2222 |
| 0.6176        | 35.0  | 6615 | 0.6081          | 0.2222 |
| 0.646         | 36.0  | 6804 | 0.6019          | 0.2128 |
| 0.6233        | 37.0  | 6993 | 0.6020          | 0.2128 |
| 0.6004        | 38.0  | 7182 | 0.6040          | 0.2174 |
| 0.6159        | 39.0  | 7371 | 0.5963          | 0.2449 |
| 0.5747        | 40.0  | 7560 | 0.6011          | 0.2174 |
| 0.6216        | 41.0  | 7749 | 0.5954          | 0.2449 |
| 0.5893        | 42.0  | 7938 | 0.5974          | 0.2083 |
| 0.5887        | 43.0  | 8127 | 0.5993          | 0.2128 |
| 0.5756        | 44.0  | 8316 | 0.5993          | 0.2128 |
| 0.6204        | 45.0  | 8505 | 0.5982          | 0.2083 |
| 0.584         | 46.0  | 8694 | 0.5966          | 0.2449 |
| 0.5809        | 47.0  | 8883 | 0.5989          | 0.2083 |
| 0.5873        | 48.0  | 9072 | 0.6002          | 0.2128 |
| 0.5999        | 49.0  | 9261 | 0.6001          | 0.2128 |
| 0.5888        | 50.0  | 9450 | 0.5999          | 0.2128 |
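
The card does not state how the F1 column was computed. Assuming the usual Trainer pattern for a binary classifier, a compute_metrics hook along these lines would produce it; treating label 1 as the positive class is an assumption:

```python
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair supplied by transformers.Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Binary F1 on the assumed positive class (label 1).
    return {"f1": f1_score(labels, preds)}
```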

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.2
  • Datasets 2.12.0
  • Tokenizers 0.13.3
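
A pinned requirements file matching these versions would look like the following (PyPI distribution names; torch is the package name for Pytorch):

```
transformers==4.32.1
torch==2.1.2
datasets==2.12.0
tokenizers==0.13.3
```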