
canine-mouse-enhancers

This model is a fine-tuned version of google/canine-c on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9641
  • Accuracy: 0.7727
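
The card does not document usage, so the following is a minimal, hypothetical sketch of loading this checkpoint as a sequence classifier with the Transformers Auto classes. The hub path, label interpretation, and example input are placeholders, not details taken from the card.

```python
# Hypothetical usage sketch (not from the original card): load the fine-tuned
# checkpoint as a sequence classifier. The repository id is a placeholder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "canine-mouse-enhancers"  # assumption: replace with the actual hub path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# CANINE tokenizes at the character level, so a raw sequence string can be
# passed directly without k-mer or word-level preprocessing.
sequence = "ACGTACGTACGT"  # placeholder input
inputs = tokenizer(sequence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```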

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows this list):

  • learning_rate: 2e-06
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
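
As a rough illustration only, these settings map onto TrainingArguments as sketched below; the output directory is a placeholder, model and dataset setup are omitted, and the Adam betas/epsilon listed above are the library defaults.

```python
# Sketch of the listed hyperparameters expressed as TrainingArguments
# (Transformers 4.26.x API). Model and dataset setup are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="canine-mouse-enhancers",  # placeholder output directory
    learning_rate=2e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch results below
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults
)
```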

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log        | 1.0   | 242   | 0.6476          | 0.6281   |
| No log        | 2.0   | 484   | 0.6080          | 0.6860   |
| 0.6372        | 3.0   | 726   | 0.5989          | 0.7231   |
| 0.6372        | 4.0   | 968   | 0.6285          | 0.6694   |
| 0.5955        | 5.0   | 1210  | 0.5904          | 0.6860   |
| 0.5955        | 6.0   | 1452  | 0.5782          | 0.7107   |
| 0.5812        | 7.0   | 1694  | 0.5845          | 0.6983   |
| 0.5812        | 8.0   | 1936  | 0.6186          | 0.6983   |
| 0.5901        | 9.0   | 2178  | 0.5814          | 0.7231   |
| 0.5901        | 10.0  | 2420  | 0.6152          | 0.7355   |
| 0.5535        | 11.0  | 2662  | 0.5556          | 0.7438   |
| 0.5535        | 12.0  | 2904  | 0.5476          | 0.7479   |
| 0.5566        | 13.0  | 3146  | 0.6583          | 0.7107   |
| 0.5566        | 14.0  | 3388  | 0.5571          | 0.7521   |
| 0.5419        | 15.0  | 3630  | 0.6231          | 0.7231   |
| 0.5419        | 16.0  | 3872  | 0.6068          | 0.7603   |
| 0.546         | 17.0  | 4114  | 0.6581          | 0.7273   |
| 0.546         | 18.0  | 4356  | 0.6350          | 0.7438   |
| 0.5359        | 19.0  | 4598  | 0.7081          | 0.7438   |
| 0.5359        | 20.0  | 4840  | 0.6711          | 0.7521   |
| 0.5262        | 21.0  | 5082  | 0.8095          | 0.7190   |
| 0.5262        | 22.0  | 5324  | 0.7282          | 0.7521   |
| 0.5666        | 23.0  | 5566  | 0.7604          | 0.7479   |
| 0.5666        | 24.0  | 5808  | 0.8097          | 0.7521   |
| 0.5456        | 25.0  | 6050  | 0.8513          | 0.7521   |
| 0.5456        | 26.0  | 6292  | 0.7954          | 0.7603   |
| 0.5612        | 27.0  | 6534  | 0.8435          | 0.7521   |
| 0.5612        | 28.0  | 6776  | 0.9000          | 0.7355   |
| 0.5358        | 29.0  | 7018  | 0.9241          | 0.7603   |
| 0.5358        | 30.0  | 7260  | 0.9005          | 0.7479   |
| 0.5434        | 31.0  | 7502  | 0.8875          | 0.7645   |
| 0.5434        | 32.0  | 7744  | 0.8878          | 0.7686   |
| 0.5434        | 33.0  | 7986  | 0.9162          | 0.7645   |
| 0.5066        | 34.0  | 8228  | 0.8665          | 0.7686   |
| 0.5066        | 35.0  | 8470  | 0.8756          | 0.7686   |
| 0.5276        | 36.0  | 8712  | 0.9723          | 0.7603   |
| 0.5276        | 37.0  | 8954  | 1.0044          | 0.7521   |
| 0.4916        | 38.0  | 9196  | 0.9647          | 0.7521   |
| 0.4916        | 39.0  | 9438  | 0.9819          | 0.7603   |
| 0.4865        | 40.0  | 9680  | 0.9644          | 0.7686   |
| 0.4865        | 41.0  | 9922  | 0.9084          | 0.7851   |
| 0.4505        | 42.0  | 10164 | 1.0152          | 0.7521   |
| 0.4505        | 43.0  | 10406 | 0.9332          | 0.7769   |
| 0.4798        | 44.0  | 10648 | 0.9803          | 0.7603   |
| 0.4798        | 45.0  | 10890 | 1.0211          | 0.7521   |
| 0.4234        | 46.0  | 11132 | 0.9143          | 0.7810   |
| 0.4234        | 47.0  | 11374 | 0.9969          | 0.7645   |
| 0.4269        | 48.0  | 11616 | 0.9515          | 0.7851   |
| 0.4269        | 49.0  | 11858 | 0.9998          | 0.7686   |
| 0.4135        | 50.0  | 12100 | 0.9641          | 0.7727   |
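
The card does not state how the Accuracy column above was produced. A plausible, minimal compute_metrics callback is sketched below; it assumes the evaluate library, which is not listed among the framework versions.

```python
# Assumed metric callback (not documented in the card): per-epoch accuracy
# computed from argmax predictions over the evaluation set.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```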

Framework versions

  • Transformers 4.26.1
  • PyTorch 2.0.0+cu117
  • Datasets 2.19.0
  • Tokenizers 0.13.3