---
license: apache-2.0
tags:
  - protein language model
  - generated_from_trainer
datasets:
  - train
metrics:
  - spearmanr
model-index:
  - name: tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: cradle-bio/tape-fluorescence
          type: train
        metrics:
          - name: Spearmanr
            type: spearmanr
            value: 0.6081143924159805
---

# tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert

This model is a fine-tuned version of [thundaa/tape-fluorescence-evotuning-DistilProtBert](https://huggingface.co/thundaa/tape-fluorescence-evotuning-DistilProtBert) on the cradle-bio/tape-fluorescence dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 0.2209
- Spearmanr: 0.6081
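
The card does not include usage instructions, so here is a minimal inference sketch. It assumes the checkpoint loads as a single-label regression head via `AutoModelForSequenceClassification` and that, like other ProtBert-family models, the tokenizer expects space-separated amino acids; verify both against the model config before relying on it.

```python
# Minimal inference sketch (assumptions: regression head with num_labels=1,
# ProtBert-style tokenizer expecting space-separated amino acids).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "thundaa/tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# The TAPE fluorescence task scores GFP-like sequences for log-fluorescence.
sequence = "M S K G E E L F T G V V P I L V E L D G D V N G H K F S V"  # truncated example
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    prediction = model(**inputs).logits.squeeze(-1)
print(f"Predicted fluorescence: {prediction.item():.4f}")
```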

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
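
The data is not documented here, but the metadata names the cradle-bio/tape-fluorescence dataset on the Hub; a minimal loading sketch (split and column names are not documented and should be inspected) would be:

```python
# Sketch: load the dataset named in the card metadata.
# Split and column names are assumptions; print the dataset to confirm them.
from datasets import load_dataset

dataset = load_dataset("cradle-bio/tape-fluorescence")
print(dataset)  # shows available splits and features
```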

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 11
- gradient_accumulation_steps: 64
- total_train_batch_size: 2560
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
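
For reproducibility, the hyperparameters above map onto 🤗 `TrainingArguments` roughly as follows. This is a sketch, not the author's training script; data preprocessing and model setup are omitted, and the output path is hypothetical.

```python
# Sketch of TrainingArguments matching the listed hyperparameters
# (not the author's original script; preprocessing and model setup omitted).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tape-fluorescence-prediction",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=40,
    per_device_eval_batch_size=40,
    seed=11,
    gradient_accumulation_steps=64,  # 40 * 64 = 2560 effective batch size
    num_train_epochs=30,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
)
```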

### Training results

| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 6.3796        | 0.93  | 7    | 2.2462          | 0.2021    |
| 1.2421        | 1.93  | 14   | 0.7066          | 0.1024    |
| 0.7978        | 2.93  | 21   | 0.6895          | 0.1444    |
| 0.7613        | 3.93  | 28   | 0.6758          | 0.2527    |
| 0.7498        | 4.93  | 35   | 0.6772          | 0.2620    |
| 0.7486        | 5.93  | 42   | 0.6703          | 0.3991    |
| 0.7394        | 6.93  | 49   | 0.6506          | 0.4038    |
| 0.8251        | 7.93  | 56   | 1.3414          | 0.3358    |
| 0.8479        | 8.93  | 63   | 0.6745          | 0.3353    |
| 0.7954        | 9.93  | 70   | 0.6610          | 0.4157    |
| 0.7316        | 10.93 | 77   | 0.4977          | 0.4483    |
| 0.6027        | 11.93 | 84   | 0.4138          | 0.4517    |
| 0.5239        | 12.93 | 91   | 0.4185          | 0.4798    |
| 0.4802        | 13.93 | 98   | 0.3637          | 0.5082    |
| 0.5417        | 14.93 | 105  | 0.3360          | 0.5143    |
| 0.5022        | 15.93 | 112  | 0.5404          | 0.5207    |
| 0.4487        | 16.93 | 119  | 0.4884          | 0.5347    |
| 0.4229        | 17.93 | 126  | 0.2941          | 0.5530    |
| 0.3785        | 18.93 | 133  | 0.2920          | 0.5625    |
| 0.3448        | 19.93 | 140  | 0.3082          | 0.5589    |
| 0.3352        | 20.93 | 147  | 0.3006          | 0.5638    |
| 0.3219        | 21.93 | 154  | 0.2707          | 0.5737    |
| 0.3156        | 22.93 | 161  | 0.2623          | 0.5775    |
| 0.3142        | 23.93 | 168  | 0.3162          | 0.5752    |
| 0.3003        | 24.93 | 175  | 0.2487          | 0.5897    |
| 0.303         | 25.93 | 182  | 0.2633          | 0.5981    |
| 0.2757        | 26.93 | 189  | 0.2813          | 0.5921    |
| 0.2836        | 27.93 | 196  | 0.2696          | 0.5968    |
| 0.2759        | 28.93 | 203  | 0.2230          | 0.6060    |
| 0.232         | 29.93 | 210  | 0.2209          | 0.6081    |
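
The Spearmanr column is the rank correlation between predicted and true fluorescence on the validation split. The card does not include the evaluation code; a typical `compute_metrics` implementation for this metric (an assumption, not taken from this repo) looks like:

```python
# Hypothetical compute_metrics for the Spearmanr column
# (an assumption about the evaluation code, not taken from this repo).
from scipy.stats import spearmanr

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    # A regression head returns shape (batch, 1); flatten before correlating.
    rho, _ = spearmanr(predictions.squeeze(-1), labels)
    return {"spearmanr": rho}
```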

### Framework versions

- Transformers 4.18.0
- PyTorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1