---
license: apache-2.0
tags:
  - protein language model
  - generated_from_trainer
datasets:
  - train
metrics:
  - spearmanr
model-index:
  - name: tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: cradle-bio/tape-fluorescence
          type: train
        metrics:
          - name: Spearmanr
            type: spearmanr
            value: 0.5742059850477367
---

# tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert

This model is a fine-tuned version of [thundaa/tape-fluorescence-evotuning-DistilProtBert](https://huggingface.co/thundaa/tape-fluorescence-evotuning-DistilProtBert) on the cradle-bio/tape-fluorescence dataset. It achieves the following results on the evaluation set:

- Loss: 0.2709
- Spearmanr: 0.5742
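
The snippet below is a minimal inference sketch, not code from the authors: it assumes the checkpoint loads as a single-output regression head through `AutoModelForSequenceClassification`, and it follows the ProtBert convention of space-separated amino acids; the input sequence is purely illustrative.

```python
# Minimal inference sketch (assumptions: the checkpoint exposes a
# single-output regression head; input uses ProtBert-style spacing).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "thundaa/tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# ProtBert-family tokenizers expect amino acids separated by spaces.
sequence = "M S K G E E L F T G V V P I L V E L D G D V"  # illustrative sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    fluorescence = model(**inputs).logits.squeeze().item()
print(fluorescence)
```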

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 11
- gradient_accumulation_steps: 64
- total_train_batch_size: 2560
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
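
As a rough guide to reproducing this setup, here is a hedged `TrainingArguments` sketch mirroring the list above; it is not the authors' script, the output directory is a placeholder, and model/dataset loading is elided.

```python
# Sketch of a Trainer configuration mirroring the hyperparameters above
# (not the authors' script; output_dir is a placeholder).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tape-fluorescence-prediction",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=40,
    per_device_eval_batch_size=40,
    seed=11,
    gradient_accumulation_steps=64,  # 40 * 64 = 2560 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,  # Native AMP mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)
```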

### Training results

| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 6.4382        | 0.93  | 7    | 2.0198          | -0.0244   |
| 1.1243        | 1.93  | 14   | 0.7986          | -0.0083   |
| 0.802         | 2.93  | 21   | 0.6902          | 0.2336    |
| 0.7469        | 3.93  | 28   | 0.6665          | 0.3001    |
| 0.7519        | 4.93  | 35   | 0.6578          | 0.3895    |
| 0.7247        | 5.93  | 42   | 0.6346          | 0.3682    |
| 0.6991        | 6.93  | 49   | 0.8796          | 0.3681    |
| 0.6829        | 7.93  | 56   | 0.6098          | 0.3747    |
| 0.7241        | 8.93  | 63   | 0.7538          | 0.4345    |
| 0.6703        | 9.93  | 70   | 0.5646          | 0.4419    |
| 0.6415        | 10.93 | 77   | 1.6112          | 0.3947    |
| 1.0551        | 11.93 | 84   | 1.9104          | 0.4256    |
| 1.2621        | 12.93 | 91   | 0.5694          | 0.4640    |
| 0.7165        | 13.93 | 98   | 0.5647          | 0.4748    |
| 0.602         | 14.93 | 105  | 0.3979          | 0.4907    |
| 0.4668        | 15.93 | 112  | 0.3896          | 0.4891    |
| 0.5248        | 16.93 | 119  | 0.5101          | 0.4878    |
| 0.6232        | 17.93 | 126  | 0.3298          | 0.5128    |
| 0.5491        | 18.93 | 133  | 0.6220          | 0.5210    |
| 0.5022        | 19.93 | 140  | 0.5351          | 0.5212    |
| 0.7122        | 20.93 | 147  | 0.3773          | 0.5278    |
| 0.377         | 21.93 | 154  | 0.3368          | 0.5278    |
| 0.3689        | 22.93 | 161  | 0.4503          | 0.5266    |
| 0.3768        | 23.93 | 168  | 0.3237          | 0.5428    |
| 0.3308        | 24.93 | 175  | 0.2850          | 0.5559    |
| 0.3182        | 25.93 | 182  | 0.2804          | 0.5611    |
| 0.3135        | 26.93 | 189  | 0.2792          | 0.5660    |
| 0.2953        | 27.93 | 196  | 0.2669          | 0.5707    |
| 0.2917        | 28.93 | 203  | 0.2654          | 0.5742    |
| 0.2652        | 29.93 | 210  | 0.2709          | 0.5742    |
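
The Spearmanr column is the rank correlation between predicted and true fluorescence on the validation set. A typical way to produce it with a `Trainer` `compute_metrics` hook is sketched below; this is an assumption about the setup, not the authors' exact code.

```python
# Sketch of a compute_metrics hook producing the Spearmanr column
# (assumes a single-output regression head; not the authors' exact code).
from scipy.stats import spearmanr

def compute_metrics(eval_pred):
    predictions, labels = eval_pred  # numpy arrays provided by the Trainer
    rho = spearmanr(predictions.squeeze(), labels).correlation
    return {"spearmanr": rho}
```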

### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1