---
license: apache-2.0
tags:
- protein language model
- generated_from_trainer
datasets:
- train
metrics:
- spearmanr
model-index:
- name: tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: cradle-bio/tape-fluorescence
type: train
metrics:
- name: Spearmanr
type: spearmanr
value: 0.5742059850477367
---
# tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert
This model is a fine-tuned version of [thundaa/tape-fluorescence-evotuning-DistilProtBert](https://huggingface.co/thundaa/tape-fluorescence-evotuning-DistilProtBert) on the cradle-bio/tape-fluorescence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2709
- Spearmanr: 0.5742
## Model description
This model is a DistilProtBert protein language model with a sequence-level regression head, fine-tuned to predict protein fluorescence. The base checkpoint, [thundaa/tape-fluorescence-evotuning-DistilProtBert](https://huggingface.co/thundaa/tape-fluorescence-evotuning-DistilProtBert), appears (from its name) to have been evotuned on fluorescence-related sequences before this supervised fine-tuning step. Quality is reported as the Spearman rank correlation between predicted and measured fluorescence.
## Intended uses & limitations
The model is intended for predicting the fluorescence of protein sequence variants, as in the TAPE fluorescence task. It has only been evaluated on the cradle-bio/tape-fluorescence dataset (evaluation Spearman ≈ 0.574), so performance on unrelated proteins or other properties is untested. A minimal usage sketch follows.
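The hub id below is an assumption based on the card title and the base model's owner, and the single-output regression head is inferred from the Spearman metric; neither is stated explicitly in the card.

```python
import re
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed hub id: the card title under the base model's owner.
model_name = "thundaa/tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# ProtBert-family tokenizers expect space-separated residues,
# with rare amino acids (U, Z, O, B) mapped to X.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # example sequence, not from the dataset
spaced = " ".join(re.sub(r"[UZOB]", "X", sequence))

inputs = tokenizer(spaced, return_tensors="pt")
with torch.no_grad():
    prediction = model(**inputs).logits  # assumed shape (1, 1): fluorescence score
print(prediction.item())
```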
## Training and evaluation data
The model was trained and evaluated on the [cradle-bio/tape-fluorescence](https://huggingface.co/datasets/cradle-bio/tape-fluorescence) dataset, a packaging of the TAPE fluorescence task, which pairs protein sequence variants with measured fluorescence values. The card metadata identifies the split only as `train`, so the exact evaluation split is not documented here.
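Since the splits and column names are not spelled out in the card, a loading sketch that simply inspects the dataset is the safest starting point:

```python
from datasets import load_dataset

# Dataset id taken from the card; splits and columns are inspected rather than assumed.
dataset = load_dataset("cradle-bio/tape-fluorescence")
print(dataset)  # shows available splits and their columns
```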
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 11
- gradient_accumulation_steps: 64
- total_train_batch_size: 2560
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
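As a rough guide, the list above maps onto `TrainingArguments` along these lines; the output directory, evaluation strategy, and surrounding training script are assumptions, not the original code:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tape-fluorescence-prediction",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=40,
    per_device_eval_batch_size=40,
    gradient_accumulation_steps=64,   # 40 * 64 = 2560 total train batch size
    num_train_epochs=30,
    seed=11,
    lr_scheduler_type="linear",
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="epoch",      # assumption: matches the per-epoch table below
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default in Transformers 4.18, so it needs no explicit argument.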
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 6.4382 | 0.93 | 7 | 2.0198 | -0.0244 |
| 1.1243 | 1.93 | 14 | 0.7986 | -0.0083 |
| 0.802 | 2.93 | 21 | 0.6902 | 0.2336 |
| 0.7469 | 3.93 | 28 | 0.6665 | 0.3001 |
| 0.7519 | 4.93 | 35 | 0.6578 | 0.3895 |
| 0.7247 | 5.93 | 42 | 0.6346 | 0.3682 |
| 0.6991 | 6.93 | 49 | 0.8796 | 0.3681 |
| 0.6829 | 7.93 | 56 | 0.6098 | 0.3747 |
| 0.7241 | 8.93 | 63 | 0.7538 | 0.4345 |
| 0.6703 | 9.93 | 70 | 0.5646 | 0.4419 |
| 0.6415 | 10.93 | 77 | 1.6112 | 0.3947 |
| 1.0551 | 11.93 | 84 | 1.9104 | 0.4256 |
| 1.2621 | 12.93 | 91 | 0.5694 | 0.4640 |
| 0.7165 | 13.93 | 98 | 0.5647 | 0.4748 |
| 0.602 | 14.93 | 105 | 0.3979 | 0.4907 |
| 0.4668 | 15.93 | 112 | 0.3896 | 0.4891 |
| 0.5248 | 16.93 | 119 | 0.5101 | 0.4878 |
| 0.6232 | 17.93 | 126 | 0.3298 | 0.5128 |
| 0.5491 | 18.93 | 133 | 0.6220 | 0.5210 |
| 0.5022 | 19.93 | 140 | 0.5351 | 0.5212 |
| 0.7122 | 20.93 | 147 | 0.3773 | 0.5278 |
| 0.377 | 21.93 | 154 | 0.3368 | 0.5278 |
| 0.3689 | 22.93 | 161 | 0.4503 | 0.5266 |
| 0.3768 | 23.93 | 168 | 0.3237 | 0.5428 |
| 0.3308 | 24.93 | 175 | 0.2850 | 0.5559 |
| 0.3182 | 25.93 | 182 | 0.2804 | 0.5611 |
| 0.3135 | 26.93 | 189 | 0.2792 | 0.5660 |
| 0.2953 | 27.93 | 196 | 0.2669 | 0.5707 |
| 0.2917 | 28.93 | 203 | 0.2654 | 0.5742 |
| 0.2652 | 29.93 | 210 | 0.2709 | 0.5742 |
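For reference, the Spearmanr column can be produced by a metric function along these lines (the exact implementation used during training is not shown in the card):

```python
import numpy as np
from scipy.stats import spearmanr

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.squeeze(predictions)  # (N, 1) regression outputs -> (N,)
    return {"spearmanr": spearmanr(predictions, labels).correlation}
```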
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1