thundaa committed
Commit 090dafb (1 parent: 35e654a)

update model card README.md

Files changed (1): README.md (+79 -0)
---
license: apache-2.0
tags:
- protein language model
- generated_from_trainer
datasets:
- train
metrics:
- spearmanr
model-index:
- name: tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: cradle-bio/tape-fluorescence
      type: train
    metrics:
    - name: Spearmanr
      type: spearmanr
      value: 0.3011866489457721
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert

This model is a fine-tuned version of [thundaa/tape-fluorescence-evotuning-DistilProtBert](https://huggingface.co/thundaa/tape-fluorescence-evotuning-DistilProtBert) on the cradle-bio/tape-fluorescence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6667
- Spearmanr: 0.3012
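
To try the checkpoint, it can be loaded with the `transformers` Auto classes. The following is a minimal sketch, assuming the repo id matches the model name above and that the checkpoint carries a single-output regression head (both assumptions; the card does not state them):

```python
# Minimal inference sketch. Assumptions, not stated in this card:
#  * the repo id below is inferred from the model name above
#  * the checkpoint is a single-output sequence regressor (num_labels=1)
#  * sequences are space-separated, as is conventional for ProtBert tokenizers
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "thundaa/tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

sequence = "M S K G E E L F T G V V P I L V E L D G D V N G H K F"  # toy fragment
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze(-1)
print(score.item())  # predicted fluorescence score
```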

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (reproduced as a `TrainingArguments` sketch below):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 4096
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
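
The list above maps onto `transformers` `TrainingArguments` roughly as follows. This is a hedged reconstruction, not the original training script; the output directory is an assumption, and the Adam betas/epsilon match the library defaults so they are not set explicitly:

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="tape-fluorescence-prediction",  # assumed, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=128,  # 32 * 128 = 4096 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
)
```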

### Training results

| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 9.0152        | 0.85  | 4    | 4.9739          | -0.0152   |
| 4.0359        | 1.85  | 8    | 2.1126          | 0.0919    |
| 1.7594        | 2.85  | 12   | 0.9896          | 0.0973    |
| 0.9771        | 3.85  | 16   | 0.6949          | 0.3219    |
| 0.7046        | 4.85  | 20   | 0.6667          | 0.3012    |
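
The Spearmanr column is Spearman's rank correlation between predicted and true fluorescence values. A minimal `compute_metrics` sketch, assuming single-output regression predictions (an assumption; the original training script is not shown):

```python
from scipy.stats import spearmanr

def compute_metrics(eval_pred):
    """Rank correlation between predicted and true fluorescence values."""
    predictions, labels = eval_pred
    rho, _ = spearmanr(predictions.squeeze(), labels)
    return {"spearmanr": rho}
```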


### Framework versions

- Transformers 4.18.0
- PyTorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1