Update README.md
---
license: mit
language: protein
tags:
- protein language model
datasets:
- Uniref50
---
# DistilProtBert model

A distilled version of the [ProtBert](https://huggingface.co/Rostlab/prot_bert) model.
In addition to the cross-entropy and cosine teacher-student distillation losses, DistilProtBert was pretrained with a masked language modeling (MLM) objective, and it works only with capital-letter amino acids.
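For intuition, here is a minimal sketch (not the authors' training code) of how these three losses could be combined during distillation, following the general DistilBERT recipe; the loss weights, temperature, and function name are illustrative assumptions.

```python
# Illustrative sketch of combining the three pretraining losses named above:
# soft-target cross entropy (teacher-student), MLM loss, and a cosine loss
# aligning student and teacher hidden states. Weights/temperature are assumed.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,
                      labels, temperature=2.0,
                      alpha_ce=0.5, alpha_mlm=0.3, alpha_cos=0.2):
    # Teacher-student cross entropy on softened output distributions
    ce = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Standard MLM loss on masked positions (labels are -100 elsewhere)
    mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )

    # Cosine loss pulling student hidden states toward the teacher's
    flat_student = student_hidden.view(-1, student_hidden.size(-1))
    flat_teacher = teacher_hidden.view(-1, teacher_hidden.size(-1))
    target = torch.ones(flat_student.size(0), device=flat_student.device)
    cos = F.cosine_embedding_loss(flat_student, flat_teacher, target)

    return alpha_ce * ce + alpha_mlm * mlm + alpha_cos * cos
```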
# Model description

DistilProtBert was pretrained on millions of protein sequences.

A few important differences between DistilProtBert and the original ProtBert are:
1. The size of the model
2. The size of the pretraining dataset
3. The time and hardware used for pretraining
## Intended uses & limitations

The model can be used for protein feature extraction or fine-tuned on downstream tasks.
### How to use

The model can be used in the same way as ProtBert; a usage sketch follows below.
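A minimal usage sketch with 🤗 Transformers, mirroring the ProtBert example; the checkpoint identifier and input sequence below are placeholders chosen for illustration, not values stated in this card.

```python
# Usage sketch: load the model and run fill-mask, as with ProtBert.
# "yarongef/DistilProtBert" is an assumed checkpoint id - replace it with the
# actual repository name of this model.
from transformers import BertForMaskedLM, BertTokenizer, pipeline

checkpoint = "yarongef/DistilProtBert"  # assumed repo id
tokenizer = BertTokenizer.from_pretrained(checkpoint, do_lower_case=False)
model = BertForMaskedLM.from_pretrained(checkpoint)

# Amino acids must be upper case and whitespace separated
unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(unmasker("D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T"))
```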
## Training data

DistilProtBert was pretrained on [Uniref50](https://www.uniprot.org/downloads), a dataset consisting of ~43 million protein sequences (only sequences between 20 and 512 amino acids long were used).
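As a small illustration of the length filter described above (a sketch, not the authors' preprocessing pipeline), assuming the sequences are available as a local FASTA file:

```python
# Keep only sequences of 20-512 amino acids; "uniref50.fasta" is an assumed
# local file name, not a path given in this card.
from Bio import SeqIO

filtered = [
    record for record in SeqIO.parse("uniref50.fasta", "fasta")
    if 20 <= len(record.seq) <= 512
]
```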
# Pretraining procedure

Preprocessing was done using ProtBert's tokenizer.
The details of the masking procedure for each sequence followed the original BERT recipe (as described in [ProtBert](https://huggingface.co/Rostlab/prot_bert)).
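For concreteness, a sketch of ProtBert-style preprocessing together with BERT-style dynamic masking using 🤗 Transformers; the 15% masking probability follows the original BERT default and is not stated explicitly in this card, and the example sequence is arbitrary.

```python
# Tokenize an upper-case, whitespace-separated sequence with ProtBert's
# tokenizer and apply BERT-style masking for the MLM objective.
from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)

example = "M K T A Y I A K Q R Q I S F V K S H F S R Q L E E R L G L I E V Q"
encoding = tokenizer(example, truncation=True, max_length=512)

# Randomly masks 15% of tokens per batch (assumed, per the original BERT)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
batch = collator([encoding])
```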
The model was pretrained on a single DGX cluster for 3 epochs in total. The local batch size was 16, the optimizer was AdamW with a learning rate of 5e-5, and mixed precision was used.
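As a rough illustration only, the stated hyperparameters could be expressed as a 🤗 `TrainingArguments` configuration; anything not mentioned above (warmup, weight decay, number of devices, etc.) is left at library defaults, and the output directory is a placeholder.

```python
# Illustrative configuration matching the stated hyperparameters; AdamW is the
# Trainer's default optimizer and fp16 enables mixed precision.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilprotbert-pretraining",  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=16,  # local batch size
    learning_rate=5e-5,
    fp16=True,  # mixed precision
)
```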
## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

| Task/Dataset | Secondary structure (3-state, %) | Membrane (%) |
|:-----:|:-----:|:-----:|
| CASP12 | 72 | |
| TS115 | 81 | |
| CB513 | 79 | |
| DeepLoc | | 86 |
Distinguish between:

### BibTeX entry and citation info