# ProteinForceGPT: Generative strategies for modeling, design and analysis of protein mechanics
### Basic information

This protein language model is a GPT-style autoregressive transformer trained to analyze and predict the mechanical properties of a large number of protein sequences.

The foundation model is based on the GPT-NeoX architecture and uses rotary positional embeddings (RoPE). It has 16 attention heads, 36 hidden layers, a hidden size of 1024, an intermediate size of 4086, and a GeLU activation function.
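A quick way to verify these hyperparameters is to load the configuration alone (a minimal sketch; the attribute names assume the standard Hugging Face config interface):

```python
from transformers import AutoConfig

# Load only the configuration, not the model weights.
config = AutoConfig.from_pretrained("lamm-mit/ProteinForceGPT", trust_remote_code=True)

print(config.num_attention_heads)  # expected: 16
print(config.num_hidden_layers)    # expected: 36
print(config.hidden_size)          # expected: 1024
print(config.intermediate_size)    # expected: 4086
```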
The pretraining task is defined as "Sequence<...>" where ... is an amino acid sequence.
Pretraining dataset: https://huggingface.co/datasets/lamm-mit/GPTProteinPretrained

Pretrained model: https://huggingface.co/lamm-mit/GPTProteinPretrained

In this fine-tuned model, mechanics-related forward and inverse tasks are:
```raw
CalculateForce<GEECDCGSPSNP..>,
...
GenerateForce<0.220>
...
GenerateForceEnergy<0.262,0.220>
GenerateForceHistory<0.004,0.034,0.125,0.142,0.159,0.102,0.079,0.073,0.131,0.105,0.071,0.058,0.072,0.060,0.049,0.114,0.122,0.108,0.173,0.192,0.208,0.153,0.212,0.222,0.244>
```
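Each task is issued as a plain-text prompt in the `TaskName<payload>` syntax listed above. A minimal sketch of composing such prompts (`make_prompt` is a hypothetical helper, not part of the model card):

```python
# Hypothetical helper: wrap a payload in the TaskName<...> prompt syntax.
def make_prompt(task: str, payload: str) -> str:
    return f"{task}<{payload}>"

# Forward task: predict mechanics from a sequence (sequence truncated here).
print(make_prompt("CalculateForce", "GEECDCGSPSNP"))  # CalculateForce<GEECDCGSPSNP>

# Inverse task: design a sequence for a target normalized force.
print(make_prompt("GenerateForce", "0.220"))          # GenerateForce<0.220>
```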
### Load model

Load the model and tokenizer:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run on GPU if available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

ForceGPT_model_name = 'lamm-mit/ProteinForceGPT'

tokenizer = AutoTokenizer.from_pretrained(ForceGPT_model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    ForceGPT_model_name,
    trust_remote_code=True,
).to(device)
```
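With the model and tokenizer loaded, a forward task can be queried through the standard `generate` API. A minimal sketch (the sampling settings are illustrative choices, not the model card's exact values):

```python
prompt = "CalculateForce<GEECDCGSPSNPCCDAATCKLRPGAQCADGLCCDQCRFKKKRTICRIARGDFPDDRCTGQSADCPRWN>"

# Tokenize the prompt and move it to the model's device.
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Sample a completion; the model appends the predicted force after the prompt.
outputs = model.generate(
    **inputs,
    max_new_tokens=16,
    do_sample=True,
    temperature=0.3,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```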
Output:

```raw
0: CalculateForce<GEECDCGSPSNPCCDAATCKLRPGAQCADGLCCDQCRFKKKRTICRIARGDFPDDRCTGQSADCPRWN> [0.262]
```
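The numeric prediction can be recovered from a completion with a regular expression (a sketch, assuming the bracketed-number format of the sample output above):

```python
import re

completion = "CalculateForce<GEECDCGSPSNPCCDAATCKLRPGAQCADGLCCDQCRFKKKRTICRIARGDFPDDRCTGQSADCPRWN> [0.262]"

# Extract the first bracketed number, e.g. "[0.262]" -> 0.262.
match = re.search(r"\[([\d.]+)\]", completion)
if match:
    force = float(match.group(1))
    print(force)  # 0.262
```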
## Citations

To cite this work:

```
@article{GhafarollahiBuehler_2024,
...
pages = {},
url = {}
}
```
The dataset used to fine-tune the model is described in:
```
@article{NiKaplanBuehler_2024,
title = {ForceGen: End-to-end de novo protein generation based on nonlinear mechanical unfolding responses using a protein language diffusion model},
author = {B. Ni, D.L. Kaplan, M.J. Buehler},
journal = {Science Advances},
year = {2024},
volume = {},
pages = {},
url = {}
}
```
|