---
license: mit
---

We provide both a Hugging Face version and an [esm version](https://github.com/facebookresearch/esm) of SaProt (see our GitHub repo: <https://github.com/SaProt/SaProt>). Users can choose either one.

### Hugging Face model
The following code shows how to load the model.
```python
from transformers import EsmTokenizer, EsmForMaskedLM

model_path = "/your/path/to/SaProt_650M_AF2"
tokenizer = EsmTokenizer.from_pretrained(model_path)
model = EsmForMaskedLM.from_pretrained(model_path)

#################### Example ####################
device = "cuda"
model.to(device)

# A structure-aware sequence: each token pairs an amino acid (upper case)
# with a Foldseek structure letter (lower case)
seq = "MdEvVpQpLrVyQdYaKv"
tokens = tokenizer.tokenize(seq)
print(tokens)

inputs = tokenizer(seq, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}

outputs = model(**inputs)
print(outputs.logits.shape)

"""
['Md', 'Ev', 'Vp', 'Qp', 'Lr', 'Vy', 'Qd', 'Ya', 'Kv']
torch.Size([1, 11, 446])
"""
```
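
Since the checkpoint exposes a masked-LM head, you can also query predictions at a masked position. The following is a minimal sketch, not part of the official examples; it only assumes the `tokenizer`, `model`, and `device` defined above and the standard `transformers` mask-token API.

```python
# Sketch: predict the token at a masked position with the MLM head.
# Assumes `tokenizer`, `model`, and `device` from the snippet above.
masked_seq = "MdEvVp" + tokenizer.mask_token + "LrVyQdYaKv"
inputs = tokenizer(masked_seq, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}

logits = model(**inputs).logits
# Locate the masked position and take the highest-scoring vocabulary entry
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
pred_id = logits[0, mask_pos].argmax(-1).item()
print(tokenizer.convert_ids_to_tokens([pred_id]))
```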

### esm model
The esm version is also stored in the same folder, named `SaProt_650M_AF2.pt`. We provide a function to load the model.
```python
from utils.esm_loader import load_esm_saprot

model_path = "/your/path/to/SaProt_650M_AF2.pt"
model, alphabet = load_esm_saprot(model_path)
```
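
If the returned `model` and `alphabet` follow the standard esm interface (an assumption here; the repr layer index in particular is our guess for a 650M ESM-2-style model, so check the SaProt repo), tokenization and embedding extraction work as in any esm model:

```python
# Sketch, assuming the standard esm alphabet/batch-converter interface.
batch_converter = alphabet.get_batch_converter()
data = [("protein1", "MdEvVpQpLrVyQdYaKv")]
labels, strs, tokens = batch_converter(data)

# repr_layers=[33] assumes a 33-layer (650M, ESM-2-style) backbone
results = model(tokens, repr_layers=[33])
embeddings = results["representations"][33]  # per-token representations
```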