---
license: apache-2.0
tags:
- pretrained
- mistral
- protein
---

# Model Card for Mistral-Prot-small (Mistral for protein)

The Mistral-Prot-small Large Language Model (LLM) is a pretrained generative protein language model with 16.725M parameters per expert x 8 experts = 133.8M parameters in total.
It is derived from the Mistral-7B-v0.1 model, which was simplified for proteins: the number of layers and the hidden size were reduced.
The model was pretrained using 1M protein sequences from the UniProt 50 database.
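
As a quick check of the stated size, here is a minimal sketch that simply loads the checkpoint with `AutoModel` (as in the example further below) and counts its parameters:

```python
from transformers import AutoModel

# Load the checkpoint and count its parameters
# (expected to total roughly 133.8M across the 8 experts).
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Prot-small", trust_remote_code=True)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")
```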

## Model Architecture

Like Mistral-7B-v0.1, it is a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
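
The reduced layer count and hidden size are not spelled out above; as a minimal sketch, assuming the published configuration exposes the standard Mistral/Mixtral fields, they can be read directly from the config:

```python
from transformers import AutoConfig

# Read the model configuration from the Hub.
config = AutoConfig.from_pretrained("RaphaelMourad/Mistral-Prot-small", trust_remote_code=True)

print(config.num_hidden_layers)    # number of transformer layers
print(config.hidden_size)          # hidden size (the 256-dim embeddings below suggest 256)
print(config.sliding_window)       # sliding-window attention width
print(config.num_key_value_heads)  # grouped-query attention: fewer KV heads than query heads
```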

## Load the model from Hugging Face

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Prot-small", trust_remote_code=True)
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Prot-small", trust_remote_code=True)
```
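
Optionally (a common inference pattern, not something the card requires), put the model in evaluation mode before computing embeddings; this continues from the objects loaded above:

```python
# Dropout is disabled in eval mode, so embeddings are deterministic.
model.eval()
```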

## Calculate the embedding of a protein sequence

```python
insulin = "MALWMRLLPLLALLALWGPDPAAAFVNQHLCGSHLVEALYLVCGERGFFYTPKTRREAEDLQVGQVELGGGPGAGSLQPLALEGSLQKRGIVEQCCTSICSLYQLENYCN"
inputs = tokenizer(insulin, return_tensors='pt')["input_ids"]
hidden_states = model(inputs)[0]  # [1, sequence_length, 256]

# embedding with max pooling
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape)  # expect a 256-dimensional vector
```
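
Max pooling is one choice of sequence-level summary; as an alternative sketch (not part of the original example), a mean-pooled embedding can be computed from the same hidden states:

```python
# Continuing from the block above: average over the sequence dimension instead of taking the max.
embedding_mean = torch.mean(hidden_states[0], dim=0)
print(embedding_mean.shape)  # also a 256-dimensional vector
```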

## Troubleshooting

Ensure you are using a stable version of Transformers, 4.34.0 or newer.
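
A quick way to check which version is installed (upgrade with `pip install -U transformers` if it is older):

```python
import transformers

# The examples above assume Transformers 4.34.0 or newer.
print(transformers.__version__)
```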

## Notice

Mistral-Prot-small is a pretrained base model for proteins.

## Contact

Raphaël Mourad. raphael.mourad@univ-tlse3.fr