Model Card for Mistral-Chem-v1-134M (Mistral for chemistry)
The Mistral-Chem-v1-134M Large Language Model (LLM) is a pretrained generative chemical-molecule model with 134M parameters. It is derived from the Mixtral-8x7B-v0.1 model, simplified for molecules: the number of layers and the hidden size were reduced. The model was pretrained on 10M SMILES strings of molecules from the ZINC 15 database.
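The reduced depth and width can be checked directly from the model configuration. A minimal sketch; the hidden_size and num_hidden_layers attribute names follow standard Hugging Face conventions and are an assumption here, since the model ships custom code:
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect the downsized architecture.
config = AutoConfig.from_pretrained("RaphaelMourad/Mistral-Chem-v1-134M", trust_remote_code=True)
print(config.hidden_size)        # reduced hidden size
print(config.num_hidden_layers)  # reduced number of layers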
Model Architecture
Like Mixtral-8x7B-v0.1, it is a transformer model with the following architecture choices (each can be read from the configuration, as sketched after the list):
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
- Mixture of Experts
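Continuing with the config object loaded above, a quick look at the attention and expert settings. The field names assume the standard Mixtral configuration and may differ in the custom remote code:
# Grouped-query attention: fewer key/value heads than query heads.
print(config.num_attention_heads, config.num_key_value_heads)
# Sliding-window attention: size of the local attention window.
print(config.sliding_window)
# Mixture of experts: number of experts and experts activated per token.
print(config.num_local_experts, config.num_experts_per_tok)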
Load the model from Hugging Face:
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Chem-v1-134M", trust_remote_code=True)
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Chem-v1-134M", trust_remote_code=True)
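As a sanity check, the parameter count of the loaded model should be on the order of 134M:
# Sum the number of elements over all parameter tensors.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")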
Calculate the embedding of a molecule (SMILES string)
chem = "CCCCC[C@H](Br)CC"
inputs = tokenizer(chem, return_tensors = 'pt')["input_ids"]
hidden_states = model(inputs)[0] # [1, sequence_length, 256]
# embedding with max pooling
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape) # expect to be 256
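Several molecules can be embedded in one batch by padding and mean pooling over non-padded tokens. A minimal sketch, assuming the tokenizer either defines a padding token or can fall back to its EOS token:
smiles = ["CCO", "c1ccccc1", "CC(=O)O"]  # ethanol, benzene, acetic acid
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # assumption: reuse EOS as padding token
batch = tokenizer(smiles, return_tensors="pt", padding=True)
outputs = model(batch["input_ids"], attention_mask=batch["attention_mask"])[0]

# mask out padded positions, then average over the token dimension
mask = batch["attention_mask"].unsqueeze(-1)
embedding_mean = (outputs * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding_mean.shape)  # expected: torch.Size([3, 256])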
Troubleshooting
Ensure you are using a stable release of the Transformers library, version 4.34.0 or newer.
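To check the installed version:
import transformers

print(transformers.__version__)  # should be 4.34.0 or newer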
Notice
Mistral-Chem-v1-134M is a pretrained base model for chemistry.
Contact
Raphaël Mourad. raphael.mourad@univ-tlse3.fr