
ESM-2

ESM-2 is a state-of-the-art protein model trained on a masked language modelling objective. It is suitable for fine-tuning on a wide range of tasks that take protein sequences as input. For detailed information on the model architecture and training data, please refer to the accompanying paper. You may also be interested in some demo notebooks (PyTorch, TensorFlow) which demonstrate how to fine-tune ESM-2 models on your tasks of interest.
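
For quick experimentation, the snippet below is a minimal sketch of how this checkpoint can be loaded with the 🤗 Transformers library to predict a masked residue. The toy sequence and variable names are purely illustrative and are not taken from the model card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Any of the checkpoints listed below will work; this is the smallest one.
checkpoint = "facebook/esm2_t6_8M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Toy protein sequence with one residue replaced by the mask token.
sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVAT"
masked = sequence[:10] + tokenizer.mask_token + sequence[11:]

inputs = tokenizer(masked, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and decode the highest-scoring amino acid.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print("Predicted residue:", tokenizer.decode(predicted_id))
```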

Several ESM-2 checkpoints of varying sizes are available on the Hub. Larger models generally achieve somewhat better accuracy but require much more memory and time to train:

| Checkpoint name | Num layers | Num parameters |
|---|---|---|
| esm2_t48_15B_UR50D | 48 | 15B |
| esm2_t36_3B_UR50D | 36 | 3B |
| esm2_t33_650M_UR50D | 33 | 650M |
| esm2_t30_150M_UR50D | 30 | 150M |
| esm2_t12_35M_UR50D | 12 | 35M |
| esm2_t6_8M_UR50D | 6 | 8M |
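
As a rough sketch of the fine-tuning path mentioned above, any of these checkpoints can be loaded with a task-specific head through the 🤗 Transformers auto classes. The checkpoint choice, the binary classification setup, and the toy sequences below are assumptions for illustration, not part of the original card.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical binary sequence-classification task; swap in any checkpoint
# from the table above depending on your accuracy/compute budget.
checkpoint = "facebook/esm2_t12_35M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Toy sequences; in practice these would come from your labelled dataset and
# be fed to the standard Trainer / Keras training loops from the demo notebooks.
batch = tokenizer(
    ["MKTVRQERLKSIVRILERSKEPVSGAQLAEEL", "MALWMRLLPLLALLALWGPDPAAA"],
    padding=True,
    return_tensors="pt",
)
outputs = model(**batch)
print(outputs.logits.shape)  # torch.Size([2, 2]) -> one score per class per sequence
```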