---
license: apache-2.0
language:
  - en
pipeline_tag: sentence-similarity
inference: false
---

# Monarch Mixer-BERT

The 80M checkpoint for M2-BERT-base from the paper *Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture*. This model was pretrained with sequence length 2048 and fine-tuned for long-context retrieval.

This model was trained by Jon Saad-Falcon, Dan Fu, and Simran Arora.

Check out our GitHub for instructions on how to download and fine-tune it!

## How to use

You can load this model using the Hugging Face `AutoModel` classes:

```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("togethercomputer/m2-bert-80M-2k-retrieval", trust_remote_code=True)
```
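Note that `trust_remote_code=True` is required: the Monarch Mixer layers are implemented in custom modeling code that ships with the checkpoint rather than in the `transformers` library itself.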

This model generates embeddings for retrieval. The embeddings have a dimensionality of 768:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

max_seq_length = 2048
testing_string = "Every morning, I make a cup of coffee to start my day."
model = AutoModelForMaskedLM.from_pretrained("togethercomputer/m2-bert-80M-2k-retrieval", trust_remote_code=True)

# M2-BERT reuses the standard bert-base-uncased tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", model_max_length=max_seq_length)
input_ids = tokenizer([testing_string], return_tensors="pt", padding="max_length", return_token_type_ids=False, truncation=True, max_length=max_seq_length)

outputs = model(**input_ids)
embeddings = outputs["sentence_embedding"]  # shape: (1, 768)
```
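For retrieval, you would typically compare these embeddings with a similarity measure such as cosine similarity. The sketch below is illustrative rather than part of the official model card: the `embed` helper and the example strings are our own, and it reuses the `model`, `tokenizer`, and `max_seq_length` defined above.

```python
import torch
import torch.nn.functional as F

# Hypothetical helper: embed one string with the model and tokenizer from above.
def embed(text: str) -> torch.Tensor:
    inputs = tokenizer([text], return_tensors="pt", padding="max_length",
                       return_token_type_ids=False, truncation=True,
                       max_length=max_seq_length)
    with torch.no_grad():
        return model(**inputs)["sentence_embedding"]  # shape: (1, 768)

query_emb = embed("How do I start my morning?")
doc_emb = embed("Every morning, I make a cup of coffee to start my day.")

# Higher cosine similarity means the document is more relevant to the query.
score = F.cosine_similarity(query_emb, doc_emb)
print(score.item())
```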