---
license: apache-2.0
language:
- en
pipeline_tag: fill-mask
inference: false
---

# Monarch Mixer-BERT

The 341M checkpoint for M2-BERT-large from the paper [Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture](https://arxiv.org/abs/2310.12109).

Check out our [GitHub](https://github.com/HazyResearch/m2/tree/main) for instructions on how to download and fine-tune it!

## How to use

You can load this model using Hugging Face `AutoModel`:

```python
from transformers import AutoModelForMaskedLM
mlm = AutoModelForMaskedLM.from_pretrained('alycialee/m2-bert-341M', trust_remote_code=True)
```

This model uses the Hugging Face `bert-base-uncased` tokenizer:

```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
```

You can use this model with a pipeline for masked language modeling:

```python
from transformers import AutoModelForMaskedLM, BertTokenizer, pipeline

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
mlm = AutoModelForMaskedLM.from_pretrained('alycialee/m2-bert-341M', trust_remote_code=True)

unmasker = pipeline('fill-mask', model=mlm, tokenizer=tokenizer)
unmasker('Every morning, I enjoy a cup of [MASK] to start my day.')
```

### Remote Code

This model requires `trust_remote_code=True` to be passed to the `from_pretrained` method, because we use custom PyTorch code (see our GitHub). You should consider passing a `revision` argument that pins the exact git commit of the code, for example:

```python
mlm = AutoModelForMaskedLM.from_pretrained(
    'alycialee/m2-bert-341M',
    trust_remote_code=True,
    revision='ecb4a4a',
)
```

### Configuration

Note that `use_flash_mm` is false by default; using FlashMM is currently not supported. The `hyena_training_additions` option is also turned off.
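If you want to verify these settings after loading, one option is to read them off the model's configuration. This is a minimal sketch: the attribute names `use_flash_mm` and `hyena_training_additions` are taken from this card, but how they are exposed depends on the custom configuration class shipped with the remote code, so the lookups below use safe defaults.

```python
from transformers import AutoConfig

# Load the configuration that ships with the remote code;
# trust_remote_code=True is required here for the same reason as above.
config = AutoConfig.from_pretrained('alycialee/m2-bert-341M', trust_remote_code=True)

# Attribute names assumed from this model card; getattr with a default
# keeps the check safe if a field is absent in a given revision.
print(getattr(config, 'use_flash_mm', None))              # expected: False
print(getattr(config, 'hyena_training_additions', None))  # expected: off/False
```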