---
license: apache-2.0
language:
- en
pipeline_tag: fill-mask
---

# Monarch Mixer-BERT

The 110M checkpoint for M2-BERT-base from the paper [Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture](https://arxiv.org/abs/2310.12109).

Check out our [GitHub](https://github.com/HazyResearch/m2/tree/main) for instructions on how to download and fine-tune it!

## How to use

Using AutoModel:

```python
from transformers import AutoModelForMaskedLM
mlm = AutoModelForMaskedLM.from_pretrained('alycialee/m2-bert-110M', trust_remote_code=True)
```

You can use this model with a pipeline for masked language modeling:

```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='alycialee/m2-bert-110M', trust_remote_code=True)
unmasker("Every morning, I enjoy a cup of [MASK] to start my day.")
```

### Remote Code

This model requires `trust_remote_code=True` to be passed to the `from_pretrained` method, because we use custom PyTorch code (see our GitHub). You should consider passing a `revision` argument that pins the exact git commit of the code, for example:

```python
mlm = AutoModelForMaskedLM.from_pretrained(
    'alycialee/m2-bert-110M',
    trust_remote_code=True,
    revision='eee02a4',
)
```

### Configuration

Note that `use_flash_mm` is false by default; using FlashMM is currently not supported. `hyena_training_additions` is turned off.
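
If you want to confirm these defaults before loading the full model, you can inspect the configuration directly. The snippet below is a minimal sketch: the repo id and flag names come from this card, but whether the flags are exposed as attributes on the config object is an assumption.

```python
from transformers import AutoConfig

# Load the custom M2-BERT config (trust_remote_code is needed for the custom config class).
config = AutoConfig.from_pretrained('alycialee/m2-bert-110M', trust_remote_code=True)

# Check the flags mentioned above; getattr guards against a flag not being
# exposed as a config attribute.
print('use_flash_mm:', getattr(config, 'use_flash_mm', 'not set'))
print('hyena_training_additions:', getattr(config, 'hyena_training_additions', 'not set'))
```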