jacobfulano committed 65996c1 (1 parent: 4695bbf)

Update README.md

Files changed (1): README.md (+17 -1)
README.md CHANGED
@@ -62,6 +62,20 @@ reduces the number of read/write operations between the GPU HBM (high bandwidth

# How to use

+
+
+
+ ```python
+ from transformers import AutoModelForMaskedLM
+ mlm = AutoModelForMaskedLM.from_pretrained('mosaicml/mosaic-bert-base', use_auth_token=<your token>, trust_remote_code=True)
+ ```
+ The tokenizer for this model is the Hugging Face `bert-base-uncased` tokenizer.
+
+ ```python
+ from transformers import BertTokenizer
+ tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
+ ```
+
## Training data

MosaicBERT is pretrained using a standard Masked Language Modeling (MLM) objective: the model is given a sequence of
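
Putting the two added snippets together: a minimal fill-mask sketch, assuming the remote-code model returns standard `MaskedLMOutput` logits. The prompt and top-1 decoding are illustrative, and the `use_auth_token` argument is omitted for brevity.

```python
import torch
from transformers import AutoModelForMaskedLM, BertTokenizer

# Load the pretrained MosaicBERT weights; trust_remote_code pulls in the custom model class.
mlm = AutoModelForMaskedLM.from_pretrained('mosaicml/mosaic-bert-base', trust_remote_code=True)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Mask a single token and score the vocabulary at that position (illustrative prompt).
inputs = tokenizer('The capital of France is [MASK].', return_tensors='pt')
with torch.no_grad():
    logits = mlm(**inputs).logits  # assumption: standard MaskedLMOutput

# Locate the [MASK] position and decode the highest-scoring token.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```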
 
@@ -109,4 +123,6 @@ GLUE test results:
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | | | | | | | | | |

- ## Intended uses & limitations
+ ## Intended uses & limitations
+
+ This model is intended to be finetuned on downstream tasks.
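
The diff only states the intent, so here is a hedged finetuning sketch under stated assumptions: that the repo's remote code exposes a sequence-classification head through `AutoModelForSequenceClassification`, with the SST-2 dataset, `num_labels=2`, and the hyperparameters chosen purely for illustration.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, BertTokenizer,
                          Trainer, TrainingArguments)

# Assumption: the remote code provides a classification head; num_labels is task-specific.
model = AutoModelForSequenceClassification.from_pretrained(
    'mosaicml/mosaic-bert-base', num_labels=2, trust_remote_code=True)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Example downstream task: SST-2 sentiment classification from GLUE.
dataset = load_dataset('glue', 'sst2')

def tokenize(batch):
    return tokenizer(batch['sentence'], truncation=True,
                     padding='max_length', max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='mosaic-bert-sst2',
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset['train'],
    eval_dataset=dataset['validation'],
)
trainer.train()
```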