gchhablani committed on
Commit 391de81
1 Parent(s): da6e6cd

Add README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -31,11 +31,11 @@ was pretrained with two objectives:
  predict if the two sentences were following each other or not.
  This way, the model learns an inner representation of the English language that can then be used to extract features
  useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
- classifier using the features produced by the BERT model as inputs.
+ classifier using the features produced by the MultiBERTs model as inputs.

  ## Intended uses & limitations
  You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
- be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
+ be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
  fine-tuned versions on a task that interests you.
  Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
  to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
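The updated text names masked language modeling as the main way to use the raw model. A minimal sketch of that usage with the 🤗 Transformers fill-mask pipeline, assuming a hypothetical checkpoint id such as `google/multiberts-seed_0` (substitute the id of the checkpoint this README accompanies):

```python
# Minimal sketch: masked language modeling with a MultiBERTs checkpoint.
# The model id below is an assumption; replace it with the checkpoint
# this README is attached to.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="google/multiberts-seed_0")
print(unmasker("Hello, I'm a [MASK] model."))
```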