gchhablani committed
Commit
c2c4571
1 Parent(s): 7fdcddd

Fix MLM checkpoint in write-up

Files changed (2)
  1. sections/mlm_intro.md +1 -1
  2. sections/mlm_usage.md +1 -1
sections/mlm_intro.md CHANGED
@@ -1,4 +1,4 @@
-This demo uses a [CLIP-Vision-Bert model checkpoint](https://huggingface.co/flax-community/clip-vision-bert-cc12m-60k) pre-trained using text-only Masked LM on approximately 10 million image-text pairs taken from the [Conceptual 12M dataset](https://github.com/google-research-datasets/conceptual-12m) translated using [MBart](https://huggingface.co/transformers/model_doc/mbart.html). The translations are performed in the following four languages: English, French, German and Spanish, giving 2.5M examples in each language.
+This demo uses a [CLIP-Vision-Bert model checkpoint](https://huggingface.co/flax-community/clip-vision-bert-cc12m-70k) pre-trained using text-only Masked LM on approximately 10 million image-text pairs taken from the [Conceptual 12M dataset](https://github.com/google-research-datasets/conceptual-12m) translated using [MBart](https://huggingface.co/transformers/model_doc/mbart.html). The translations are performed in the following four languages: English, French, German and Spanish, giving 2.5M examples in each language.
 
 The model can be used for mask-filling as shown in this demo. The caption can be present or written in any of the following: English, French, German and Spanish.
 
sections/mlm_usage.md CHANGED
@@ -1,4 +1,4 @@
-- This demo loads the `FlaxCLIPVisionBertForMaskedLM` present in the `model` directory of this repository. The checkpoint is loaded from [`flax-community/clip-vision-bert-cc12m-60k`](https://huggingface.co/flax-community/clip-vision-bert-cc12m-60k) which is pre-trained checkpoint with 60k steps.
+- This demo loads the `FlaxCLIPVisionBertForMaskedLM` present in the `model` directory of this repository. The checkpoint is loaded from [`flax-community/clip-vision-bert-cc12m-70k`](https://huggingface.co/flax-community/clip-vision-bert-cc12m-70k) which is pre-trained checkpoint with 70k steps.
 
 - 100 random validation set examples are present in the `cc12m_data/vqa_val.tsv` with respective images in the `cc12m_data/images_data` directory.
 
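As a sketch of what the updated usage section describes, the demo would load the repo-local `FlaxCLIPVisionBertForMaskedLM` class and pull weights from the new 70k-step checkpoint. The helper function below is hypothetical, and `from_pretrained` is assumed to follow the standard Hugging Face Flax model interface:

```python
# Sketch only: FlaxCLIPVisionBertForMaskedLM is a custom class shipped in this
# repository's `model` directory, not part of the transformers package itself.
CHECKPOINT = "flax-community/clip-vision-bert-cc12m-70k"  # 70k-step checkpoint from this commit

def load_mlm_model(checkpoint: str = CHECKPOINT):
    """Load the CLIP-Vision-BERT masked-LM model (hypothetical helper).

    Requires this repository on the import path; downloading weights is
    deferred until the function is actually called.
    """
    from model import FlaxCLIPVisionBertForMaskedLM  # repo-local import (assumption)
    return FlaxCLIPVisionBertForMaskedLM.from_pretrained(checkpoint)
```

Pointing `CHECKPOINT` at the `-70k` repo instead of `-60k` is the only change this commit requires in calling code; the class name and loading interface stay the same.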