---
license: apache-2.0
---
<br>
<br>

# MLM-Filter-13b Model Card

## Model details

**Model type:**
MLM-Filter-13B is built on LLaVA, an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model based on the transformer architecture.

**Model date:**
MLM-Filter-13B was trained in Dec 2023.

**Paper or resources for more information:**
https://mlm-filter.github.io/

## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/victorwz/mlm-filter/issues

## Intended use
**Primary intended uses:**
MLM-Filter can be used as a drop-in replacement for CLIPScore in these tasks:

1. Score image-text data in large-scale pre-training datasets and filter high-quality subsets based on the scores (for training MLLMs or VLMs, consider jointly using the Image-Text Matching score and the Object Detail Fulfillment score; see the sketch after this list);

2. Evaluate image-text alignment for image-to-text or text-to-image generation models;

3. Any other application that needs to compute image-text alignment scores.
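
A minimal sketch of the filtering step in use case 1, assuming quality scores have already been generated for each pair (e.g., with the scripts in the MLM_Filter repository) and saved to a JSONL file; the field names `itm_score` and `odf_score`, the file names, and the thresholds are illustrative assumptions, not part of the released tooling.

```python
# Hypothetical post-scoring filter: keep image-text pairs whose MLM-Filter
# scores clear chosen cutoffs. Field names and thresholds are assumptions.
import json

ITM_THRESHOLD = 85  # assumed cutoff for the Image-Text Matching score (0-100)
ODF_THRESHOLD = 80  # assumed cutoff for the Object Detail Fulfillment score (0-100)

with open("scored_pairs.jsonl") as f:  # one JSON record per image-text pair
    records = [json.loads(line) for line in f]

# Apply both metrics jointly, as suggested above for MLLM/VLM pre-training data.
high_quality = [
    r for r in records
    if r["itm_score"] >= ITM_THRESHOLD and r["odf_score"] >= ODF_THRESHOLD
]

with open("filtered_pairs.jsonl", "w") as f:
    for r in high_quality:
        f.write(json.dumps(r) + "\n")
```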

## Training dataset
- 46k instructions sampled from the LLaVA-1.5 665k instruction-tuning data.
- 4k instructions on image-text data quality assessment tasks spanning 4 metrics.

## Usage Sample
Please follow the instructions at https://github.com/Victorwz/MLM_Filter.
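
For orientation only, here is a hedged sketch of how one might query the checkpoint for an image-text matching score via the Hugging Face `transformers` LLaVA integration. Whether this checkpoint loads directly through those classes, the Hub repo ID, and the exact scoring prompt are all assumptions; the scripts in the MLM_Filter repository are the authoritative usage path.

```python
# Illustrative only: assumes the checkpoint is compatible with the HF LLaVA
# classes; the repo ID and scoring prompt are assumptions, not the official recipe.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "weizhiwang/MLM-Filter-13b"  # assumed Hub ID; check the repo for the exact one
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("example.jpg")
caption = "A dog catching a frisbee in a park."
prompt = (
    "USER: <image>\n"
    f"Text Caption: {caption}\n"
    "Rate how well the caption matches the image on a scale of 0 to 100. "
    "Answer with the score only. ASSISTANT:"
)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)
output = model.generate(**inputs, max_new_tokens=10)
print(processor.decode(output[0], skip_special_tokens=True))
```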