NJUyued committed
Commit 945c19d
Parent: ca17441

Update README.md

Files changed (1)
README.md +32 -4
README.md CHANGED
@@ -1,6 +1,28 @@
- ---
- license: cc-by-nc-4.0
- ---

# PC2-NoiseofWeb
 
@@ -12,7 +34,13 @@ This repo releases data introduced in our paper
  Quick links: [[arXiv (coming soon)]() | [Published paper (coming soon)]() | [Poster (coming soon)]() | [Zhihu (coming soon)]() | [Code download]() | [Dataset download](https://drive.google.com/file/d/1MsR9GmRDUj4NoeL4xL8TXpes51JnpsrZ/view?usp=drive_link)]
 
  ## Data Collection
- We develop a new dataset named **Noise of Web (NoW)** for NCL. It contains **100K website image-meta description pairs** (**98,000 pairs for training, 1,000 for validation, and 1,000 for testing**), which are open-sourced and can be crawled by anyone. NoW has two main characteristics: *no human annotations, and naturally captured noisy pairs*. The source data of NoW is obtained by taking screenshots when accessing web pages on mobile devices (resolution: 720 $\times$ 1280) and parsing the meta descriptions in the HTML source code. In [NCR](https://github.com/XLearning-SCU/2021-NeurIPS-NCR) (the predecessor of NCL), each image in all datasets was preprocessed with the Faster R-CNN detector provided by the [Bottom-up Attention Model](https://github.com/peteanderson80/bottom-up-attention) to generate 36 region proposals, and each proposal was encoded as a 2048-dimensional feature. Thus, following NCR, we release the features instead of raw images for fair comparison. However, we cannot simply use detectors such as Faster R-CNN to extract image features, since they are trained on real-world animals and objects from MS-COCO. To tackle this, we adapt [APT](https://openaccess.thecvf.com/content/CVPR2023/papers/Gu_Mobile_User_Interface_Element_Detection_via_Adaptively_Prompt_Tuning_CVPR_2023_paper.pdf) as the detection model, since it is trained on mobile user interface data. Similar to existing datasets, we capture the top 36 objects with their features for each image; that is, each image is encoded as a 36 $\times$ 768 matrix. We do not artificially control the noise ratio, as all data is obtained automatically and randomly from the web. **The estimated noise ratio of this dataset is nearly 70%**. Due to the automated, non-human-curated collection process, the noise in NoW is highly authentic and intrinsic.
 
  ## Data Structure
 
 
+ ---
+ license: cc-by-nc-4.0
+ task_categories:
+ - text-to-image
+ - image-to-text
+ - text-retrieval
+ modalities:
+ - image
+ - text
+ language:
+ - zh
+ - en
+ - ja
+ - ru
+ tags:
+ - realistic
+ - industry
+ - mobile user interface
+ - image-text matching
+ - image-text retrieval
+ - noisy correspondence learning
+ - NCL benchmark
+ size_categories:
+ - 100K<n<1M
+ ---
 
  # PC2-NoiseofWeb
 
 
  Quick links: [[arXiv (coming soon)]() | [Published paper (coming soon)]() | [Poster (coming soon)]() | [Zhihu (coming soon)]() | [Code download]() | [Dataset download](https://drive.google.com/file/d/1MsR9GmRDUj4NoeL4xL8TXpes51JnpsrZ/view?usp=drive_link)]
 
  ## Data Collection
+ We develop a new dataset named **Noise of Web (NoW)** for NCL. It contains **100K** cross-modal pairs consisting of **website images** and **multilingual website meta-descriptions** (**98,000 pairs for training, 1,000 for validation, and 1,000 for testing**). NoW has two main characteristics: *no human annotations, and naturally captured noisy pairs*. The source images of NoW are obtained by taking screenshots when accessing web pages on a mobile user interface (MUI) at 720 $\times$ 1280 resolution, and the captions are parsed from the meta-description field in the HTML source code. In [NCR](https://github.com/XLearning-SCU/2021-NeurIPS-NCR) (the predecessor of NCL), each image in all datasets was preprocessed with the Faster R-CNN detector provided by the [Bottom-up Attention Model](https://github.com/peteanderson80/bottom-up-attention) to generate 36 region proposals, and each proposal was encoded as a 2048-dimensional feature. Thus, following NCR, we release the features instead of raw images for fair comparison. However, we cannot simply use detectors such as Faster R-CNN to extract image features, since they are trained on real-world animals and objects from MS-COCO. To tackle this, we adapt [APT](https://openaccess.thecvf.com/content/CVPR2023/papers/Gu_Mobile_User_Interface_Element_Detection_via_Adaptively_Prompt_Tuning_CVPR_2023_paper.pdf) as the detection model, since it is trained on MUI data. We then capture the 768-dimensional features of the top 36 detected objects for each image, i.e., each image is encoded as a 36 $\times$ 768 matrix. Due to the automated, non-human-curated collection process, the noise in NoW is highly authentic and intrinsic. **The estimated noise ratio of this dataset is nearly 70%**.
+
+ <div align=center>
+
+ <img width="750px" src="/figures/now-1.jpg">
+
+ </div>
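As a rough illustration of the caption-collection step described above, the sketch below pulls a page's meta description from its HTML. It assumes standard `<meta name="description">` (or `og:description`) tags and uses `requests` and `BeautifulSoup` for brevity; it is illustrative only, not the exact crawler used to build NoW.

```python
# Illustrative only: fetch a page and read its meta description, the field
# that provides the text side of each NoW image-text pair.
import requests
from bs4 import BeautifulSoup


def get_meta_description(url: str):
    """Return the page's meta description, or None if the tag is missing."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("meta", attrs={"name": "description"}) or soup.find(
        "meta", attrs={"property": "og:description"}
    )
    if tag is not None and tag.has_attr("content"):
        return tag["content"].strip()
    return None


if __name__ == "__main__":
    # Hypothetical usage; any public URL that declares a meta description works.
    print(get_meta_description("https://example.com"))
```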
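Likewise, since each image is released as a 36 $\times$ 768 matrix of APT object features rather than raw pixels, consuming a split only involves an array of shape (N, 36, 768) plus the index-aligned captions. The file names below (`train_ims.npy`, `train_caps.txt`) are assumptions for illustration; the authoritative layout is documented in the Data Structure section.

```python
# Illustrative only: file names are assumptions, not the released layout
# (see the Data Structure section). Each image is a 36 x 768 matrix of APT
# object features; captions are index-aligned meta descriptions.
import numpy as np


def load_split(prefix="train"):
    images = np.load(f"{prefix}_ims.npy")  # expected shape: (N, 36, 768)
    with open(f"{prefix}_caps.txt", encoding="utf-8") as f:
        captions = [line.rstrip("\n") for line in f]
    assert images.ndim == 3 and images.shape[1:] == (36, 768)
    assert len(captions) == images.shape[0]  # one caption per image
    return images, captions


# Split sizes reported above: 98,000 train / 1,000 val / 1,000 test pairs.
```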
 
  ## Data Structure