NJUyued committed
Commit 1e6d0f5
1 Parent(s): 6e6ed4b

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -36,7 +36,7 @@ This repo releases data introduced in our paper
 Quick links: [[Repo](https://github.com/alipay/PC2-NoiseofWeb) | [arXiv (coming soon)]() | [Published paper (coming soon)]() | [Poster (coming soon)]() | [Zhihu (coming soon)]() | [Dataset download](https://huggingface.co/datasets/NJUyued/NoW/resolve/main/NoW.zip?download=true)]
 
 ## Data Collection
-We develop a new dataset named **Noise of Web (NoW)** for NCL. It contains **100K** cross-modal pairs consisting of **website images** and **multilingual website meta-descriptions** (**98,000 pairs for training, 1,000 for validation, and 1,000 for testing**). NoW has two main characteristics: *it is built without human annotations, and its noisy pairs are captured naturally*. The source images of NoW are obtained by taking screenshots of web pages rendered on a mobile user interface (MUI) at 720 $\times$ 1280 resolution, and the captions are parsed from the meta-description field of the HTML source code. In [NCR](https://github.com/XLearning-SCU/2021-NeurIPS-NCR) (the predecessor of NCL), every image in each dataset was preprocessed with the Faster R-CNN detector provided by the [Bottom-up Attention Model](https://github.com/peteanderson80/bottom-up-attention) to generate 36 region proposals, each encoded as a 2048-dimensional feature. Thus, following NCR, we release the extracted features instead of raw images for fair comparison. However, we cannot simply use detectors like Faster R-CNN to extract image features, since they are trained on real-world animals and objects from MS-COCO. To tackle this, we adopt [APT](https://openaccess.thecvf.com/content/CVPR2023/papers/Gu_Mobile_User_Interface_Element_Detection_via_Adaptively_Prompt_Tuning_CVPR_2023_paper.pdf) as the detection model, since it is trained on MUI data, and capture 768-dimensional features for the top 36 objects in each image. Because the data collection process is automated and involves no human curation, the noise in NoW is authentic and intrinsic. **The estimated noise ratio of this dataset is nearly 70%**.
+We develop a new dataset named **Noise of Web (NoW)** for NCL. It contains **100K** cross-modal pairs consisting of **website images** and **multilingual website meta-descriptions** (**98,000 pairs for training, 1,000 for validation, and 1,000 for testing**). NoW has two main characteristics: *it is built without human annotations, and its noisy pairs are captured naturally*. The source images of NoW are obtained by taking screenshots of web pages rendered on a mobile user interface (MUI) at 720 X 1280 resolution, and the captions are parsed from the meta-description field of the HTML source code. In [NCR](https://github.com/XLearning-SCU/2021-NeurIPS-NCR) (the predecessor of NCL), every image in each dataset was preprocessed with the Faster R-CNN detector provided by the [Bottom-up Attention Model](https://github.com/peteanderson80/bottom-up-attention) to generate 36 region proposals, each encoded as a 2048-dimensional feature. Thus, following NCR, we release the extracted features instead of raw images for fair comparison. However, we cannot simply use detectors like Faster R-CNN to extract image features, since they are trained on real-world animals and objects from MS-COCO. To tackle this, we adopt [APT](https://openaccess.thecvf.com/content/CVPR2023/papers/Gu_Mobile_User_Interface_Element_Detection_via_Adaptively_Prompt_Tuning_CVPR_2023_paper.pdf) as the detection model, since it is trained on MUI data, and capture 768-dimensional features for the top 36 objects in each image. Because the data collection process is automated and involves no human curation, the noise in NoW is authentic and intrinsic. **The estimated noise ratio of this dataset is nearly 70%**.
 
 <div align=center>
 
@@ -100,4 +100,4 @@ images = np.load(os.path.join(data_path, "%s_ims.npy" % data_split))
 
     return captions_token, images
 ```
-Additionally, you can search for code snippets containing the string `now100k_precomp` in `co_train.py`, `data.py`, `evaluation.py`, and `run.py` in this repo, and refer to them when processing the NoW dataset for use in your own code.
+Additionally, you can search for code snippets containing the string `now100k_precomp` in `co_train.py`, `data.py`, `evaluation.py`, and `run.py` in [PC2's repo](https://github.com/alipay/PC2-NoiseofWeb), and refer to them when processing the NoW dataset for use in your own code.
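A note on the released format: each `%s_ims.npy` file referenced in the second hunk stores the precomputed APT region features described in the first hunk. Below is a minimal inspection sketch; the unzipped directory name `now100k_precomp` and the split name `train` are assumptions, not confirmed by this commit.

```python
import os

import numpy as np

# Minimal sketch: inspect one split of NoW's precomputed image features.
# The directory name "now100k_precomp" and split name "train" are assumed;
# only the "%s_ims.npy" naming is taken from the README snippet above.
data_path = "now100k_precomp"
data_split = "train"

images = np.load(os.path.join(data_path, "%s_ims.npy" % data_split))

# Per the data collection description: top 36 detected objects per image,
# each encoded as a 768-dimensional APT feature.
print(images.shape)  # expected: (98000, 36, 768) for the training split
```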
 
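For wiring NoW into your own training loop, a thin dataset wrapper over those features might look like the hypothetical sketch below; the class name and structure are illustrative only and do not reproduce PC2's actual `data.py` API.

```python
import os

import numpy as np
import torch
from torch.utils.data import Dataset

# Hypothetical wrapper in the spirit of PC2's `now100k_precomp` handling;
# names and structure here are illustrative, not PC2's actual API.
class PrecompRegionFeatures(Dataset):
    def __init__(self, data_path, data_split):
        # (num_images, 36, 768) APT region features released with NoW.
        self.images = np.load(os.path.join(data_path, "%s_ims.npy" % data_split))

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # Captions are omitted; pair each item with your own tokenized
        # caption (cf. `captions_token` in the README snippet above).
        return torch.from_numpy(self.images[idx]).float()
```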