sraimund committed · verified
Commit 13b52e3 · 1 Parent(s): 28ecfed

Update README.md

Files changed (1): README.md (+28 −17)
README.md CHANGED
@@ -2,9 +2,10 @@
license: cc-by-4.0
---
# MapPool - Bubbling up an extremely large corpus of maps for AI
- (early access version)

- This repository contains URLs, textual descriptions, embeddings of 75 million potential maps. It has been derived from the [CommonPool dataset](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge) from [DataComp](https://www.datacomp.ai/). The MapPool dataset may help to train resource-intensive architectures like Transformers or Diffusion Models in order to establish vision and language foundation models specialized on maps.
+ <img src="map_bubbles.png" alt="many small air bubbles containing colorful maps arising with light rays under the ocean (AI-generated image)" width="256"/>
+
+ This repository contains URLs, textual descriptions, and embeddings of 75 million potential maps. It has been derived from [CommonPool](https://www.datacomp.ai/), a dataset consisting of 12 billion text-image pairs from the Internet. The images have been encoded by a vision transformer and classified into maps and non-maps by a support vector machine. This approach outperforms previous models and yields a validation accuracy of 98.5%. The MapPool dataset may help to train data-intensive architectures in order to establish vision and language foundation models specialized in maps. The analysis of the dataset and the exploration of the embedding space offer great potential for future work.

## How is the data structured?

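The added description names a two-step pipeline: encode each image with a vision transformer, then classify the embedding with a support vector machine. As a minimal sketch of the encoding step, assuming OpenCLIP's ViT-L/14 with OpenAI weights (the model referenced later in this diff; the image path is a placeholder):

```python
# Minimal sketch: embed one image with OpenCLIP's ViT-L/14 (OpenAI weights).
# "example_map.png" is a placeholder, not a file shipped with the dataset.
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-L-14", pretrained="openai"
)
model.eval()

image = preprocess(Image.open("example_map.png")).unsqueeze(0)  # 1x3x224x224
with torch.no_grad():
    embedding = model.encode_image(image)                       # shape (1, 768)
embedding /= embedding.norm(dim=-1, keepdim=True)               # L2-normalize
print(embedding.shape)
```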
 
@@ -98,31 +99,41 @@ if __name__ == "__main__":
    multiprocessing.freeze_support()
    main()
```
- Note that image links may be broken since the release of the original CommonPool dataset. It is estimated that 2/3 of the images are still available, that is, 50 million potential map images. 5TB of storage are needed when assuming an average image size of 100kB. With a large bandwidth, it may be possible to download the images within 24h.
+ As the Internet is constantly changing, about two thirds of the original images (= 48 million) are still downloadable. 5.77TB of space are required to store them in their original formats, while 100GB suffice when creating 128x128px thumbnails in the WebP format at 60% quality. Downloading the images took 40 hours with 24 CPUs, 30GB RAM, and 40MB/s of network traffic on average.
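A hedged sketch of that download-and-thumbnail step; the use of requests/Pillow and all names here are illustrative assumptions, not the repository's actual tooling:

```python
# Download one image and store a 128x128 WebP thumbnail at 60% quality,
# skipping dead links (roughly one third of the URLs are expected to fail).
import io
import requests
from PIL import Image

def fetch_thumbnail(url: str, out_path: str, size: int = 128, quality: int = 60) -> bool:
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        image = Image.open(io.BytesIO(response.content)).convert("RGB")
        image.thumbnail((size, size))            # fit within 128x128, keep aspect ratio
        image.save(out_path, "WEBP", quality=quality)
        return True
    except (requests.RequestException, OSError):
        return False

if fetch_thumbnail("https://example.com/map.png", "map.webp"):
    print("saved thumbnail")
```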

## How was this dataset created?

- The dataset is a subset of the [CommonPool dataset (xlarge)](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge), which consists of 10 billion images. To filter the data, a classifier was established based on the embeddings of 1,860 maps and 1,860 non-maps and evaluated on 1,240 maps and 1,240 non-maps. This map dataset has been collected by [Schnürer et al. 2021](https://doi.org/10.1080/00087041.2020.1738112). The embeddings were generated by a pre-trained vision transformer on OpenAI data ([OpenCLIP](https://github.com/mlfoundations/open_clip)). Afterwards, different methods were tested to classify the embeddings:
+ MapPool has been created by classifying the image embeddings included in [CommonPool](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge), which have been generated by two pre-trained vision transformers (ViTs). The [L/14 model](https://github.com/mlfoundations/open_clip), which has more parameters and outputs 768-dimensional embeddings, has been chosen as it achieved higher classification accuracies. In this work, different map classifiers (Table 1) from [scikit-learn](https://scikit-learn.org/) with the [Intel Extension](https://intel.github.io/scikit-learn-intelex) have been trained on the embeddings of 1,860 maps and 1,860 non-maps and evaluated on 1,240 maps and 1,240 non-maps ([Schnürer et al. 2021](https://doi.org/10.1080/00087041.2020.1738112)). Only simple classification models have been considered, both for efficiency and because meaningful embeddings had already been created by the vision transformer.

- | Model | Accuracy
- |-----------------------------------------------|----------
- | Xception / InceptionResNetV2 (= Baseline) | 96.7
- | ViT-L/14 + L2 distance to averaged embeddings | 96.7
- | ViT-L/14 + Logistic Regression | 97.9
- | ViT-L/14 + MLP (3x256 units) | 98.2
- | ViT-L/14 + SVM (polynomial, degree 3) | 98.5
+ | Model | Accuracy (%)
+ |----------------------------------------------------------|--------------
+ | Xception / InceptionResNetV2 (= Baseline) | 96.7
+ | ViT-L/14 + L2 distance to averaged embeddings | 96.7
+ | ViT-L/14 + Logistic Regression | 97.9
+ | ViT-L/14 + Multilayer Perceptron (3x256 units) | 98.2
+ | ViT-L/14 + Support Vector Machine (polynomial, degree 3) | 98.5
+
+ *Table 1: Validation accuracies of the examined map classifiers.*

- Merely averaging the embeddings and calculating the nearest distance already reached the same accuracy as the two classification networks in [Schnürer et al. 2021](https://doi.org/10.1080/00087041.2020.1738112). Training models from [scikit](https://scikit-learn.org/) to distinguish maps and non-maps increased the validation accuracy even further. The highest accuracy has been achieved with a Support Vector Machine (SVM) with a polynomial kernel.
- Overall, downloading the CommonPool dataset, separating non-maps and uploading the maps took about 50h for 10 CPUs and 120GB RAM on average as well as caused incoming network traffic of 500MB/s. SVMs are computationally the most demanding model among the examined ones; luckily, the inference speed could be improved by using an [Intel Extension](https://intel.github.io/scikit-learn-intelex). Classifying 500,000 embeddings took about 10 secs.
+ With the Support Vector Machine, 500,000 image embeddings could be classified within 10 seconds. Downloading and classifying the whole dataset and uploading the results took about 50 hours with 10 CPUs, 120GB RAM, and 500MB/s of network traffic on average.
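A minimal sketch of the classifier setup summarized in Table 1: a degree-3 polynomial SVM on 768-dimensional embeddings, with the Intel Extension patched in for faster inference. The random stand-in data and all variable names are assumptions for illustration, not the original training pipeline:

```python
# Train a degree-3 polynomial SVM on 768-dim embeddings and time batch inference,
# using the Intel Extension for scikit-learn mentioned above.
import time
import numpy as np
from sklearnex import patch_sklearn

patch_sklearn()                      # swap in Intel-optimized implementations
from sklearn.svm import SVC          # import after patching

rng = np.random.default_rng(0)
X_train = rng.normal(size=(3720, 768)).astype(np.float32)  # stand-in for 1,860 maps + 1,860 non-maps
y_train = np.repeat([0, 1], 1860)

clf = SVC(kernel="poly", degree=3)
clf.fit(X_train, y_train)

X_batch = rng.normal(size=(500_000, 768)).astype(np.float32)
start = time.perf_counter()
labels = clf.predict(X_batch)        # ~10 s per 500,000 embeddings reported above
print(f"{labels.size} embeddings classified in {time.perf_counter() - start:.1f}s")
```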

## What are the limitations?

- A qualitative inspection of the detected maps in the CommonPool dataset looks promising, however, it is not known what the actual accuracy is. Especially the false negative rate is hard to estimate due to the high number of non-maps among the CommonPool images. Mixtures between natural images and maps (e.g. a map printed on a bag, a map in a park) have not been further examined. Also, duplicates or very similar map images have not been detected.
+ A qualitative inspection of the detected maps looks promising; however, the actual accuracy is not known. The false negative rate in particular is hard to estimate because of the high number of non-maps among the CommonPool images. Mixtures of natural images and maps (e.g., a map printed on a bag, a map in a park) have not been examined further.
+
- Textual embeddings have not been considered in the separation process so far. The training dataset has a quite large variety of images, however, the textual descriptions may be too specific since the dataset originates only from Pinterest. Also, simply filtering by the word 'map' may lead to false positives as it has many meanings. Nevertheless, the textual embedding space may be explored in the future and possibly help to refine the visual classifier.
+ Textual embeddings have not been considered in the separation process so far. The training dataset for the map classifier has quite a large variety, including pictorial maps and 3D maps as well as sketches and paintings. However, the textual descriptions may be too specific since the dataset originates from only one source (i.e., Pinterest).
+
- It is planned to republish the training data and deploy the classification model.
+ ## What are future research directions?
+
+ A detailed analysis of the content and metadata of the maps in MapPool, potentially resulting in a search engine, is the subject of future work. Additionally, the visual and textual embedding space may be explored to refine the map classifier and to detect duplicates among the images (see the sketch below). It can also be examined whether training with map-only images leads to better results for map-specific tasks (for instance, generating maps from textual prompts) than training with a mixture of maps and other images.
+
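One of the directions above, duplicate detection in the embedding space, could start as simply as thresholding pairwise cosine similarity. A hedged sketch (brute-force, so only viable for small batches; the 0.95 threshold is an assumption):

```python
# Flag near-duplicate images by cosine similarity of their embeddings.
import numpy as np

def near_duplicates(embeddings: np.ndarray, threshold: float = 0.95):
    """Return index pairs whose L2-normalized embeddings are nearly parallel."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                      # pairwise cosine similarities
    ii, jj = np.where(np.triu(sims, k=1) > threshold)
    return list(zip(ii.tolist(), jj.tolist()))

vectors = np.random.default_rng(0).normal(size=(1000, 768))
print(near_duplicates(vectors)[:5])  # likely empty for random vectors
```

At MapPool's scale, an approximate nearest-neighbor index would have to replace the quadratic comparison.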
+ Feel free to contact [me](https://people.epfl.ch/raimund.schnurer) if you would like to collaborate!
+
+ ### License
+
+ The dataset is published under the Creative Commons Attribution 4.0 license. Please respect the copyright of the original images when making use of MapPool.

+ ### Disclaimer
+
+ The owner is not responsible for the content of linked external websites and is not liable for any damage that the content of these websites may cause.

### Citation