---
license: cc-by-4.0
---
# MapPool

This large corpus contains URLs, textual descriptions, and embeddings of 75 million potential maps. It has been derived from the [CommonPool dataset](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge) of [DataComp](https://www.datacomp.ai/). MapPool may help to train resource-intensive architectures, such as Transformers or diffusion models, in order to establish foundation models specialized in maps.

## How is the data structured?

| Key | Meaning |
|----------------------------------|----------|
| uid | Unique identifier |
| url | Link to the image |
| text | Textual description of the image |
| original_width / original_height | Dimensions of the image |
| sha256 | Hash of the image (to verify that the downloaded image is the same as the one at the URL) |
| l14_img | Embedding of the image (768 dimensions) |
| l14_txt | Embedding of the textual description (768 dimensions) |
| clip_l14_similarity_score | Similarity between the image and text (higher values indicate higher similarity) |
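
To illustrate how the embedding columns relate to the similarity score, the following sketch recomputes the image-text similarity from the stored embeddings. The column names come from the table above; that the stored score is a plain cosine similarity (rather than, e.g., a scaled variant) is an assumption worth verifying against the data.
```
import numpy as np
import pandas as pd

df = pd.read_parquet("<file_name>.parquet")
row = df.iloc[0]

# Cosine similarity between the image and text embeddings
img = np.asarray(row["l14_img"], dtype=np.float32)
txt = np.asarray(row["l14_txt"], dtype=np.float32)
cos = np.dot(img, txt) / (np.linalg.norm(img) * np.linalg.norm(txt))

print(cos, row["clip_l14_similarity_score"])
```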

## How can the parquet files be read?

You can read the parquet files with [pandas](https://pandas.pydata.org/):
```
import pandas as pd

df = pd.read_parquet("<file_name>.parquet")
```
Additionally, the pyarrow or fastparquet library is required.
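
Since the corpus is split across multiple parquet files, it may be convenient to load several shards at once; a minimal sketch, assuming the downloaded shards sit together in one folder:
```
from pathlib import Path

import pandas as pd

# Read every parquet shard in the folder and concatenate them
files = sorted(Path("<folder_path>").glob("*.parquet"))
df = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)
```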

## How can the images be downloaded?

You can download the images with [img2dataset](https://github.com/rom1504/img2dataset):
```
from img2dataset import download

download(
    thread_count=64,
    url_list="<file_name>.parquet",
    output_folder="<folder_path>",
    resize_mode="no",
    output_format="files",
    input_format="parquet",
    url_col="url",
    caption_col="text",
    verify_hash=("sha256", "sha256"),
)
```
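The `verify_hash` pair names the parquet column that stores the hash and the hash algorithm; images whose downloaded content no longer matches the recorded `sha256` are then discarded, which guards against URLs whose content changed since the dataset was compiled.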

Windows users need to wrap the call in a main guard:

```
import multiprocessing as mp
from img2dataset import download

def main():
    download(...)  # same arguments as above

if __name__ == "__main__":
    mp.freeze_support()
    main()
```

## How was this dataset created?

The dataset is a subset of the [CommonPool dataset (xlarge)](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge), which consists of 10,000,000,000 images. To filter the data, a classifier was trained on the embeddings of 1,860 maps and 1,860 non-maps and evaluated on 1,240 maps and 1,240 non-maps. This map dataset was collected by [Schnürer et al. 2021](https://doi.org/10.1080/00087041.2020.1738112). The embeddings were generated with a vision transformer (ViT-L/14) pre-trained on OpenAI data ([OpenCLIP](https://github.com/mlfoundations/open_clip)). Afterwards, different methods were tested to classify the embeddings:

| Model | Accuracy (%) |
|-----------------------------------------------|--------------|
| Xception / InceptionResNetV2 (= Baseline) | 96.7 |
| ViT-L/14 + L2 distance to averaged embeddings | 96.7 |
| ViT-L/14 + Logistic Regression | 97.9 |
| ViT-L/14 + MLP (3x256 units) | 98.2 |
| ViT-L/14 + SVM (polynomial, degree 3) | 98.5 |

Merely averaging the embeddings and classifying by the nearest L2 distance already reached the same accuracy as the two classification networks in [Schnürer et al. 2021](https://doi.org/10.1080/00087041.2020.1738112). Training models from [scikit-learn](https://scikit-learn.org/) to distinguish maps and non-maps increased the validation accuracy even further. The highest accuracy was achieved with a Support Vector Machine (SVM) with a polynomial kernel.
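
A minimal sketch of this classification step, assuming the precomputed ViT-L/14 embeddings and binary labels are available as arrays (the file names below are illustrative, not part of the published pipeline):
```
import numpy as np
from sklearn.svm import SVC

# Hypothetical files: (n_samples, 768) embeddings and 0/1 labels (1 = map)
X_train = np.load("train_embeddings.npy")
y_train = np.load("train_labels.npy")
X_val = np.load("val_embeddings.npy")
y_val = np.load("val_labels.npy")

# Polynomial-kernel SVM of degree 3, the best-performing model above
clf = SVC(kernel="poly", degree=3)
clf.fit(X_train, y_train)
print("Validation accuracy:", clf.score(X_val, y_val))
```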

Overall, downloading the CommonPool dataset, separating the non-maps, and uploading the maps took about 50 hours, using 10 CPUs and 120 GB of RAM on average, and caused incoming network traffic of 500 MB/s. The SVM is computationally the most demanding of these models; luckily, its inference speed could be improved by using the [Intel Extension for Scikit-learn](https://intel.github.io/scikit-learn-intelex). Classifying 500,000 embeddings took about 10 seconds.
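
The extension works by patching scikit-learn before the estimator is imported; a minimal sketch (the actual speed-up depends on the hardware):
```
from sklearnex import patch_sklearn
patch_sklearn()  # swap in the accelerated implementations

from sklearn.svm import SVC  # now resolves to the patched estimator

clf = SVC(kernel="poly", degree=3)
```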

## What are the limitations?

A qualitative inspection of the detected maps in the CommonPool dataset looks promising; however, the actual accuracy is not known. The false negative rate in particular is hard to estimate due to the high number of non-maps among the CommonPool images. Mixtures of natural images and maps (e.g. a map printed on a bag, a map in a park) have not been examined further; ideally, those cases would also be classified as maps.

Textual embeddings have not been considered in the separation process so far. The training dataset covers quite a large variety of images; however, its textual descriptions may be too specific, since that dataset originates only from Pinterest. Also, simply filtering by the word 'map' may lead to false positives, as the word has many meanings. Nevertheless, the textual embedding space may be explored in the future and could help to refine the visual classifier.

It is planned to republish the training data and deploy the classification model.

## Citation

```
@inproceedings{Schnürer_2024,
  title={MapPool - Diving deep to bubble up a huge dataset for MapAI},
  author={Schnürer, Raimund},
  year={2024}
}
```