---
task_categories:
- image-to-text
- text-to-image
pretty_name: Data Filtering Networks, 200m, datacomp large
size_categories:
- 100M<n<1B
---
# Data Filtering Networks, 200m

This dataset was released with the Data Filtering Networks paper. It is a subset of DataComp large.

The parquet files in this repository are that subset. The following script was used to filter the original DataComp large parquet files against the index of subset uids from apf1/datafilteringnetworks_2b.

```python
import os
from os import path
from glob import glob
from multiprocessing import Pool

import numpy as np
import pyarrow.parquet as pq

parquet_files = list(glob("../*.parquet"))
out_path = "../resampled/"
os.makedirs(out_path, exist_ok=True)
subset_file = "../indices/datacomp_large_dfn_200m_inds.npy"

# Each 128-bit uid is stored as a pair of uint64 fields (16 bytes total).
u16 = np.dtype("u8,u8")


def load_subset():
    # Memory-map the sorted array of subset uids so each worker
    # avoids loading the whole index into RAM.
    return np.load(subset_file, mmap_mode="r")


def process_parquet(parquet_file):
    print("filtering", parquet_file)
    subset = load_subset()
    table = pq.read_table(parquet_file)
    mask = []
    for uid in table["uid"]:
        # Convert the hex uid into a (high 64 bits, low 64 bits) key
        # matching the dtype of the subset array.
        uid = str(uid)
        key_u16 = np.array([divmod(int(uid, 16), 2**64)], u16)[0]

        # Binary-search the sorted subset; the uid is a member
        # iff it occurs exactly once.
        a = np.searchsorted(subset, key_u16, "left")
        b = np.searchsorted(subset, key_u16, "right")
        count = b - a
        assert count == 1 or count == 0
        mask.append(count == 1)

    table = table.filter(mask)

    out_filename = path.join(out_path, path.basename(parquet_file))
    pq.write_table(table, out_filename)
    print("wrote", out_filename)


if __name__ == "__main__":
    # The guard is needed for multiprocessing on spawn-based platforms.
    with Pool(4) as pool:
        pool.map(process_parquet, parquet_files)

    print("done.")
```
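For clarity, here is a minimal, self-contained sketch of the uid lookup the script performs: each 128-bit hex uid is split into a (high, low) pair of uint64s and binary-searched in the sorted subset array. The helper names and the uids below are made up for illustration and are not part of the dataset tooling.

```python
import numpy as np

u16 = np.dtype("u8,u8")  # two uint64 fields = one 128-bit uid


def uid_to_key(uid_hex):
    # Split a 128-bit hex uid into its high and low 64-bit halves.
    return np.array([divmod(int(uid_hex, 16), 2**64)], u16)[0]


def in_subset(subset, uid_hex):
    # Membership test via binary search on the sorted structured array.
    key = uid_to_key(uid_hex)
    left = np.searchsorted(subset, key, "left")
    right = np.searchsorted(subset, key, "right")
    return bool(right - left)


# Toy subset of two made-up uids (not real dataset entries):
uids = ["ff" * 16, "00" * 16]
subset = np.sort(np.array([divmod(int(u, 16), 2**64) for u in uids], u16))

print(in_subset(subset, "ff" * 16))  # True
print(in_subset(subset, "ab" * 16))  # False
```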