---
task_categories:
- image-feature-extraction
---

# Google Image Malaysian Vehicle Dedup

Original dataset https://huggingface.co/datasets/malaysia-ai/crawl-google-image-malaysian-vehicle

Source code at https://github.com/mesolitica/malaysian-dataset/tree/master/vlm/dedup-malaysian-vehicle

## Dedup 70% similar

[dedup-0.7.jsonl](dedup-0.7.jsonl), 97,598 images in total after deduplication. Each line is one record like the following:

```
{'filename': 'train-00075-of-00165-c0ebcc169b1f62d2.parquet',
 'keyword': '2021 Honda City 1.5 E',
 'no': 2,
 'selected_indices': [696, 702, 705, 707, 712, 716, 720, 723, 727, 732, 742,
  745, 775, 779, 780, 787, 797, 817, 844, 876, 894, 898, 905, 917, 962, 965,
  966, 988, 993, 995, 1000, 1009, 1012, 1015, 1016, 1029, 1044, 1049, 1054,
  1077, 1086, 1096, 1131, 1174, 1185, 1188, 1198, 1208, 1216, 1217, 1219,
  1223, 1229, 1237, 1247, 1253, 1274, 1276, 1286, 1305, 1314, 1347, 1348,
  1353, 1355, 1401, 1412]}
```

- `filename` is the parquet file from the original repository.
- `selected_indices` are the positional row indices, within the dataframe loaded from that parquet file, of the images kept after deduplication.
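A minimal sketch of applying one record to recover the kept rows. The toy dataframe below stands in for a downloaded parquet file from the original repository, and the record is shortened for illustration; the key point is that `selected_indices` are positional, so `.iloc` is the right accessor:

```python
import pandas as pd

# Toy stand-in for one parquet file from the original repository.
df = pd.DataFrame({'image': ['a.jpg', 'b.jpg', 'c.jpg', 'd.jpg', 'e.jpg']})

# One (shortened, hypothetical) record from dedup-0.7.jsonl.
record = {'filename': 'train-00075-of-00165-c0ebcc169b1f62d2.parquet',
          'selected_indices': [0, 2, 4]}

# Positional row selection: keep only the deduplicated images.
kept = df.iloc[record['selected_indices']]
```

In practice you would replace the toy dataframe with `pd.read_parquet(record['filename'])` on a local copy of the parquet file.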

## Embedding

We compute embeddings using https://huggingface.co/google/siglip-base-patch16-512 and store them with MosaicML Streaming for faster indexing:

```python
from streaming import LocalDataset
from streaming.base.format.mds.encodings import Encoding, _encodings
import numpy as np

class Float32(Encoding):
    """Custom MDS encoding that stores float32 arrays as raw bytes."""

    def encode(self, obj) -> bytes:
        return obj.tobytes()

    def decode(self, data: bytes):
        return np.frombuffer(data, np.float32)

# Register the encoding so MDS shards can use the 'float32' column type.
_encodings['float32'] = Float32

# Load the MDS shards written to the local 'embedding' directory.
dataset = LocalDataset('embedding')
```
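The 70% threshold can be applied with a simple greedy pass over the embeddings: an image is kept only if its cosine similarity to every already-kept image is below 0.7. The actual dedup code lives in the linked source repository; the sketch below is only an illustration of the idea, using plain numpy:

```python
import numpy as np

def greedy_dedup(embeddings: np.ndarray, threshold: float = 0.7) -> list[int]:
    """Return indices of embeddings kept after greedy cosine-similarity dedup."""
    # L2-normalize rows so a dot product equals cosine similarity.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept: list[int] = []
    for i, vec in enumerate(unit):
        # Keep the vector only if no already-kept vector is too similar.
        if not kept or np.max(unit[kept] @ vec) < threshold:
            kept.append(i)
    return kept

# Three vectors: the first two are near-duplicates, the third is distinct.
emb = np.array([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]])
print(greedy_dedup(emb))  # the near-duplicate at index 1 is dropped
```

The kept indices from such a pass, grouped per parquet file, are what `selected_indices` in `dedup-0.7.jsonl` records.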