---
task_categories:
  - image-feature-extraction
---

# Google Image Malaysian Vehicle Dedup

Original dataset: https://huggingface.co/datasets/malaysia-ai/crawl-google-image-malaysian-vehicle

Source code at https://github.com/mesolitica/malaysian-dataset/tree/master/vlm/dedup-malaysian-vehicle

## Dedup 70% similar

`dedup-0.7.jsonl` contains 97598 deduplicated images in total. An example row:

```python
{'filename': 'train-00075-of-00165-c0ebcc169b1f62d2.parquet',
 'keyword': '2021 Honda City 1.5 E',
 'no': 2,
 'selected_indices': [696, 702, 705, 707, 712, 716, 720, 723, 727, 732, 742,
  745, 775, 779, 780, 787, 797, 817, 844, 876, 894, 898, 905, 917, 962, 965,
  966, 988, 993, 995, 1000, 1009, 1012, 1015, 1016, 1029, 1044, 1049, 1054,
  1077, 1086, 1096, 1131, 1174, 1185, 1188, 1198, 1208, 1216, 1217, 1219,
  1223, 1229, 1237, 1247, 1253, 1274, 1276, 1286, 1305, 1314, 1347, 1348,
  1353, 1355, 1401, 1412]}
```
- `filename` is the parquet file from the original repository.
- `selected_indices` are the row indices to keep in the dataframe loaded from that parquet file.
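
Below is a minimal sketch of applying one dedup row to the original dataset; the use of `hf_hub_download` and `pandas` here is an assumption for illustration, not part of this repository.

```python
import json
import pandas as pd
from huggingface_hub import hf_hub_download

# read one dedup record from dedup-0.7.jsonl
with open('dedup-0.7.jsonl') as f:
    row = json.loads(f.readline())

# fetch the matching parquet shard from the original dataset
# (assumes the shard sits at the repository root)
path = hf_hub_download(
    repo_id='malaysia-ai/crawl-google-image-malaysian-vehicle',
    filename=row['filename'],
    repo_type='dataset',
)

# keep only the deduplicated rows
df = pd.read_parquet(path)
deduped = df.iloc[row['selected_indices']]
```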

## Embedding

We convert the images to embeddings using https://huggingface.co/google/siglip-base-patch16-512 and use MosaicML Streaming for faster indexing.
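
A minimal sketch of embedding a single image with SigLIP is shown below; the image path is a placeholder and the actual extraction script lives in the source code linked above.

```python
from transformers import AutoModel, AutoProcessor
from PIL import Image
import torch

processor = AutoProcessor.from_pretrained('google/siglip-base-patch16-512')
model = AutoModel.from_pretrained('google/siglip-base-patch16-512')

# placeholder image path, for illustration only
image = Image.open('example.jpg')
inputs = processor(images=image, return_tensors='pt')
with torch.no_grad():
    embedding = model.get_image_features(**inputs)[0]
embedding = embedding.numpy().astype('float32')
```

The embeddings are then written to and read back with MosaicML Streaming: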

```python
from streaming import MDSWriter
from streaming.base.format.mds.encodings import Encoding, _encodings
from streaming import LocalDataset
import streaming
import numpy as np
from tqdm import tqdm

class Float32(Encoding):
    """Store float32 numpy arrays as raw bytes in MDS shards."""

    def encode(self, obj) -> bytes:
        return obj.tobytes()

    def decode(self, data: bytes):
        return np.frombuffer(data, np.float32)

# register the custom encoding so MDS columns of type 'float32' can be used
_encodings['float32'] = Float32

# local MDS dataset containing the embeddings
dataset = LocalDataset('embedding')
```
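
With the embeddings loaded, a rough sketch of thresholding at 70% similarity could look like the following; the `'embedding'` field name and the greedy keep-or-drop loop are assumptions for illustration, the actual procedure is in the source code linked above.

```python
import numpy as np

# read the embeddings back from the local MDS dataset;
# the 'embedding' field name is an assumption about the writer schema
vectors = np.stack([dataset[i]['embedding'] for i in range(len(dataset))])
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

# greedy sketch: keep an image only if its cosine similarity to every
# image already kept is below 0.7
selected_indices = []
for i, v in enumerate(vectors):
    if not selected_indices or (vectors[selected_indices] @ v).max() < 0.7:
        selected_indices.append(i)
```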