---
language:
  - en
pretty_name: clip-ViT-B-32 embeddings of the Wolt food images
task_categories:
  - feature-extraction
size_categories:
  - 1M<n<10M
---

# wolt-food-clip-ViT-B-32-embeddings

Qdrant's Food Discovery demo relies on a dataset of food images from the Wolt app. Each point in the collection represents a dish with a single image. The image is represented as a 512-dimensional vector of floats.
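The demo keeps these vectors in a Qdrant collection. Below is a minimal sketch of how such embeddings could be uploaded; the collection name, point id, and payload fields are illustrative and not part of the dataset:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

# Connect to a local Qdrant instance (adjust the URL for your deployment).
client = QdrantClient(url="http://localhost:6333")

# Create a collection sized for the 512-dimensional CLIP embeddings.
client.create_collection(
    collection_name="wolt_food",
    vectors_config=VectorParams(size=512, distance=Distance.COSINE),
)

# Upsert one point per dish: the image embedding plus a payload describing it.
client.upsert(
    collection_name="wolt_food",
    points=[
        PointStruct(
            id=1,
            vector=[0.0] * 512,  # placeholder; use a real CLIP embedding here
            payload={"image_name": "seelachs_ei_baguette.jpeg"},
        )
    ],
)
```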

## Generation process

The embeddings have been generated with the clip-ViT-B-32 model using the following code snippet:

```python
from PIL import Image
from sentence_transformers import SentenceTransformer

# Path to a single dish image from the dataset
image_path = "5dbfd216-5cce-11eb-8122-de94874ad1c8_ns_takeaway_seelachs_ei_baguette.jpeg"

# Load the CLIP model and encode the image into a 512-dimensional vector
model = SentenceTransformer("clip-ViT-B-32")
embedding = model.encode(Image.open(image_path))
```
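Since clip-ViT-B-32 embeds text and images into the same vector space, a text query can be encoded with the same model and scored against the image embeddings. A small sketch, where the query string is only an example:

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

# Image embedding, produced as in the snippet above.
image_path = "5dbfd216-5cce-11eb-8122-de94874ad1c8_ns_takeaway_seelachs_ei_baguette.jpeg"
image_embedding = model.encode(Image.open(image_path))

# Encode a free-text query into the same 512-dimensional space.
text_embedding = model.encode("fish baguette with egg")

# Cosine similarity between the query and the image embedding.
print(util.cos_sim(text_embedding, image_embedding))
```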