---
title: img2art-search
app_file: app.py
sdk: gradio
sdk_version: 4.37.2
---
# Image-to-Art Search

*Find real artwork that looks like your images.*
This project fine-tunes a Vision Transformer (ViT) model, initialized from the pre-trained `google/vit-base-patch32-224-in21k` weights, on image pairs in the style of ArtButMakeItSports to perform image-to-art search across 81k artworks made available by WikiArt.
## Overview

This project leverages the Vision Transformer (ViT) architecture for image-to-art search. By fine-tuning the pre-trained ViT model on a custom dataset derived from the Instagram account ArtButMakeItSports, we aim to create a model that matches images (and not only sports photos) to corresponding artworks, so that any image can be searched against WikiArt.
## Installation

- Clone the repository:

```bash
git clone https://github.com/brunorosilva/img2art-search.git
cd img2art-search
```

- Install Poetry:

```bash
pip install poetry
```

- Install the dependencies:

```bash
poetry install
```
## How it works

### Dataset Preparation

- Download images from the ArtButMakeItSports Instagram account.
- Organize the images into directories for training and validation.
- Get a fine-tuned model.
- Create the gallery using WikiArt.
### Training

Fine-tune the ViT model:

```bash
make train
```
### Inference via Gradio

Perform image-to-art search using the fine-tuned model:

```bash
make viz
```
### Recreate the WikiArt gallery

```bash
make wikiart
```
### Create a new gallery

If you want to index new images to search, use:

```bash
poetry run python main.py gallery --gallery_path <your_path>
```
## Dataset

The dataset derives from 1k images from the Instagram account ArtButMakeItSports. Images are downloaded and split into training, validation, and test sets, and each image is paired with its corresponding artwork for training purposes. If you want this dataset, just ask me and state your intended usage.
WikiArt is indexed using the same process, except that there is no expected result: each artwork is mapped to itself, the model is used as a feature extractor, and the gallery embeddings are saved as a NumPy file (this will be migrated to chromadb in the future).
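Once the gallery embeddings are saved, searching reduces to a nearest-neighbor lookup. The following is a minimal sketch with NumPy, not the project's actual code; it assumes the embeddings are L2-normalized so the dot product equals cosine similarity, and the toy data is illustrative:

```python
import numpy as np

def top_k_matches(query_emb: np.ndarray, gallery: np.ndarray, k: int = 4) -> np.ndarray:
    """Return indices of the k gallery embeddings closest to the query.

    Assumes rows of `gallery` and `query_emb` are L2-normalized, so the
    dot product equals cosine similarity.
    """
    sims = gallery @ query_emb           # (n_gallery,) cosine similarities
    return np.argsort(sims)[::-1][:k]    # indices of the k highest scores

# Toy example: a gallery of 5 embeddings in a 3-d feature space.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 3))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# A query very close to gallery item 2, so item 2 should rank first.
query = gallery[2] + 0.01 * rng.normal(size=3)
query /= np.linalg.norm(query)

print(top_k_matches(query, gallery, k=4))
```

In the real pipeline the query embedding would come from the fine-tuned ViT feature extractor, and the gallery array from the saved NumPy file.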
## Training

The training script fine-tunes the ViT model on the prepared dataset. Key steps include:

- Loading the pre-trained `google/vit-base-patch32-224-in21k` weights.
- Preparing the dataset and data loaders.
- Fine-tuning the model using a custom training loop.
- Saving the model to the results folder.
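The shape of such a custom training loop can be sketched as below. This is a minimal PyTorch sketch, not the project's actual code: the tiny linear model stands in for the ViT backbone, and the data, loss, and hyperparameters are illustrative assumptions.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Placeholder stand-in for the ViT backbone; the real project loads the
# pre-trained google/vit-base-patch32-224-in21k weights instead.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 16))

# Toy (image, target-embedding) pairs standing in for the
# photo/artwork pairs in the real dataset.
images = torch.randn(32, 3, 8, 8)
targets = torch.randn(32, 16)
loader = DataLoader(TensorDataset(images, targets), batch_size=8)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

losses = []
for epoch in range(5):
    epoch_loss = 0.0
    for x, y in loader:
        optimizer.zero_grad()
        # Pull the image embedding toward its paired artwork embedding.
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    losses.append(epoch_loss)

print(losses[0], losses[-1])  # the loss should decrease over the epochs
```

After training, the real script would save the fine-tuned weights to the results folder (e.g. with `torch.save(model.state_dict(), ...)`).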
## Interface

The recommended way to get results is through the Gradio interface, started with `make viz`. This opens a local server where you can search with any image you want, or even use your webcam, and get the top 4 search results.
## Examples
Search for contextual similarity
Search for expression similarity (yep, that's me)
## Contributing

There are three topics I'd appreciate help with:

- Increasing the gallery by embedding new painting datasets. The current gallery has 81k artworks because I started from a ready-to-go dataset, but the complete WikiArt catalog alone has 250k+ artworks, so I want to grow this number to at least 300k in the near future;
- Porting the encoding and search to a vector database, preferably chromadb;
- Opening issues with suggestions for improvement; new ideas will be considered.
## License
The source code for the site is licensed under the MIT license, which you can find in the MIT-LICENSE.txt file.
All graphical assets are licensed under the Creative Commons Attribution 3.0 Unported License.