---
license: cc-by-nc-4.0
---
# Molmo 7B Flux Dev Image Captioner
A simple Python and Gradio script that uses Molmo 7B for image captioning. The prompt is currently written to produce captions that work well for Flux Dev LoRA training, but you can adjust it to suit other models' captioning styles.
## Install

1. Clone the repo.
2. `cd` into the `models` directory and clone Molmo-7B-D-0924:

   ```
   git lfs install
   git clone https://huggingface.co/allenai/Molmo-7B-D-0924
   ```

3. Create a Python 3 venv, or use conda to create an environment, e.g.:

   ```
   conda create -n caption python=3.11
   ```

4. Activate your environment, e.g.:

   ```
   conda activate caption
   ```

5. Install the dependencies:

   ```
   pip3 install -r requirements.txt
   ```
Run the Gradio version:

```
python3 main.py
```
- create a zip file of images
- upload it
- process it
- click the button to download the caption zip file; the download link appears at the top of the page
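If you don't already have a zip of your images, a quick way to build one is a few lines of Python. This is a sketch, not part of the repo; the `images/` folder name and the extension list are assumptions, so adjust them to your setup:

```python
import zipfile
from pathlib import Path

# Extensions to include; adjust for your dataset (an assumption, not from the repo).
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

def zip_images(image_dir: str, out_path: str = "images.zip") -> list[str]:
    """Bundle every image in image_dir into a flat zip; return the filenames added."""
    added = []
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(Path(image_dir).iterdir()):
            if path.suffix.lower() in IMAGE_EXTS:
                zf.write(path, arcname=path.name)  # flat names, no directory prefix
                added.append(path.name)
    return added
```

For example, `zip_images("images")` produces an `images.zip` in the current directory, ready to upload.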
Run the command-line version:

```
python3 caption.py
```
- make sure your images are in the "images" directory
- captions will be placed in the "images" directory
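`caption.py` emits one text file per image. The convention most LoRA trainers expect is a sidecar `.txt` with the same stem as the image; a minimal sketch of that convention follows (whether `caption.py` names files exactly this way is an assumption):

```python
from pathlib import Path

def caption_path(image_path: str) -> Path:
    """Sidecar caption file for an image: same directory, same stem, .txt suffix."""
    return Path(image_path).with_suffix(".txt")

def write_caption(image_path: str, caption: str) -> Path:
    """Save a caption next to its image, as LoRA trainers typically expect."""
    out = caption_path(image_path)
    out.write_text(caption.strip() + "\n", encoding="utf-8")
    return out
```

So `images/photo01.jpg` gets its caption in `images/photo01.txt`, which training tools pick up automatically.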
## Notes
- The scripts are configured to load the model at bf16 precision, which trades a small amount of accuracy for much lower memory utilization. The model should fit on a single 24 GB GPU.
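Back-of-envelope arithmetic behind the 24 GB figure, assuming roughly 7 billion parameters (the exact count and the activation overhead will differ in practice):

```python
# Rough VRAM needed just for the weights of a ~7B-parameter model.
# Parameter count and overhead are assumptions, not measured values.
PARAMS = 7e9
BYTES_PER_PARAM = {"fp32": 4, "bf16": 2, "fp8": 1}

def weight_gib(dtype: str, params: float = PARAMS) -> float:
    """GiB occupied by the weights alone at the given precision."""
    return params * BYTES_PER_PARAM[dtype] / 1024**3
```

bf16 weights come to roughly 13 GiB, leaving headroom for activations and the processor on a 24 GB card; fp32 (about 26 GiB) would not fit, and fp8 halves the footprint again at some accuracy cost.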
- You can edit the scripts to load a lower-precision quantization of the model, such as fp8, though accuracy may suffer.
- If torch sees that your first GPU supports flash attention and the others do not, it will assume all the cards do and throw an exception. The workaround is to use `CUDA_VISIBLE_DEVICES` to expose only GPUs with matching flash attention support, e.g. `CUDA_VISIBLE_DEVICES=0 python3 main.py` (or `caption.py`). You can use this either to run only on the flash-attention-capable card, or to exclude it and run on the other cards without flash attention.