---
license: cc-by-nc-4.0
---

# Molmo 7B Flux Dev Image Captioner

![Screenshot](example.png)

A simple Python and Gradio script that uses Molmo 7B for image captioning. The prompt is currently written to produce captions that work well for Flux Dev LoRA training, but you can adjust it to suit other models' captioning styles.

## Install

1. Clone the repo.
2. `cd` to "models" and clone Molmo-7B-D-0924:

   ```
   git lfs install
   git clone https://huggingface.co/allenai/Molmo-7B-D-0924
   ```

   Since the 4-bit quant isn't that large, it is included here; there's no need to clone it separately. The full 32-bit version is big, so it's left up to you to clone it if you want it.
3. Create a python3 venv, or use conda to create an environment, e.g.:

   ```
   conda create -n caption python=3.11
   ```
4. Activate your environment, e.g.:

   ```
   conda activate caption
   ```
5. Install the dependencies:

   ```
   pip3 install -r requirements.txt
   ```

## Usage

Run the Gradio version:

```
python3 main.py
```

1. Create a zip file of images.
2. Upload it.
3. Process it.
4. Click the button to download the caption zip file; the link is at the top of the page.

Run the command-line version:

```
python3 caption.py
```

(uses Molmo at bf16 for more accuracy; needs a 24GB GPU)

```
python3 caption.py -q
```

(uses Molmo at int4; should be fine with a 12GB GPU)

1. Make sure your images are in the "images" directory.
2. Captions will be placed in the "images" directory.

## Notes

- main.py (the Gradio version) does not yet support the quant model.
- If torch sees that your first GPU supports flash attention and the others do not, it will assume all the cards do, and it will throw an exception. A workaround is to use, for example, `CUDA_VISIBLE_DEVICES=0 python3 main.py` (or `caption.py`) to force torch to ignore the cards without flash attention support, or conversely to exclude the flash-attention-capable card so torch uses your other cards without it.
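The command-line flow above reads images from the "images" directory and writes captions back into it. A minimal sketch of that sidecar-caption convention (one `.txt` file per image, same stem, which is the layout Flux Dev LoRA trainers generally expect) might look like the following. The helper names here (`iter_images`, `write_caption`) are illustrative, not the actual functions in caption.py:

```python
from pathlib import Path

# File extensions treated as images; an assumption, not the repo's actual list.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

def iter_images(directory: Path):
    """Yield image files in the directory, skipping other file types."""
    for path in sorted(directory.iterdir()):
        if path.suffix.lower() in IMAGE_EXTS:
            yield path

def write_caption(image_path: Path, caption: str) -> Path:
    """Write the caption next to the image as a .txt file with the same stem."""
    caption_path = image_path.with_suffix(".txt")
    caption_path.write_text(caption.strip() + "\n", encoding="utf-8")
    return caption_path
```

For example, a caption generated for `images/cat01.png` would be saved as `images/cat01.txt`, so the image/caption pairs can be picked up directly by a LoRA training script.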