License: cc-by-nc-4.0

Molmo 7B Flux Dev Image Captioner

A simple Python and Gradio script that uses Molmo 7B for image captioning. The prompt is currently written to produce captions that work well for Flux Dev LoRA training, but you can adjust it to suit other models' captioning styles.

Install:

  1. Clone the repo
  2. cd into "models" and choose a model:

For max precision, clone Molmo-7B-D-0924:

       git lfs install
       git clone https://huggingface.co/allenai/Molmo-7B-D-0924 

You'll need a 24GB GPU, since the model loads in bf16.
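As a rough sanity check on that requirement: bf16 stores each parameter in 2 bytes, so the weights alone of a roughly-7B-parameter model take about 14 GB before activations, the KV cache, and the vision encoder are counted. (The 7e9 parameter count below is an approximation for illustration, not the exact figure.)

```python
# Back-of-the-envelope VRAM estimate for Molmo 7B at bf16.
# 7e9 is an approximate parameter count, used only for illustration.
params = 7e9
bytes_per_param_bf16 = 2  # bf16 = 16 bits = 2 bytes per parameter
weights_gb = params * bytes_per_param_bf16 / 1e9
print(f"bf16 weights alone: ~{weights_gb:.0f} GB")  # ~14 GB
# Add activations, KV cache, and the vision tower on top of that,
# and a 16GB card is marginal -- hence the 24GB recommendation.
```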

For less precision but a much lower memory requirement, clone molmo-7B-D-bnb-4bit:

      git lfs install
      git clone https://huggingface.co/cyan2k/molmo-7B-D-bnb-4bit

A 12GB GPU should be fine. Note that the 4-bit quant produces captions that are not just less accurate but noticeably different in their descriptions. YMMV.
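The same estimate explains why 12GB is enough here: at 4 bits, each parameter takes only half a byte.

```python
# Same back-of-the-envelope estimate at 4-bit quantization.
params = 7e9            # approximate parameter count, as above
bytes_per_param_4bit = 0.5  # 4 bits = half a byte per parameter
weights_gb_4bit = params * bytes_per_param_4bit / 1e9
print(f"4-bit weights alone: ~{weights_gb_4bit:.1f} GB")  # ~3.5 GB
# Even with activations and the vision encoder, this fits a 12GB card.
```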

  3. Create a python3 venv, or use conda to create an environment, e.g.: conda create -n caption python=3.11

  4. Activate your environment, e.g.: conda activate caption

  5. Install the dependencies: pip3 install -r requirements.txt

  6. Run the gradio version: python3 main.py (uses the original Molmo model at bf16) or python3 main.py -q (uses the 4-bit quant Molmo model)

    1. Create a zip file of your images
    2. Upload it
    3. Process it
    4. Click the button to download the caption zip file; the link is at the top of the page
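The first step above can be scripted. A minimal sketch using Python's standard zipfile module (the function name and directory layout are examples, not part of this repo):

```python
import zipfile
from pathlib import Path

def zip_images(src_dir: str, out_zip: str) -> int:
    """Pack every file in src_dir into out_zip; returns the file count.

    Hypothetical helper for preparing the upload -- not part of main.py.
    """
    files = sorted(p for p in Path(src_dir).iterdir() if p.is_file())
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in files:
            zf.write(p, arcname=p.name)  # store flat, no directory prefix
    return len(files)
```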

    Or run the command-line version: python3 caption.py (uses the original Molmo model at bf16) or python3 caption.py -q (uses the 4-bit quant Molmo model)

    1. Make sure your images are in the "images" directory
    2. Captions will be written to the "images" directory
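Caption files in LoRA training datasets conventionally sit next to each image as a .txt with the same basename, which is presumably what lands in "images" here. A sketch of that convention (the helper name is hypothetical, not from caption.py):

```python
from pathlib import Path

def write_caption(image_path: str, caption: str) -> Path:
    """Write a caption next to the image as <basename>.txt.

    Illustrates the usual LoRA dataset layout; hypothetical helper,
    not taken from this repo's caption.py.
    """
    txt = Path(image_path).with_suffix(".txt")
    txt.write_text(caption.strip() + "\n", encoding="utf-8")
    return txt
```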

Note:

  • If torch sees that your first GPU supports flash attention and the others do not, it will assume all the cards do and throw an exception. A workaround is to restrict which GPUs torch can see, e.g. "CUDA_VISIBLE_DEVICES=0 python3 main.py (or caption.py)", so that only cards with the same flash-attention support are used: hide the flash-attention-capable card to run on the others without it, or, conversely, use it to exclude the GPUs that don't support it.
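Equivalently, you can pin the visible devices from inside Python, as long as it happens before torch is imported, since torch reads the variable at CUDA initialization. (The index "0" is just an example; list whichever indices share the same flash-attention capability.)

```python
import os

# Set this BEFORE `import torch`; torch reads it when CUDA initializes,
# so setting it afterwards has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose only GPU index 0
# Multiple cards can be listed too, e.g. "0,2" to skip GPU 1.

# import torch  # must come after the line above for the setting to apply
```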