---
license: cc-by-nc-4.0
---

Molmo 7B Flux Dev Image Captioner

A simple Python and Gradio script that uses Molmo 7B for image captioning. The prompt is currently written to produce captions that work well for Flux Dev LoRA training, but you can adjust it to suit other models' captioning styles.
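The prompt is just a string passed to the model alongside each image, so adapting it for another training target is a small edit. A hypothetical example of what such a prompt might look like (the actual prompt in the scripts may differ):

```python
# Hypothetical captioning prompt -- the actual prompt in main.py/caption.py
# may differ. Molmo receives this as the text instruction for each image.
PROMPT = (
    "Describe this image in one detailed paragraph suitable for training a "
    "text-to-image model. Mention the subject, style, lighting, and "
    "composition. Do not speculate about anything not visible in the image."
)

# For a model trained on tag-style captions, you could swap in something like:
TAG_PROMPT = "List the main subjects and visual styles as short comma-separated tags."
```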

Install:

  1. Clone the repo.

  2. cd to "models" and clone Molmo-7B-D-0924:

        git lfs install
        git clone https://huggingface.co/allenai/Molmo-7B-D-0924
    
  3. Create a python3 venv, or use conda to create an environment, e.g. `conda create -n caption python=3.11`

  4. Activate your environment, e.g. `conda activate caption`

  5. Install the dependencies: `pip3 install -r requirements.txt`

  6. Run the Gradio version: `python3 main.py`

    1. create a zip file of images
    2. upload it
    3. process it
    4. click the button to download the caption zip file; the link is at the top of the page

  7. Or run the command-line version: `python3 caption.py`

    1. make sure your images are in the "images" directory
    2. captions will be placed in the "images" directory
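The command-line flow above amounts to: scan the "images" directory, caption each image, and write a sidecar `.txt` file with the same basename. A minimal sketch of that loop, where `caption_image` is a placeholder standing in for the actual Molmo inference call in caption.py:

```python
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

def caption_image(path: Path) -> str:
    """Placeholder for the real Molmo inference call in caption.py."""
    return f"a caption for {path.name}"

def caption_directory(images_dir: str = "images") -> list[Path]:
    """Write a sidecar .txt caption next to every image in the directory."""
    written = []
    for img in sorted(Path(images_dir).iterdir()):
        if img.suffix.lower() not in IMAGE_EXTS:
            continue  # skip non-image files
        txt = img.with_suffix(".txt")  # e.g. cat.png -> cat.txt
        txt.write_text(caption_image(img), encoding="utf-8")
        written.append(txt)
    return written
```

The `.txt`-next-to-image layout is what most LoRA training tools (including Flux Dev trainers) expect as a dataset format.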

Note:

  • The scripts are configured to load the model at bf16 precision, which keeps accuracy high while roughly halving memory use compared to fp32. This should fit on a single 24GB GPU.
  • You can edit the scripts to use a lower quant of the model, such as fp8, though accuracy may be lower.
  • If torch sees that your first GPU supports flash attention while the others do not, it will assume all of the cards do and throw an exception. A workaround is to restrict which GPUs are visible, e.g. `CUDA_VISIBLE_DEVICES=0 python3 main.py` (or `caption.py`): either hide the flash-attention-capable card so that torch uses your other cards without it, or instead hide the GPUs that lack flash attention support.
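A rough back-of-envelope check of the 24GB claim, assuming approximately 7B parameters (the exact count for Molmo-7B-D may differ slightly): bf16 stores 2 bytes per parameter and fp8 stores 1.

```python
PARAMS = 7e9  # approximate parameter count; the exact figure may differ

def weight_gib(params: float, bytes_per_param: int) -> float:
    """Approximate weight memory in GiB at a given precision."""
    return params * bytes_per_param / 2**30

bf16_gib = weight_gib(PARAMS, 2)  # ~13 GiB of weights
fp8_gib = weight_gib(PARAMS, 1)   # ~6.5 GiB of weights
```

At bf16 the weights alone take roughly 13 GiB, leaving headroom on a 24GB card for activations and the vision encoder; fp8 halves that again, at some cost in caption accuracy.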