---
license: mit
---

# Florence-2-large-PromptGen v2.0

This upgrade is based on PromptGen 1.5, with several new features added to the model.

## Features:

* Improved caption quality for `<GENERATE_TAGS>`, `<DETAILED_CAPTION>` and `<MORE_DETAILED_CAPTION>`.
* A new `<ANALYZE>` instruction, which helps the model better understand the image composition of the input image.
* Memory efficient compared to other models! This is a really lightweight caption model that uses a little more than 1 GB of VRAM and produces lightning-fast, high-quality image captions.
* Designed to handle image captions for the Flux model for both T5XXL CLIP and CLIP_L. The new Miaoshou Tagger node called "Flux CLIP Text Encode" eliminates the need to run two separate tagger tools for caption creation. You can easily populate both CLIPs in a single generation, significantly boosting speed when working with Flux models.

## Instruction prompts:

* `<GENERATE_TAGS>` generates prompts as danbooru-style tags
* `<CAPTION>` a one-line caption for the image
* `<DETAILED_CAPTION>` a structured caption format that detects the position of the subjects in the image
* `<MORE_DETAILED_CAPTION>` a very detailed description of the image
* `<ANALYZE>` image composition analysis mode
* `<MIXED_CAPTION>` a mixed caption style combining a more detailed caption with tags; this is extremely useful for the FLUX model when using T5XXL and CLIP_L together. A new node in MiaoshouTagger ComfyUI has been added to support this instruction.
* `<MIXED_CAPTION_PLUS>` combines the power of the mixed caption with `<ANALYZE>`.
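Because the tag-style and caption-style instructions target different Flux text encoders, their outputs can be paired on the client side before conditioning. A minimal sketch of that pairing — the helper name `build_flux_prompts`, the dict keys, and the sample outputs are illustrative assumptions, not part of the model's API:

```python
def build_flux_prompts(tags: str, caption: str) -> dict:
    """Pair a <GENERATE_TAGS> result with a <MORE_DETAILED_CAPTION> result
    so each Flux text encoder receives the style it handles best:
    CLIP_L gets short tag-like text, T5XXL gets the natural-language caption.
    (Hypothetical helper for illustration.)"""
    return {
        "clip_l": tags.strip(),
        "t5xxl": caption.strip(),
    }

# Example with placeholder model outputs:
prompts = build_flux_prompts(
    "1girl, red_scarf, city_street, night",
    "A young woman wearing a red scarf walks down a neon-lit city street at night.",
)
```

The MiaoshouAI "Flux CLIP Text Encode" node performs this kind of pairing for you inside ComfyUI; the sketch only shows the idea.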
## Version History:

For version 2.0, you will notice the following changes:

1. `<MIXED_CAPTION_PLUS>`, along with a beta node in ComfyUI for partial image analysis
2. A new `<ANALYZE>` instruction
3. Much improved accuracy for `<GENERATE_TAGS>`, `<DETAILED_CAPTION>` and `<MORE_DETAILED_CAPTION>`

## How to use:

To use this model, you can load it directly from the Hugging Face Model Hub:

```python
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained("MiaoshouAI/Florence-2-large-PromptGen-v2.0", trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained("MiaoshouAI/Florence-2-large-PromptGen-v2.0", trust_remote_code=True)

prompt = "<MORE_DETAILED_CAPTION>"

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)

generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    do_sample=False,
    num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task=prompt, image_size=(image.width, image.height))

print(parsed_answer)
```

## Use under MiaoshouAI Tagger ComfyUI

If you just want to use this model, you can use it under ComfyUI-Miaoshouai-Tagger:

https://github.com/miaoshouai/ComfyUI-Miaoshouai-Tagger

Detailed usage and installation instructions are provided there. (If you have already installed MiaoshouAI Tagger, you need to update the node in ComfyUI Manager first or use `git pull` to get the latest update.)
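The `post_process_generation` call above returns a dict keyed by the task token. A small sketch of pulling the caption text out of that dict and stripping leftover special tokens — the helper `extract_caption` and the token list are illustrative assumptions, not part of the processor API:

```python
def extract_caption(parsed_answer: dict, task: str) -> str:
    """Return the generated text for `task` from the post-processed dict,
    with common special tokens removed. (Hypothetical helper.)"""
    text = parsed_answer.get(task, "")
    for token in ("<s>", "</s>", "<pad>"):
        text = text.replace(token, "")
    return text.strip()

# Example with a placeholder result dict shaped like the output above:
answer = {"<MORE_DETAILED_CAPTION>": "</s><s>A red sports car parked on a street.</s>"}
print(extract_caption(answer, "<MORE_DETAILED_CAPTION>"))
# → A red sports car parked on a street.
```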