---
license: mit
---
# Florence-2-large-PromptGen v2.0
This upgrade is based on PromptGen 1.5 and adds several new features to the model:

## Features:
* Improved caption quality for \<GENERATE_TAGS\>, \<DETAILED_CAPTION\> and \<MORE_DETAILED_CAPTION\>.
<img style="width:100%; height:100%" src="https://msdn.miaoshouai.com/miaoshou/bo/2024-11-05_03-15-15.png" />
<img style="width:100%; height:100%" src="https://msdn.miaoshouai.com/miaoshou/bo/2024-11-05_03-40-29.png" />
* A new \<ANALYZE\> instruction, which helps the model better understand the composition of the input image.
<img style="width:100%; height:100%" src="https://msdn.miaoshouai.com/miaoshou/bo/2024-11-05_03-42-58.png" />
<img style="width:100%; height:100%" src="https://msdn.miaoshouai.com/miaoshou/bo/2024-11-05_07-42-36.png" />
* Memory efficient compared to other models! This is a very lightweight caption model that uses a little more than 1 GB of VRAM and produces lightning-fast, high-quality image captions (see the loading sketch after this list).
<img style="width:100%; height:100%" src="https://msdn.miaoshouai.com/miaoshou/bo/2024-09-05_12-56-39.png" />
* Designed to handle image captions for the Flux model, covering both the T5XXL and CLIP_L text encoders. The new Miaoshou Tagger node, "Flux CLIP Text Encode", eliminates the need to run two separate tagger tools for caption creation: you can easily populate both CLIPs in a single generation, significantly boosting speed when working with Flux models.
<img style="width:100%; height:100%" src="https://msdn.miaoshouai.com/miaoshou/bo/2024-09-05_14-11-02.png" />
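
The VRAM figure above depends on precision and generation settings. As a rough, minimal sketch (assuming a CUDA GPU and the standard `transformers` loading API; loading in `float16` is our assumption here, not part of the model card), you can load the weights in half precision and check their footprint yourself:

```python
# Minimal sketch: load the model in float16 and report the GPU memory its
# weights occupy. float16 is an assumption to keep VRAM low; actual usage
# also depends on image size and generation settings.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "MiaoshouAI/Florence-2-large-PromptGen-v2.0",
    torch_dtype=torch.float16,
    trust_remote_code=True,
).to("cuda")

print(f"Model weights on GPU: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
```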

## Instruction prompt:
\<GENERATE_TAGS\> generates the prompt as danbooru-style tags<br>
\<CAPTION\> a one-line caption for the image<br>
\<DETAILED_CAPTION\> a structured caption format that detects the positions of the subjects in the image<br>
\<MORE_DETAILED_CAPTION\> a very detailed description of the image<br>
\<ANALYZE\> image composition analysis mode<br>
\<MIXED_CAPTION\> a mixed style that combines the more detailed caption with tags; this is extremely useful for the FLUX model when using T5XXL and CLIP_L together. A new node in MiaoshouTagger for ComfyUI has been added to support this instruction.<br>
\<MIXED_CAPTION_PLUS\> combines the power of the mixed caption with \<ANALYZE\>.<br>

## Version History:
For version 2.0, you will notice the following changes:
1. A new \<ANALYZE\> instruction, along with a beta node in ComfyUI for partial image analysis
2. A new \<MIXED_CAPTION_PLUS\> instruction
3. Much improved accuracy for \<GENERATE_TAGS\>, \<DETAILED_CAPTION\> and \<MORE_DETAILED_CAPTION\>

## How to use:

To use this model, you can load it directly from the Hugging Face Model Hub:

```python
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained("MiaoshouAI/Florence-2-large-PromptGen-v2.0", trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained("MiaoshouAI/Florence-2-large-PromptGen-v2.0", trust_remote_code=True)

# The instruction prompt (see the list above) is passed as the text input.
prompt = "<MORE_DETAILED_CAPTION>"

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)

generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    do_sample=False,
    num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

parsed_answer = processor.post_process_generation(generated_text, task=prompt, image_size=(image.width, image.height))

print(parsed_answer)
```

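The other instruction prompts from the list above are used the same way: the task token is passed verbatim as the text prompt. As a small sketch (reusing the `model`, `processor`, `image`, and `device` defined in the example above), you could run several tasks on the same image:

```python
# Sketch: run several PromptGen instruction prompts on the same image.
# Assumes `model`, `processor`, `image`, and `device` from the example above.
for task in ["<GENERATE_TAGS>", "<CAPTION>", "<DETAILED_CAPTION>", "<ANALYZE>", "<MIXED_CAPTION>"]:
    inputs = processor(text=task, images=image, return_tensors="pt").to(device)
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        do_sample=False,
        num_beams=3,
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    parsed = processor.post_process_generation(generated_text, task=task, image_size=(image.width, image.height))
    print(task, parsed)
```
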
## Use under MiaoshouAI Tagger ComfyUI
If you just want to use this model, you can use it through ComfyUI-Miaoshouai-Tagger:

https://github.com/miaoshouai/ComfyUI-Miaoshouai-Tagger

Detailed usage and installation instructions are provided there.
(If you have already installed MiaoshouAI Tagger, you need to update the node in ComfyUI Manager first, or use git pull to get the latest update.)