Fuxiao committed on
Commit 2a9ae08
1 Parent(s): 562fc2c

Update README.md

Files changed (1): README.md (+134, −3)

README.md CHANGED
@@ -1,3 +1,134 @@
- ---
- license: cc-by-nc-nd-4.0
- ---
---
license: cc-by-nc-sa-4.0
library_name: transformers
pipeline_tag: text-generation
tags:
- Eagle
- VLM
---

# Eagle Model Card

## Model details

**Model type:**
Eagle is a family of vision-centric, high-resolution multimodal LLMs. It presents a thorough exploration of strengthening multimodal LLM perception with a mixture of vision encoders and different input resolutions. The model uses a channel-concatenation-based "CLIP+X" fusion to combine vision experts with different architectures (ViTs/ConvNets) and knowledge (detection/segmentation/OCR/SSL). The resulting family of Eagle models supports input resolutions of over 1K and obtains strong results on multimodal LLM benchmarks, especially on resolution-sensitive tasks such as optical character recognition and document understanding.
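
The channel-concatenation fusion described above can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed dimensions, not the repository's implementation: it assumes each expert's token grid has already been aligned to the same number of tokens, and the function name, projector, and feature sizes are invented for the example.

```python
import torch

def channel_concat_fusion(expert_features, projector):
    # expert_features: list of tensors, each (batch, num_tokens, dim_i),
    # one per vision expert, spatially aligned to the same token grid.
    fused = torch.cat(expert_features, dim=-1)  # (batch, num_tokens, sum(dim_i))
    return projector(fused)                     # project into the LLM embedding space

# Two hypothetical experts (e.g. a CLIP ViT and a ConvNet expert); sizes are assumptions.
batch, tokens = 2, 576
clip_feats = torch.randn(batch, tokens, 1024)
conv_feats = torch.randn(batch, tokens, 768)
projector = torch.nn.Linear(1024 + 768, 4096)   # into an assumed LLM hidden size

out = channel_concat_fusion([clip_feats, conv_feats], projector)
print(out.shape)  # torch.Size([2, 576, 4096])
```

The key design point is that fusion happens per token along the channel dimension, so adding another expert only widens the projector's input, leaving the token count seen by the LLM unchanged.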

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64618b9496259bec21d44704/BdAIMvo--yG7SpG5xDeYN.png)

**Paper or resources for more information:**
https://github.com/NVlabs/Eagle

[arXiv](https://arxiv.org/pdf/2408.15998) / [Demo](https://huggingface.co/spaces/NVEagle/Eagle-X5-13B-Chat) / [Hugging Face](https://huggingface.co/papers/2408.15998)

```bibtex
@misc{shi2024eagleexploringdesignspace,
      title={Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders},
      author={Min Shi and Fuxiao Liu and Shihao Wang and Shijia Liao and Subhashree Radhakrishnan and De-An Huang and Hongxu Yin and Karan Sapra and Yaser Yacoob and Humphrey Shi and Bryan Catanzaro and Andrew Tao and Jan Kautz and Zhiding Yu and Guilin Liu},
      year={2024},
      eprint={2408.15998},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.15998},
}
```

## License
- The code is released under the Apache 2.0 license, as found in the [LICENSE](./LICENSE) file.
- The pretrained weights are released under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
- The service is a research preview intended for non-commercial use only and is subject to the following licenses and terms:
  - [Model License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA
  - [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI
  - [Dataset Licenses](https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/LICENSE) for each dataset used during training.

**Where to send questions or comments about the model:**
https://github.com/NVlabs/Eagle/issues

## Model Architecture

**Architecture Type:** Transformer

## Input

**Input Type:** Image, Text

**Input Format:** Red, Green, Blue; String

## Output

**Output Type:** Text

**Output Format:** String
## Inference
```python
import torch
from PIL import Image

from eagle.constants import (IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN,
                             DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN)
from eagle.conversation import conv_templates
from eagle.model.builder import load_pretrained_model
from eagle.mm_utils import tokenizer_image_token, get_model_name_from_path, process_images

model_path = "NVEagle/Eagle-X5-13B-Chat"
conv_mode = "vicuna_v1"
image_path = "assets/georgia-tech.jpeg"
input_prompt = "Describe this image."

# Load the tokenizer, model, and image processor.
model_name = get_model_name_from_path(model_path)
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, None, model_name, False, False)

# Prepend the image placeholder token(s) to the text prompt.
if model.config.mm_use_im_start_end:
    input_prompt = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + input_prompt
else:
    input_prompt = DEFAULT_IMAGE_TOKEN + '\n' + input_prompt

# Build the conversation-style prompt.
conv = conv_templates[conv_mode].copy()
conv.append_message(conv.roles[0], input_prompt)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

# Preprocess the image and tokenize the prompt.
image = Image.open(image_path).convert('RGB')
image_tensor = process_images([image], image_processor, model.config)[0]
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt')

input_ids = input_ids.to(device='cuda', non_blocking=True)
image_tensor = image_tensor.to(dtype=torch.float16, device='cuda', non_blocking=True)

with torch.inference_mode():
    output_ids = model.generate(
        input_ids.unsqueeze(0),
        images=image_tensor.unsqueeze(0),
        image_sizes=[image.size],
        do_sample=True,
        temperature=0.2,
        top_p=0.5,
        num_beams=1,
        max_new_tokens=256,
        use_cache=True)

outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(f"Image: {image_path}\nPrompt: {input_prompt}\nOutput: {outputs}")
```
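
For intuition, the `tokenizer_image_token` step above splices a sentinel image-token id into the tokenized prompt, so the model can later substitute visual features at that position. Below is a simplified, self-contained sketch of that splicing, not the repository's implementation; the toy tokenizer and the `-200` sentinel value are assumptions made for the example.

```python
IMAGE_TOKEN = "<image>"
IMAGE_TOKEN_INDEX = -200  # assumed sentinel id, out of any real vocabulary range

def splice_image_token(prompt, tokenize, image_token_index=IMAGE_TOKEN_INDEX):
    """Tokenize the text around each IMAGE_TOKEN and insert the sentinel id between chunks."""
    chunks = [tokenize(chunk) for chunk in prompt.split(IMAGE_TOKEN)]
    ids = []
    for i, chunk in enumerate(chunks):
        if i > 0:
            ids.append(image_token_index)  # placeholder later replaced by visual features
        ids.extend(chunk)
    return ids

# Toy whitespace "tokenizer" mapping each new word to the next free id, for illustration only.
vocab = {}
toy_tokenize = lambda text: [vocab.setdefault(w, len(vocab)) for w in text.split()]

ids = splice_image_token("USER: <image> Describe this image.", toy_tokenize)
print(ids)  # [0, -200, 1, 2, 3]
```

Because the sentinel is negative, it can never collide with a real token id, which makes it easy for the model code to find and replace with image embeddings.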

**[Preferred/Supported] Operating System(s):** <br>
Linux

## Intended use
**Primary intended uses:**
The primary use of Eagle is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets the requirements for the relevant industry and use case and addresses unforeseen product misuse.