Update README.md

- remove eagle arxiv link
- remove demo code related to github
- add license

README.md CHANGED
````diff
@@ -1,3 +1,8 @@
+---
+license: apache-2.0
+base_model:
+- Qwen/Qwen2.5-7B-Instruct
+---
 # Valley 2.0
 ## Introduction
 Valley [github](https://github.com/bytedance/Valley) is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data, which is developed by ByteDance. Our model not only
@@ -7,11 +12,13 @@ Valley [github](https://github.com/bytedance/Valley) is a cutting-edge multimoda
 
 when evaluated against models of the same scale.
 
+## Release
+- [12/23] 🔥 Announcing [Valley-Qwen2.5-7B](https://huggingface.co/ByteDance)!
 
 ## Valley-Eagle
 The foundational version of Valley is a multimodal large model aligned with Siglip and Qwen2.5, incorporating LargeMLP and ConvAdapter to construct the projector.
 
-- In the final version, we also referenced
+- In the final version, we also referenced Eagle, introducing an additional VisionEncoder that can flexibly adjust the number of tokens and is parallelized with the original visual tokens.
 - This enhancement supplements the model’s performance in extreme scenarios, and we chose the Qwen2vl VisionEncoder for this purpose.
 
 and the model structure is shown as follows:
@@ -21,85 +28,15 @@ and the model structure is shown as follows:
 </div>
 
 
-## Release
-- [12/23] 🔥 Announcing [Valley-Qwen2.5-7B](https://huggingface.co/ByteDance)!
-
 ## Environment Setup
 ``` bash
 pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121
 pip install -r requirements.txt
 ```
 
-## Inference Demo
-- Single image
-``` python
-from valley_eagle_chat import ValleyEagleChat
-model = ValleyEagleChat(
-    model_path='path/to/ckpt',
-    padding_side = 'left',
-)
-
-url = 'http://p16-goveng-va.ibyteimg.com/tos-maliva-i-wtmo38ne4c-us/4870400481414052507~tplv-wtmo38ne4c-jpeg.jpeg'
-img = urllib.request.urlopen(url=url, timeout=5).read()
-
-request = {
-    "chat_history": [
-        {'role': 'system', 'content': 'You are Valley, developed by ByteDance. Your are a helpfull Assistant.'},
-        {'role': 'user', 'content': 'Describe the given image.'},
-    ],
-    "images": [img],
-}
-
-result = model(request)
-print(f"\n>>> Assistant:\n")
-print(result)
-```
-
-- Video
-``` python
-from valley_eagle_chat import ValleyEagleChat
-import decord
-import requests
-import numpy as np
-from torchvision import transforms
-
-model = ValleyEagleChat(
-    model_path='path/to/ckpt',
-    padding_side = 'left',
-)
-
-url = 'https://videos.pexels.com/video-files/29641276/12753127_1920_1080_25fps.mp4'
-video_file = './video.mp4'
-response = requests.get(url)
-if response.status_code == 200:
-    with open("video.mp4", "wb") as f:
-        f.write(response.content)
-else:
-    print("download error!")
-    exit(1)
-
-video_reader = decord.VideoReader(video_file)
-decord.bridge.set_bridge("torch")
-video = video_reader.get_batch(
-    np.linspace(0, len(video_reader) - 1, 8).astype(np.int_)
-).byte()
-print([transforms.ToPILImage()(image.permute(2, 0, 1)).convert("RGB") for image in video])
-
-request = {
-    "chat_history": [
-        {'role': 'system', 'content': 'You are Valley, developed by ByteDance. Your are a helpfull Assistant.'},
-        {'role': 'user', 'content': 'Describe the given video.'},
-    ],
-    "images": [transforms.ToPILImage()(image.permute(2, 0, 1)).convert("RGB") for image in video],
-}
-result = model(request)
-print(f"\n>>> Assistant:\n")
-print(result)
-```
-
 ## License Agreement
 All of our open-source models are licensed under the Apache-2.0 license.
 
 
 ## Citation
-Coming Soon!
+Coming Soon!
````
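The new Valley-Eagle bullets describe an additional VisionEncoder whose tokens are "parallelized with the original visual tokens". The diff does not spell out how the two token streams are combined; a minimal sketch of the simplest reading, concatenation along the sequence axis before the projector, follows. All shapes and variable names here are illustrative assumptions, not Valley's actual configuration.

```python
import numpy as np

# Illustrative shapes only (assumptions, not Valley's real dimensions):
# 196 patch tokens from the original SigLIP branch, 64 tokens from the
# extra Qwen2-VL-style branch, both in a shared hidden size of 1024.
siglip_tokens = np.zeros((196, 1024))
eagle_tokens = np.zeros((64, 1024))

# Simplest reading of "parallelized with the original visual tokens":
# concatenate the two token sequences along the sequence axis and feed
# the combined sequence to the projector.
visual_tokens = np.concatenate([siglip_tokens, eagle_tokens], axis=0)
print(visual_tokens.shape)  # (260, 1024)
```

A flexible token count on the second branch would then simply change the first dimension of `eagle_tokens` without touching the rest of the pipeline.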
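The commit drops the Inference Demo from the README (per the commit message, the demo code now lives in the GitHub repo). For readers of this page, here is a small sketch of the two reusable pieces of the removed code: the request schema passed to `ValleyEagleChat`, and the uniform 8-frame sampling used by the video demo. The function names are mine; the removed demo's system-prompt typo ("Your are a helpfull Assistant") is corrected here.

```python
import numpy as np

def build_request(prompt, images):
    """Request dict in the schema the removed ValleyEagleChat demo used."""
    return {
        "chat_history": [
            {"role": "system",
             "content": "You are Valley, developed by ByteDance. You are a helpful Assistant."},
            {"role": "user", "content": prompt},
        ],
        "images": images,  # list of image bytes or PIL images
    }

def sample_frame_indices(num_frames, num_samples=8):
    """Uniformly spaced frame indices, endpoints included, matching the
    removed demo's np.linspace(0, len(video_reader) - 1, 8)."""
    return np.linspace(0, num_frames - 1, num_samples).astype(np.int_).tolist()

print(sample_frame_indices(100))  # [0, 14, 28, 42, 56, 70, 84, 99]
req = build_request("Describe the given image.", images=[b"<jpeg bytes>"])
print(req["chat_history"][1]["content"])  # Describe the given image.
```

Note that the removed single-image demo also called `urllib.request.urlopen` without importing `urllib.request`; anyone reviving that code from the diff above will need to add the import.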