2024-07-10 04:41:59 | INFO | model_worker | args: Namespace(host='0.0.0.0', port=40001, worker_address='http://10.140.60.25:40001', controller_address='http://10.140.60.209:10075', model_path='share_internvl/InternVL2-1B/', model_name=None, device='cuda', limit_model_concurrency=5, stream_interval=1, load_8bit=False)
2024-07-10 04:41:59 | INFO | model_worker | Loading the model InternVL2-1B on worker 19edde ...
2024-07-10 04:41:59 | WARNING | transformers.tokenization_utils_base | Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-07-10 04:41:59 | WARNING | transformers.tokenization_utils_base | Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-07-10 04:42:01 | INFO | model_worker | Register to controller
2024-07-10 04:42:01 | ERROR | stderr | INFO:     Started server process [99741]
2024-07-10 04:42:01 | ERROR | stderr | INFO:     Waiting for application startup.
2024-07-10 04:42:01 | ERROR | stderr | INFO:     Application startup complete.
2024-07-10 04:42:01 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:40001 (Press CTRL+C to quit)
2024-07-10 04:42:16 | INFO | model_worker | Send heart beat. Models: ['InternVL2-1B']. Semaphore: None. global_counter: 0
2024-07-10 04:42:31 | INFO | model_worker | Send heart beat. Models: ['InternVL2-1B']. Semaphore: None. global_counter: 0
2024-07-10 04:42:46 | INFO | model_worker | Send heart beat. Models: ['InternVL2-1B']. Semaphore: None. global_counter: 0
2024-07-10 04:43:00 | INFO | stdout | INFO:     10.140.60.209:55826 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-10 04:43:01 | INFO | model_worker | Send heart beat. Models: ['InternVL2-1B']. Semaphore: None. global_counter: 0
2024-07-10 04:43:15 | INFO | stdout | INFO:     10.140.60.209:55900 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-10 04:43:16 | INFO | stdout | INFO:     10.140.60.209:55938 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-10 04:43:17 | INFO | model_worker | Send heart beat. Models: ['InternVL2-1B']. Semaphore: None. global_counter: 0
2024-07-10 04:43:17 | INFO | model_worker | Send heart beat. Models: ['InternVL2-1B']. Semaphore: Semaphore(value=4, locked=False). global_counter: 1
2024-07-10 04:43:17 | INFO | stdout | INFO:     10.140.60.209:55960 - "POST /worker_generate_stream HTTP/1.1" 200 OK
2024-07-10 04:43:17 | INFO | stdout | history: []
2024-07-10 04:43:17 | INFO | stdout | question: Image-1: <image>
2024-07-10 04:43:17 | INFO | stdout | Describe this image in detail.
2024-07-10 04:43:17 | INFO | stdout | pil_images: [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=1024x1024 at 0x7F5138388130>]
2024-07-10 04:43:17 | INFO | model_worker | dynamic_image_size: True
2024-07-10 04:43:17 | INFO | model_worker | use_thumbnail: True
2024-07-10 04:43:17 | INFO | model_worker | Resize images to 448x448
2024-07-10 04:43:17 | INFO | model_worker | Split images to torch.Size([10, 3, 448, 448])
2024-07-10 04:43:17 | ERROR | stderr | Exception in thread Thread-2 (chat):
2024-07-10 04:43:17 | ERROR | stderr | Traceback (most recent call last):
2024-07-10 04:43:17 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
2024-07-10 04:43:18 | ERROR | stderr |     self.run()
2024-07-10 04:43:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/threading.py", line 946, in run
2024-07-10 04:43:18 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2024-07-10 04:43:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/.cache/huggingface/modules/transformers_modules/InternVL2-1B/modeling_internvl_chat.py", line 280, in chat
2024-07-10 04:43:18 | ERROR | stderr |     generation_output = self.generate(
2024-07-10 04:43:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
2024-07-10 04:43:18 | ERROR | stderr |     return func(*args, **kwargs)
2024-07-10 04:43:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/.cache/huggingface/modules/transformers_modules/InternVL2-1B/modeling_internvl_chat.py", line 330, in generate
2024-07-10 04:43:18 | ERROR | stderr |     outputs = self.language_model.generate(
2024-07-10 04:43:18 | ERROR | stderr | TypeError: transformers.generation.utils.GenerationMixin.generate() got multiple values for keyword argument 'use_cache'
2024-07-10 04:43:27 | INFO | stdout | Caught Unknown Error
2024-07-10 04:43:27 | INFO | model_worker | Send heart beat. Models: ['InternVL2-1B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-10 04:43:32 | INFO | model_worker | Send heart beat. Models: ['InternVL2-1B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
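The `TypeError` in the traceback above is the classic duplicate-keyword failure: `generate()` receives `use_cache` once as an explicit keyword and a second time from an unpacked config dict. A minimal sketch of that failure mode and one plausible workaround; the `generate` stub and `generation_config` dict below are hypothetical stand-ins, not the actual InternVL code:

```python
def generate(input_ids=None, use_cache=True, **kwargs):
    """Stand-in for transformers' GenerationMixin.generate()."""
    return {"use_cache": use_cache, **kwargs}

generation_config = {"max_new_tokens": 64, "use_cache": True}

try:
    # Mirrors the failing call: explicit use_cache=True plus **generation_config,
    # which also carries a use_cache key -> Python rejects the duplicate keyword.
    generate(input_ids=[1, 2, 3], use_cache=True, **generation_config)
except TypeError as e:
    print(f"TypeError: {e}")

# One possible workaround: drop the duplicate key before unpacking.
generation_config.pop("use_cache", None)
out = generate(input_ids=[1, 2, 3], use_cache=True, **generation_config)
```

Whether the duplicate originates in the served model's `chat()`/`generate()` wrappers or in the request's sampling parameters would need to be confirmed against the `modeling_internvl_chat.py` revision in the traceback.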