/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl_eval2/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl_eval2/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
model path is /mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/
12/05 03:12:05 - OpenCompass - WARNING - No previous results to reuse!
12/05 03:12:05 - OpenCompass - INFO - Reusing experiments from 20241205_031205
12/05 03:12:05 - OpenCompass - INFO - Current exp folder: /mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/20241205_031205
12/05 03:12:09 - OpenCompass - INFO - Partitioned into 256 tasks.
[                                                  ] 0/256, elapsed: 0s, ETA:
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=28898 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61736_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=13974 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61745_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=23895 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61741_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=23397 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61732_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=26460 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61734_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=29300 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61743_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=28515 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61729_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=15592 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61727_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=31987 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61709_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=28267 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61730_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=14705 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61715_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=19544 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61728_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=19219 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61707_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=15202 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61738_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=17846 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61739_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=28064 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61489_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18516 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61479_params.py
command torchrun --master_port=17120 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61714_params.py
command torchrun --master_port=13177 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61486_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=19363 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61459_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=22050 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61462_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18484 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61468_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18175 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61496_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=17936 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61742_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=26207 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61474_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=16246 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61744_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=23833 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61708_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=29874 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61491_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=17808 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61731_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18926 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61495_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18853 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61446_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=30638 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61447_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=30944 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61671_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=20476 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61497_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=23623 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61238_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=17105 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61681_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=30007 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61664_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=31226 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61455_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=31610 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61490_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=21005 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61711_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=21719 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61718_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=20883 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61244_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18678 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61661_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=29693 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61712_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=24996 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61657_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=17745 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61678_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=15237 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61677_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=19673 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61698_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=22391 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61656_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18334 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61623_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=26543 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61746_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=23493 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61236_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=12254 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61735_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=20664 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61654_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=23138 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61616_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=30625 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61627_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=28676 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61643_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=22903 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61713_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=29872 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61680_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=21410 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61663_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=29298 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61243_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=22812 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61667_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18038 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61737_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=14937 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61706_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=24286 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61660_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18431 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61665_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=22560 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61702_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=13116 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61669_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=15790 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61703_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=12051 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61659_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=17684 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61682_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=31432 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61619_params.py
command torchrun --master_port=27634 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61539_params.py
command torchrun --master_port=14953 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61615_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=30299 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61631_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=20142 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61675_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=31068 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61537_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=12072 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61621_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=31398 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61653_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=31910 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61543_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=19582 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61634_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=12260 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61672_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=20692 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61622_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=21883 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61733_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=14392 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61710_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=22906 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61674_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=23028 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61676_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=27286 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61644_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=23295 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61642_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18810 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61699_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=28796 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61670_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=29389 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61630_params.py
command torchrun --master_port=24171 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61632_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=31117 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61655_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=12257 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61668_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=20939 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61700_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=20246 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61666_params.py
command torchrun --master_port=28833 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61538_params.py
command torchrun --master_port=19074 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61705_params.py
command torchrun --master_port=20927 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61704_params.py
command torchrun --master_port=29770 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61637_params.py
command torchrun --master_port=19211 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61499_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=16212 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61635_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=15578 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61633_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=22986 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61535_params.py
command torchrun --master_port=21802 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61526_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=16569 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61541_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=24929 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61647_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=26125 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61533_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=14589 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61612_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=30595 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61488_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=27107 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61536_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18904 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61608_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=27067 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61613_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=24349 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61502_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18647 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61624_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=15091 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61508_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=23905 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61697_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=25995 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61673_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=17722 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61513_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=15129 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61617_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=12063 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61701_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=17903 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61517_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=30652 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61485_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=25797 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61636_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=20437 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61463_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=21118 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61662_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=13753 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61516_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=26202 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61626_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=30597 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61528_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=22287 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61506_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=27109 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61481_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=27673 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61527_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=27995 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61540_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=19175 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61638_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=16940 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61641_params.py
command torchrun --master_port=24665 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61509_params.py
command torchrun --master_port=17789 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61477_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=18840 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61453_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=26681 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61629_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=14843 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61449_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=13329 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61510_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=23933 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61628_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=16069 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61469_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=14111 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61639_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=26093 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61525_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=15292 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61523_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=26907 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61465_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=19140 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61640_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=27303 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61611_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=25314 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61524_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=17627 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61512_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=15364 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61646_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=17393 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61504_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=25295 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61501_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=31326 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61620_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=23101 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61237_params.py
use_backend False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=24107 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61203_params.py
command torchrun --master_port=21988 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61192_params.py
command torchrun --master_port=17160 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61220_params.py
command torchrun --master_port=23367 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61225_params.py
command torchrun --master_port=24782 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61205_params.py
command torchrun --master_port=22138 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61440_params.py
command torchrun --master_port=13387 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61614_params.py
command torchrun --master_port=18899 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61206_params.py
command torchrun --master_port=22793 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61498_params.py
command torchrun --master_port=13334 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61460_params.py
command torchrun --master_port=31859 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61445_params.py
command torchrun --master_port=25871 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61618_params.py
command torchrun --master_port=12150 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61241_params.py
command torchrun --master_port=17066 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61216_params.py
command torchrun --master_port=21004 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61461_params.py
command torchrun --master_port=31715 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61240_params.py
command torchrun --master_port=16309 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61199_params.py
command torchrun --master_port=24709 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61228_params.py
command torchrun --master_port=19968 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61507_params.py
command torchrun --master_port=23031 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61208_params.py
command torchrun --master_port=20362 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61505_params.py
command torchrun --master_port=20953 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61514_params.py
command torchrun --master_port=26650 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61493_params.py
command torchrun --master_port=30850 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61515_params.py
command torchrun --master_port=30874 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61482_params.py
command torchrun --master_port=31010 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61204_params.py
command torchrun --master_port=15699 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61214_params.py
command torchrun --master_port=17567 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61494_params.py
command torchrun --master_port=23251 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61235_params.py
command torchrun --master_port=24168 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61425_params.py
command torchrun --master_port=14821 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61500_params.py
command torchrun --master_port=23210 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61480_params.py
command torchrun --master_port=17671 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61224_params.py
command torchrun --master_port=17076 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61542_params.py
command torchrun --master_port=16661 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61444_params.py
command torchrun --master_port=16851 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61609_params.py
command torchrun --master_port=28556 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61511_params.py
command torchrun --master_port=17254 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61450_params.py
command torchrun --master_port=26257 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61679_params.py
command torchrun --master_port=27273 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61451_params.py
command torchrun --master_port=12936 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61483_params.py
command torchrun --master_port=27145 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61464_params.py
command torchrun --master_port=23331 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61487_params.py
command torchrun --master_port=19421 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61478_params.py
command torchrun --master_port=25494 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61217_params.py
command torchrun --master_port=27635 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61196_params.py
command torchrun --master_port=15703 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61226_params.py
command torchrun --master_port=24106 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61458_params.py
command torchrun --master_port=27043 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61212_params.py
command torchrun --master_port=14260 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61222_params.py
command torchrun --master_port=23113 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61522_params.py
command torchrun --master_port=25247 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61607_params.py
command torchrun --master_port=13601 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61443_params.py
command torchrun --master_port=13957 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61213_params.py
command torchrun --master_port=22847 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61215_params.py
command torchrun --master_port=28382 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61231_params.py
command torchrun --master_port=23801 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61476_params.py
command torchrun --master_port=27792 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61503_params.py
command torchrun --master_port=22916 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61610_params.py
command torchrun --master_port=24298 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61441_params.py
command torchrun --master_port=25598 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61197_params.py
command torchrun --master_port=21263 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61467_params.py
command torchrun --master_port=31268 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61221_params.py
command torchrun --master_port=30663 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61457_params.py
command torchrun --master_port=17986 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61239_params.py
command torchrun --master_port=17901 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61202_params.py
command torchrun --master_port=31709 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61209_params.py
command torchrun --master_port=20434 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61232_params.py
command torchrun --master_port=25988 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61193_params.py
command torchrun --master_port=16682 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61473_params.py
command torchrun --master_port=20556 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61470_params.py
command torchrun --master_port=21286 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61234_params.py
command torchrun --master_port=22307 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61484_params.py
command torchrun --master_port=13288 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61210_params.py
command torchrun --master_port=17857 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61518_params.py
command torchrun --master_port=12379 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61219_params.py
command torchrun --master_port=17714 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61201_params.py
command torchrun --master_port=25328 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61230_params.py
command torchrun --master_port=13233 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61454_params.py
command torchrun --master_port=18079 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61245_params.py
command torchrun --master_port=14962 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61472_params.py
command torchrun --master_port=22401 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61471_params.py

command
{'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}  
torchrun --master_port=22509 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61194_params.pycommand{'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'} commandFalse

commandtorchrun --master_port=24380 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61492_params.pycommandcommand  

  command torchrun --master_port=27261 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61223_params.py
commandtorchrun --master_port=22679 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61218_params.py  Falsetorchrun --master_port=29759 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61207_params.pytorchrun --master_port=21903 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61211_params.py{'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}torchrun --master_port=12484 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61452_params.py {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}


commandcommand

 
 torchrun --master_port=29702 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61233_params.py
 torchrun --master_port=15309 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61442_params.py
torchrun --master_port=15713 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61466_params.py
torchrun --master_port=16087 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61190_params.py
command torchrun --master_port=23838 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61242_params.py
{'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=12846 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61195_params.py
command torchrun --master_port=15689 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61191_params.py
command use_backendtorchrun --master_port=27365 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61625_params.py
 False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=12436 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61448_params.py
 False {'abbr': 'internvl-chat-20b', 'batch_size': 4, 'max_out_len': 1024, 'model_args': {'device': 'cuda'}, 'path': '/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/', 'run_cfg': {'num_gpus': 1}, 'type': 'opencompass.models.InternVLChat'}
command torchrun --master_port=20686 --nproc_per_node 1 /mnt/hwfile/wangweiyun/workspace_tcx/opencompass/opencompass/tasks/openicl_infer.py tmp/61227_params.py
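For reference, the worker debug output above amounts to one model entry plus one torchrun launch per task. A minimal sketch, assuming an OpenCompass-style Python config: the dict fields are copied from the log, while the variable names and the small command-building helper are illustrative only, not the repository's actual code.

models = [
    dict(
        type='opencompass.models.InternVLChat',
        abbr='internvl-chat-20b',
        path='/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/'
             'share_internvl/InternVL2-26B-Pretrain/',
        model_args=dict(device='cuda'),
        max_out_len=1024,
        batch_size=4,
        run_cfg=dict(num_gpus=1),
    ),
]

def build_infer_command(master_port: int, params_file: str, nproc_per_node: int = 1) -> str:
    # Compose a launch command of the same shape as the ones logged above
    # (helper name and structure are assumptions, for illustration only).
    script = ('/mnt/hwfile/wangweiyun/workspace_tcx/opencompass/'
              'opencompass/tasks/openicl_infer.py')
    return (f'torchrun --master_port={master_port} '
            f'--nproc_per_node {nproc_per_node} {script} {params_file}')

print(build_infer_command(28382, 'tmp/61231_params.py'))

Running the snippet reproduces the example command shown in the deduplicated block above.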

[                              ] 1/256, 0.0 task/s, elapsed: 558s, ETA: 142355s
...
[>>>>>>>>>>>>>>               ] 128/256, 0.1 task/s, elapsed: 857s, ETA:   857s
...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 256/256, 0.2 task/s, elapsed: 1105s, ETA:     0s
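The task/s and ETA figures in these progress lines follow the usual running-average estimate: rate = done / elapsed and ETA = remaining / rate. A minimal sketch (the function name is illustrative, not the tool's own code) that reproduces the 128/256 line kept above:

def eta_seconds(done: int, total: int, elapsed_s: float) -> float:
    # Remaining time assuming the average completion rate so far holds.
    if done == 0:
        return float('inf')
    rate = done / elapsed_s            # tasks per second
    return (total - done) / rate       # seconds left

print(round(128 / 857, 2))                # ~0.15 task/s (shown as 0.1 in the log)
print(round(eta_seconds(128, 256, 857)))  # 857, matching the ETA shown for 128/256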
12/05 03:30:44 - OpenCompass - INFO - Partitioned into 287 tasks.
[                                                  ] 0/287, elapsed: 0s, ETA:
[                                ] 1/287, 0.0 task/s, elapsed: 22s, ETA:  6365s
...
[>>>>>>>>>>>>>>>               ] 144/287, 3.5 task/s, elapsed: 41s, ETA:    41s
...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 287/287, 2.2 task/s, elapsed: 131s, ETA:     0s
dataset                       version    metric                        mode    internvl-chat-20b
----------------------------  ---------  ----------------------------  ------  -------------------
mmlu                          -          naive_average                 gen     66.50
mmlu_pro                      -          -                             -       -
cmmlu                         -          naive_average                 gen     64.70
ceval                         -          naive_average                 gen     61.75
agieval                       -          -                             -       -
GaokaoBench                   -          weighted_average              gen     63.48
GPQA_extended                 -          -                             -       -
GPQA_main                     -          -                             -       -
GPQA_diamond                  -          -                             -       -
ARC-c                         -          -                             -       -
truthfulqa                    -          -                             -       -
triviaqa                      2121ce     score                         gen     61.82
triviaqa_wiki_1shot           -          -                             -       -
nq                            3dcea1     score                         gen     23.57
C3                            8c358f     accuracy                      gen     92.16
race-high                     9a54b6     accuracy                      gen     86.16
flores_100                    -          -                             -       -
winogrande                    b36770     accuracy                      gen     76.40
hellaswag                     e42710     accuracy                      gen     85.28
bbh                           -          naive_average                 gen     70.12
gsm8k                         1d7fe4     accuracy                      gen     80.67
math                          393424     accuracy                      gen     34.94
TheoremQA                     6f0af8     score                         gen     22.12
MathBench                     -          -                             -       -
openai_humaneval              8e312c     humaneval_pass@1              gen     71.34
humaneval_plus                -          -                             -       -
humanevalx                    -          -                             -       -
sanitized_mbpp                a447ff     score                         gen     70.82
mbpp_plus                     -          -                             -       -
mbpp_cn                       6fb572     score                         gen     55.80
leval                         -          -                             -       -
leval_closed                  -          -                             -       -
leval_open                    -          -                             -       -
longbench                     -          -                             -       -
longbench_single-document-qa  -          -                             -       -
longbench_multi-document-qa   -          -                             -       -
longbench_summarization       -          -                             -       -
longbench_few-shot-learning   -          -                             -       -
longbench_synthetic-tasks     -          -                             -       -
longbench_code-completion     -          -                             -       -
teval                         -          -                             -       -
teval_zh                      -          -                             -       -
IFEval                        3321a3     Prompt-level-strict-accuracy  gen     41.22
IFEval                        3321a3     Inst-level-strict-accuracy    gen     53.72
IFEval                        3321a3     Prompt-level-loose-accuracy   gen     46.21
IFEval                        3321a3     Inst-level-loose-accuracy     gen     58.51
12/05 03:33:05 - OpenCompass - INFO - write summary to /mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/20241205_031205/summary/summary_20241205_031205.txt
12/05 03:33:05 - OpenCompass - INFO - write csv to /mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/share_internvl/InternVL2-26B-Pretrain/20241205_031205/summary/summary_20241205_031205.csv
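The summary CSV written above can be post-processed into a score dictionary. A minimal sketch, assuming the CSV mirrors the printed table (columns dataset, version, metric, mode, plus one column per model abbreviation); the column layout is an assumption, not verified against the file.

import csv

summary_csv = ('/mnt/petrelfs/wangweiyun/workspace_cz/InternVL/internvl_chat_dev/'
               'share_internvl/InternVL2-26B-Pretrain/20241205_031205/summary/'
               'summary_20241205_031205.csv')

scores = {}
with open(summary_csv, newline='') as f:
    for row in csv.DictReader(f):
        value = row.get('internvl-chat-20b', '-')   # model column, assumed header
        if value not in ('-', ''):                  # skip datasets that were not run
            scores[(row['dataset'], row['metric'])] = float(value)

print(scores.get(('gsm8k', 'accuracy')))            # 80.67 in this run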