2023/06/04 16:59:33 - mmengine - INFO - 
------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.10.9 (main, Mar 8 2023, 10:47:38) [GCC 11.2.0]
    CUDA available: True
    numpy_random_seed: 449511118
    GPU 0,1,2,3: NVIDIA A100-SXM4-80GB
    CUDA_HOME: /mnt/petrelfs/share/cuda-11.6
    NVCC: Cuda compilation tools, release 11.6, V11.6.124
    GCC: gcc (GCC) 7.5.0
    PyTorch: 1.13.1
    PyTorch compiling details: PyTorch built with:
      - GCC 9.3
      - C++ Version: 201402
      - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
      - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
      - OpenMP 201511 (a.k.a. OpenMP 4.5)
      - LAPACK is enabled (usually provided by MKL)
      - NNPACK is enabled
      - CPU capability usage: AVX2
      - CUDA Runtime 11.6
      - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
      - CuDNN 8.3.2 (built against CUDA 11.5)
      - Magma 2.6.1
      - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.6, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.13.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
    TorchVision: 0.14.1
    OpenCV: 4.7.0
    MMEngine: 0.7.3

Runtime environment:
    cudnn_benchmark: True
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    deterministic: False
    Distributed launcher: slurm
    Distributed training: True
    GPU number: 4
------------------------------------------------------------

2023/06/04 16:59:38 - mmengine - INFO - Config: optim_wrapper = dict( optimizer=dict( type='AdamW', lr=0.0001, weight_decay=0.3, _scope_='mmpretrain'), paramwise_cfg=dict( custom_keys=dict({ '.cls_token': dict(decay_mult=0.0), '.pos_embed': dict(decay_mult=0.0) })), type='AmpOptimWrapper', dtype='bfloat16', clip_grad=None) param_scheduler = [ dict(type='CosineAnnealingLR', eta_min=1e-05, by_epoch=False, begin=0) ] train_cfg = dict(by_epoch=True, max_epochs=10, val_interval=1) val_cfg = dict() test_cfg = dict() auto_scale_lr = dict(base_batch_size=4096) model = dict( type='ImageClassifier', backbone=dict( frozen_stages=24, type='VisionTransformer',
arch='l', img_size=224, patch_size=14, drop_rate=0.1, pre_norm=True, final_norm=False, init_cfg=dict( type='Pretrained', checkpoint='ckpt/openclip-ViT-L-14.pth', prefix='backbone')), neck=dict( type='CLIPProjection', in_channels=1024, out_channels=768, init_cfg=dict( type='Pretrained', checkpoint='ckpt/openclip-ViT-L-14.pth', prefix='backbone')), head=dict( type='LinearClsHead', num_classes=2, in_channels=768, loss=dict(type='CrossEntropyLoss', loss_weight=1.0), init_cfg=None), init_cfg=dict( type='TruncNormal', layer=['Conv2d', 'Linear'], std=0.02, bias=0.0), train_cfg=None) dataset_type = 'CustomDataset' data_preprocessor = dict( num_classes=2, mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) bgr_mean = [103.53, 116.28, 123.675] bgr_std = [57.375, 57.12, 58.395] train_pipeline = [ dict(type='LoadImageFromFile'), dict( type='RandomResizedCrop', scale=224, backend='pillow', interpolation='bicubic'), dict(type='RandomFlip', prob=0.5, direction='horizontal'), dict(type='PackInputs') ] test_pipeline = [ dict(type='LoadImageFromFile'), dict( type='ResizeEdge', scale=256, edge='short', backend='pillow', interpolation='bicubic'), dict(type='CenterCrop', crop_size=224), dict(type='PackInputs') ] train_dataloader = dict( pin_memory=True, persistent_workers=True, collate_fn=dict(type='default_collate'), batch_size=128, num_workers=10, dataset=dict( type='ConcatDataset', datasets=[ dict( type='CustomDataset', data_root='', ann_file= '/mnt/petrelfs/luzeyu/workspace/fakebench/dataset/meta/train/stablediffusionV1-5R2-dpmsolver-25-1m.csv', pipeline=[ dict(type='LoadImageFromFile'), dict( type='RandomResizedCrop', scale=224, backend='pillow', interpolation='bicubic'), dict(type='RandomFlip', prob=0.5, direction='horizontal'), dict(type='PackInputs') ]), dict( type='CustomDataset', data_root='', ann_file= '/mnt/petrelfs/luzeyu/workspace/fakebench/dataset/meta/train/cc1m.csv', pipeline=[ dict(type='LoadImageFromFile'), dict( type='RandomResizedCrop', scale=224, backend='pillow', interpolation='bicubic'), dict(type='RandomFlip', prob=0.5, direction='horizontal'), dict(type='PackInputs') ]) ]), sampler=dict(type='DefaultSampler', shuffle=True)) val_dataloader = dict( pin_memory=True, persistent_workers=True, collate_fn=dict(type='default_collate'), batch_size=128, num_workers=10, dataset=dict( type='ConcatDataset', datasets=[ dict( type='CustomDataset', data_root='/mnt/petrelfs/luzeyu/workspace/fakebench/dataset', ann_file= '/mnt/petrelfs/luzeyu/workspace/fakebench/dataset/meta/val/stablediffusionV1-5R2-dpmsolver-25-1w.tsv', pipeline=[ dict(type='LoadImageFromFile'), dict( type='RandomResizedCrop', scale=224, backend='pillow', interpolation='bicubic'), dict(type='RandomFlip', prob=0.5, direction='horizontal'), dict(type='PackInputs') ]), dict( type='CustomDataset', data_root='', ann_file= '/mnt/petrelfs/luzeyu/workspace/fakebench/dataset/meta/val/cc1w.csv', pipeline=[ dict(type='LoadImageFromFile'), dict( type='RandomResizedCrop', scale=224, backend='pillow', interpolation='bicubic'), dict(type='RandomFlip', prob=0.5, direction='horizontal'), dict(type='PackInputs') ]) ]), sampler=dict(type='DefaultSampler', shuffle=False)) val_evaluator = [ dict(type='Accuracy', topk=1), dict(type='SingleLabelMetric', average=None) ] test_dataloader = dict( pin_memory=True, persistent_workers=True, collate_fn=dict(type='default_collate'), batch_size=128, num_workers=10, dataset=dict( type='ConcatDataset', datasets=[ dict( type='CustomDataset', 
data_root='/mnt/petrelfs/luzeyu/workspace/fakebench/dataset', ann_file= '/mnt/petrelfs/luzeyu/workspace/fakebench/dataset/meta/val/stablediffusionV1-5R2-dpmsolver-25-1w.tsv', pipeline=[ dict(type='LoadImageFromFile'), dict( type='RandomResizedCrop', scale=224, backend='pillow', interpolation='bicubic'), dict(type='RandomFlip', prob=0.5, direction='horizontal'), dict(type='PackInputs') ]), dict( type='CustomDataset', data_root='', ann_file= '/mnt/petrelfs/luzeyu/workspace/fakebench/dataset/meta/val/cc1w.csv', pipeline=[ dict(type='LoadImageFromFile'), dict( type='RandomResizedCrop', scale=224, backend='pillow', interpolation='bicubic'), dict(type='RandomFlip', prob=0.5, direction='horizontal'), dict(type='PackInputs') ]) ]), sampler=dict(type='DefaultSampler', shuffle=False)) test_evaluator = [ dict(type='Accuracy', topk=1), dict(type='SingleLabelMetric', average=None) ] custom_hooks = [dict(type='EMAHook', momentum=0.0001, priority='ABOVE_NORMAL')] default_scope = 'mmpretrain' default_hooks = dict( timer=dict(type='IterTimerHook'), logger=dict(type='LoggerHook', interval=100), param_scheduler=dict(type='ParamSchedulerHook'), checkpoint=dict(type='CheckpointHook', interval=1), sampler_seed=dict(type='DistSamplerSeedHook'), visualization=dict(type='VisualizationHook', enable=True)) env_cfg = dict( cudnn_benchmark=True, mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), dist_cfg=dict(backend='nccl')) vis_backends = [dict(type='LocalVisBackend')] visualizer = dict( type='UniversalVisualizer', vis_backends=[ dict(type='LocalVisBackend'), dict(type='TensorboardVisBackend') ]) log_level = 'INFO' load_from = None resume = False randomness = dict(seed=None, deterministic=False) launcher = 'slurm' work_dir = 'workdir/clip_large_pretrain_4x256_sdv1_lr1e-4_wopostnorm_wneck' 2023/06/04 16:59:51 - mmengine - INFO - Hooks will be executed in the following order: before_run: (VERY_HIGH ) RuntimeInfoHook (ABOVE_NORMAL) EMAHook (BELOW_NORMAL) LoggerHook -------------------- after_load_checkpoint: (ABOVE_NORMAL) EMAHook -------------------- before_train: (VERY_HIGH ) RuntimeInfoHook (ABOVE_NORMAL) EMAHook (NORMAL ) IterTimerHook (VERY_LOW ) CheckpointHook -------------------- before_train_epoch: (VERY_HIGH ) RuntimeInfoHook (NORMAL ) IterTimerHook (NORMAL ) DistSamplerSeedHook -------------------- before_train_iter: (VERY_HIGH ) RuntimeInfoHook (NORMAL ) IterTimerHook -------------------- after_train_iter: (VERY_HIGH ) RuntimeInfoHook (ABOVE_NORMAL) EMAHook (NORMAL ) IterTimerHook (BELOW_NORMAL) LoggerHook (LOW ) ParamSchedulerHook (VERY_LOW ) CheckpointHook -------------------- after_train_epoch: (NORMAL ) IterTimerHook (LOW ) ParamSchedulerHook (VERY_LOW ) CheckpointHook -------------------- before_val_epoch: (ABOVE_NORMAL) EMAHook (NORMAL ) IterTimerHook -------------------- before_val_iter: (NORMAL ) IterTimerHook -------------------- after_val_iter: (NORMAL ) IterTimerHook (NORMAL ) VisualizationHook (BELOW_NORMAL) LoggerHook -------------------- after_val_epoch: (VERY_HIGH ) RuntimeInfoHook (ABOVE_NORMAL) EMAHook (NORMAL ) IterTimerHook (BELOW_NORMAL) LoggerHook (LOW ) ParamSchedulerHook (VERY_LOW ) CheckpointHook -------------------- before_save_checkpoint: (ABOVE_NORMAL) EMAHook -------------------- after_train: (VERY_LOW ) CheckpointHook -------------------- before_test_epoch: (ABOVE_NORMAL) EMAHook (NORMAL ) IterTimerHook -------------------- before_test_iter: (NORMAL ) IterTimerHook -------------------- after_test_iter: (NORMAL ) IterTimerHook (NORMAL ) VisualizationHook (BELOW_NORMAL) 
LoggerHook -------------------- after_test_epoch: (VERY_HIGH ) RuntimeInfoHook (ABOVE_NORMAL) EMAHook (NORMAL ) IterTimerHook (BELOW_NORMAL) LoggerHook -------------------- after_run: (BELOW_NORMAL) LoggerHook -------------------- 2023/06/04 17:00:10 - mmengine - INFO - load backbone in model from: ckpt/openclip-ViT-L-14.pth 2023/06/04 17:00:11 - mmengine - WARNING - The model and loaded state dict do not match exactly unexpected key in source state_dict: ln1.weight, ln1.bias 2023/06/04 17:00:11 - mmengine - INFO - load backbone in model from: ckpt/openclip-ViT-L-14.pth 2023/06/04 17:00:13 - mmengine - WARNING - The model and loaded state dict do not match exactly unexpected key in source state_dict: cls_token, pos_embed, patch_embed.projection.weight, pre_norm.weight, pre_norm.bias, layers.0.ln1.weight, layers.0.ln1.bias, layers.0.attn.qkv.weight, layers.0.attn.qkv.bias, layers.0.attn.proj.weight, layers.0.attn.proj.bias, layers.0.ln2.weight, layers.0.ln2.bias, layers.0.ffn.layers.0.0.weight, layers.0.ffn.layers.0.0.bias, layers.0.ffn.layers.1.weight, layers.0.ffn.layers.1.bias, layers.1.ln1.weight, layers.1.ln1.bias, layers.1.attn.qkv.weight, layers.1.attn.qkv.bias, layers.1.attn.proj.weight, layers.1.attn.proj.bias, layers.1.ln2.weight, layers.1.ln2.bias, layers.1.ffn.layers.0.0.weight, layers.1.ffn.layers.0.0.bias, layers.1.ffn.layers.1.weight, layers.1.ffn.layers.1.bias, layers.2.ln1.weight, layers.2.ln1.bias, layers.2.attn.qkv.weight, layers.2.attn.qkv.bias, layers.2.attn.proj.weight, layers.2.attn.proj.bias, layers.2.ln2.weight, layers.2.ln2.bias, layers.2.ffn.layers.0.0.weight, layers.2.ffn.layers.0.0.bias, layers.2.ffn.layers.1.weight, layers.2.ffn.layers.1.bias, layers.3.ln1.weight, layers.3.ln1.bias, layers.3.attn.qkv.weight, layers.3.attn.qkv.bias, layers.3.attn.proj.weight, layers.3.attn.proj.bias, layers.3.ln2.weight, layers.3.ln2.bias, layers.3.ffn.layers.0.0.weight, layers.3.ffn.layers.0.0.bias, layers.3.ffn.layers.1.weight, layers.3.ffn.layers.1.bias, layers.4.ln1.weight, layers.4.ln1.bias, layers.4.attn.qkv.weight, layers.4.attn.qkv.bias, layers.4.attn.proj.weight, layers.4.attn.proj.bias, layers.4.ln2.weight, layers.4.ln2.bias, layers.4.ffn.layers.0.0.weight, layers.4.ffn.layers.0.0.bias, layers.4.ffn.layers.1.weight, layers.4.ffn.layers.1.bias, layers.5.ln1.weight, layers.5.ln1.bias, layers.5.attn.qkv.weight, layers.5.attn.qkv.bias, layers.5.attn.proj.weight, layers.5.attn.proj.bias, layers.5.ln2.weight, layers.5.ln2.bias, layers.5.ffn.layers.0.0.weight, layers.5.ffn.layers.0.0.bias, layers.5.ffn.layers.1.weight, layers.5.ffn.layers.1.bias, layers.6.ln1.weight, layers.6.ln1.bias, layers.6.attn.qkv.weight, layers.6.attn.qkv.bias, layers.6.attn.proj.weight, layers.6.attn.proj.bias, layers.6.ln2.weight, layers.6.ln2.bias, layers.6.ffn.layers.0.0.weight, layers.6.ffn.layers.0.0.bias, layers.6.ffn.layers.1.weight, layers.6.ffn.layers.1.bias, layers.7.ln1.weight, layers.7.ln1.bias, layers.7.attn.qkv.weight, layers.7.attn.qkv.bias, layers.7.attn.proj.weight, layers.7.attn.proj.bias, layers.7.ln2.weight, layers.7.ln2.bias, layers.7.ffn.layers.0.0.weight, layers.7.ffn.layers.0.0.bias, layers.7.ffn.layers.1.weight, layers.7.ffn.layers.1.bias, layers.8.ln1.weight, layers.8.ln1.bias, layers.8.attn.qkv.weight, layers.8.attn.qkv.bias, layers.8.attn.proj.weight, layers.8.attn.proj.bias, layers.8.ln2.weight, layers.8.ln2.bias, layers.8.ffn.layers.0.0.weight, layers.8.ffn.layers.0.0.bias, layers.8.ffn.layers.1.weight, layers.8.ffn.layers.1.bias, layers.9.ln1.weight, layers.9.ln1.bias, 
layers.9.attn.qkv.weight, layers.9.attn.qkv.bias, layers.9.attn.proj.weight, layers.9.attn.proj.bias, layers.9.ln2.weight, layers.9.ln2.bias, layers.9.ffn.layers.0.0.weight, layers.9.ffn.layers.0.0.bias, layers.9.ffn.layers.1.weight, layers.9.ffn.layers.1.bias, layers.10.ln1.weight, layers.10.ln1.bias, layers.10.attn.qkv.weight, layers.10.attn.qkv.bias, layers.10.attn.proj.weight, layers.10.attn.proj.bias, layers.10.ln2.weight, layers.10.ln2.bias, layers.10.ffn.layers.0.0.weight, layers.10.ffn.layers.0.0.bias, layers.10.ffn.layers.1.weight, layers.10.ffn.layers.1.bias, layers.11.ln1.weight, layers.11.ln1.bias, layers.11.attn.qkv.weight, layers.11.attn.qkv.bias, layers.11.attn.proj.weight, layers.11.attn.proj.bias, layers.11.ln2.weight, layers.11.ln2.bias, layers.11.ffn.layers.0.0.weight, layers.11.ffn.layers.0.0.bias, layers.11.ffn.layers.1.weight, layers.11.ffn.layers.1.bias, layers.12.ln1.weight, layers.12.ln1.bias, layers.12.attn.qkv.weight, layers.12.attn.qkv.bias, layers.12.attn.proj.weight, layers.12.attn.proj.bias, layers.12.ln2.weight, layers.12.ln2.bias, layers.12.ffn.layers.0.0.weight, layers.12.ffn.layers.0.0.bias, layers.12.ffn.layers.1.weight, layers.12.ffn.layers.1.bias, layers.13.ln1.weight, layers.13.ln1.bias, layers.13.attn.qkv.weight, layers.13.attn.qkv.bias, layers.13.attn.proj.weight, layers.13.attn.proj.bias, layers.13.ln2.weight, layers.13.ln2.bias, layers.13.ffn.layers.0.0.weight, layers.13.ffn.layers.0.0.bias, layers.13.ffn.layers.1.weight, layers.13.ffn.layers.1.bias, layers.14.ln1.weight, layers.14.ln1.bias, layers.14.attn.qkv.weight, layers.14.attn.qkv.bias, layers.14.attn.proj.weight, layers.14.attn.proj.bias, layers.14.ln2.weight, layers.14.ln2.bias, layers.14.ffn.layers.0.0.weight, layers.14.ffn.layers.0.0.bias, layers.14.ffn.layers.1.weight, layers.14.ffn.layers.1.bias, layers.15.ln1.weight, layers.15.ln1.bias, layers.15.attn.qkv.weight, layers.15.attn.qkv.bias, layers.15.attn.proj.weight, layers.15.attn.proj.bias, layers.15.ln2.weight, layers.15.ln2.bias, layers.15.ffn.layers.0.0.weight, layers.15.ffn.layers.0.0.bias, layers.15.ffn.layers.1.weight, layers.15.ffn.layers.1.bias, layers.16.ln1.weight, layers.16.ln1.bias, layers.16.attn.qkv.weight, layers.16.attn.qkv.bias, layers.16.attn.proj.weight, layers.16.attn.proj.bias, layers.16.ln2.weight, layers.16.ln2.bias, layers.16.ffn.layers.0.0.weight, layers.16.ffn.layers.0.0.bias, layers.16.ffn.layers.1.weight, layers.16.ffn.layers.1.bias, layers.17.ln1.weight, layers.17.ln1.bias, layers.17.attn.qkv.weight, layers.17.attn.qkv.bias, layers.17.attn.proj.weight, layers.17.attn.proj.bias, layers.17.ln2.weight, layers.17.ln2.bias, layers.17.ffn.layers.0.0.weight, layers.17.ffn.layers.0.0.bias, layers.17.ffn.layers.1.weight, layers.17.ffn.layers.1.bias, layers.18.ln1.weight, layers.18.ln1.bias, layers.18.attn.qkv.weight, layers.18.attn.qkv.bias, layers.18.attn.proj.weight, layers.18.attn.proj.bias, layers.18.ln2.weight, layers.18.ln2.bias, layers.18.ffn.layers.0.0.weight, layers.18.ffn.layers.0.0.bias, layers.18.ffn.layers.1.weight, layers.18.ffn.layers.1.bias, layers.19.ln1.weight, layers.19.ln1.bias, layers.19.attn.qkv.weight, layers.19.attn.qkv.bias, layers.19.attn.proj.weight, layers.19.attn.proj.bias, layers.19.ln2.weight, layers.19.ln2.bias, layers.19.ffn.layers.0.0.weight, layers.19.ffn.layers.0.0.bias, layers.19.ffn.layers.1.weight, layers.19.ffn.layers.1.bias, layers.20.ln1.weight, layers.20.ln1.bias, layers.20.attn.qkv.weight, layers.20.attn.qkv.bias, layers.20.attn.proj.weight, layers.20.attn.proj.bias, 
layers.20.ln2.weight, layers.20.ln2.bias, layers.20.ffn.layers.0.0.weight, layers.20.ffn.layers.0.0.bias, layers.20.ffn.layers.1.weight, layers.20.ffn.layers.1.bias, layers.21.ln1.weight, layers.21.ln1.bias, layers.21.attn.qkv.weight, layers.21.attn.qkv.bias, layers.21.attn.proj.weight, layers.21.attn.proj.bias, layers.21.ln2.weight, layers.21.ln2.bias, layers.21.ffn.layers.0.0.weight, layers.21.ffn.layers.0.0.bias, layers.21.ffn.layers.1.weight, layers.21.ffn.layers.1.bias, layers.22.ln1.weight, layers.22.ln1.bias, layers.22.attn.qkv.weight, layers.22.attn.qkv.bias, layers.22.attn.proj.weight, layers.22.attn.proj.bias, layers.22.ln2.weight, layers.22.ln2.bias, layers.22.ffn.layers.0.0.weight, layers.22.ffn.layers.0.0.bias, layers.22.ffn.layers.1.weight, layers.22.ffn.layers.1.bias, layers.23.ln1.weight, layers.23.ln1.bias, layers.23.attn.qkv.weight, layers.23.attn.qkv.bias, layers.23.attn.proj.weight, layers.23.attn.proj.bias, layers.23.ln2.weight, layers.23.ln2.bias, layers.23.ffn.layers.0.0.weight, layers.23.ffn.layers.0.0.bias, layers.23.ffn.layers.1.weight, layers.23.ffn.layers.1.bias, ln1.weight, ln1.bias missing keys in source state_dict: proj Name of parameter - Initialization information backbone.cls_token - torch.Size([1, 1, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.pos_embed - torch.Size([1, 257, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.patch_embed.projection.weight - torch.Size([1024, 3, 14, 14]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.0.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.ln2.weight - 
torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.1.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.2.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.3.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.attn.qkv.weight - torch.Size([3072, 1024]): 
PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.4.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.5.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.6.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.6.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.6.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.6.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.6.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.6.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.6.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.6.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.6.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.6.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.6.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load 
from ckpt/openclip-ViT-L-14.pth backbone.layers.6.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.7.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.8.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.9.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.9.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.9.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.9.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.9.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.9.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.9.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth 
backbone.layers.9.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.9.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.9.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.9.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.9.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.10.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.11.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.12.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.12.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.12.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from 
ckpt/openclip-ViT-L-14.pth backbone.layers.12.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.12.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.12.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.12.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.12.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.12.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.12.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.12.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.12.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.13.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.ffn.layers.1.weight - torch.Size([1024, 4096]): 
PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.14.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.15.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.16.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.ln2.weight - torch.Size([1024]): 
PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.17.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.18.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.19.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.attn.qkv.weight - torch.Size([3072, 
1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.20.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.21.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.ffn.layers.1.weight - 
torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.22.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.ln1.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.ln1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.attn.qkv.weight - torch.Size([3072, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.attn.qkv.bias - torch.Size([3072]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.attn.proj.weight - torch.Size([1024, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.attn.proj.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.ln2.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.ln2.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.ffn.layers.0.0.weight - torch.Size([4096, 1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.ffn.layers.0.0.bias - torch.Size([4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.ffn.layers.1.weight - torch.Size([1024, 4096]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.layers.23.ffn.layers.1.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.pre_norm.weight - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth backbone.pre_norm.bias - torch.Size([1024]): PretrainedInit: load from ckpt/openclip-ViT-L-14.pth neck.proj - torch.Size([1024, 768]): The value is the same before and after calling `init_weights` of ImageClassifier head.fc.weight - torch.Size([2, 768]): TruncNormalInit: a=-2, b=2, mean=0, std=0.02, bias=0.0 head.fc.bias - torch.Size([2]): TruncNormalInit: a=-2, b=2, mean=0, std=0.02, bias=0.0 2023/06/04 17:00:13 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io 2023/06/04 17:00:13 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future. 2023/06/04 17:00:13 - mmengine - INFO - Checkpoints will be saved to /mnt/petrelfs/luzeyu/workspace/fakebench/mmpretrain/workdir/clip_large_pretrain_4x256_sdv1_lr1e-4_wopostnorm_wneck. 
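Taken together, the config and the initialization summary above describe a frozen-backbone probe: the ViT-L/14 backbone loads the OpenCLIP weights and is frozen (`frozen_stages=24`), the `CLIPProjection` neck and the 2-class `LinearClsHead` are the trainable parts, `.cls_token`/`.pos_embed` get `decay_mult=0.0`, and optimization runs under a bfloat16 `AmpOptimWrapper` with AdamW (lr=1e-4, weight_decay=0.3). Below is a minimal plain-PyTorch sketch of that optimizer setup, using a toy stand-in model rather than the actual mmpretrain classes (names and shapes are copied from the log only for illustration):

```python
import torch
from torch import nn

# Toy stand-in for the ImageClassifier above: a frozen "backbone", a trainable
# projection neck and a 2-way linear head. Not the mmpretrain implementation.
class TinyProbe(nn.Module):
    def __init__(self, embed_dim=1024, proj_dim=768, num_classes=2):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, 257, embed_dim))
        self.backbone = nn.Linear(embed_dim, embed_dim)        # placeholder for the ViT-L blocks
        self.neck = nn.Linear(embed_dim, proj_dim, bias=False)  # CLIP-style projection
        self.head = nn.Linear(proj_dim, num_classes)

model = TinyProbe()

# frozen_stages=24: the transformer blocks take no gradient updates.
for p in model.backbone.parameters():
    p.requires_grad = False

# paramwise_cfg: decay_mult=0.0 for .cls_token / .pos_embed, weight_decay=0.3 elsewhere.
decay, no_decay = [], []
for name, p in model.named_parameters():
    if not p.requires_grad:
        continue
    (no_decay if name in ("cls_token", "pos_embed") else decay).append(p)

optimizer = torch.optim.AdamW(
    [{"params": decay, "weight_decay": 0.3},
     {"params": no_decay, "weight_decay": 0.0}],
    lr=1e-4)

# AmpOptimWrapper(dtype='bfloat16'): forward/backward under bf16 autocast;
# unlike fp16, bf16 needs no loss scaling.
x, y = torch.randn(8, 1024), torch.randint(0, 2, (8,))
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = nn.functional.cross_entropy(model.head(model.neck(model.backbone(x))), y)
loss.backward()
optimizer.step()
```

Note the warnings above: the neck points at the same checkpoint with prefix='backbone', so its `proj` weight is reported as missing and keeps the value assigned by `init_weights`, while the backbone load reports the final `ln1.*` keys as unexpected because `final_norm=False`.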
2023/06/04 17:01:24 - mmengine - INFO - Epoch(train) [1][ 100/3907] lr: 9.9999e-05 eta: 7:45:19 time: 0.6274 data_time: 0.0020 memory: 44139 loss: 0.5037 2023/06/04 17:02:27 - mmengine - INFO - Epoch(train) [1][ 200/3907] lr: 9.9994e-05 eta: 7:15:41 time: 0.6289 data_time: 0.0017 memory: 44139 loss: 0.4562 2023/06/04 17:03:30 - mmengine - INFO - Epoch(train) [1][ 300/3907] lr: 9.9987e-05 eta: 7:05:18 time: 0.6298 data_time: 0.0015 memory: 44139 loss: 0.4655 2023/06/04 17:04:33 - mmengine - INFO - Epoch(train) [1][ 400/3907] lr: 9.9977e-05 eta: 7:00:01 time: 0.6311 data_time: 0.0014 memory: 44139 loss: 0.4383 2023/06/04 17:05:36 - mmengine - INFO - Epoch(train) [1][ 500/3907] lr: 9.9964e-05 eta: 6:56:08 time: 0.6297 data_time: 0.0015 memory: 44139 loss: 0.4066 2023/06/04 17:06:39 - mmengine - INFO - Epoch(train) [1][ 600/3907] lr: 9.9948e-05 eta: 6:53:10 time: 0.6302 data_time: 0.0015 memory: 44139 loss: 0.4381 2023/06/04 17:07:42 - mmengine - INFO - Epoch(train) [1][ 700/3907] lr: 9.9929e-05 eta: 6:50:50 time: 0.6333 data_time: 0.0016 memory: 44139 loss: 0.4434 2023/06/04 17:08:45 - mmengine - INFO - Epoch(train) [1][ 800/3907] lr: 9.9907e-05 eta: 6:48:49 time: 0.6296 data_time: 0.0019 memory: 44139 loss: 0.4539 2023/06/04 17:09:48 - mmengine - INFO - Epoch(train) [1][ 900/3907] lr: 9.9882e-05 eta: 6:46:58 time: 0.6299 data_time: 0.0017 memory: 44139 loss: 0.4135 2023/06/04 17:10:51 - mmengine - INFO - Exp name: clip_large_pretrain_4x256_sdv1_lr1e-4_20230604_165929 2023/06/04 17:10:51 - mmengine - INFO - Epoch(train) [1][1000/3907] lr: 9.9855e-05 eta: 6:45:16 time: 0.6296 data_time: 0.0017 memory: 44139 loss: 0.3775 2023/06/04 17:11:54 - mmengine - INFO - Epoch(train) [1][1100/3907] lr: 9.9824e-05 eta: 6:43:42 time: 0.6292 data_time: 0.0016 memory: 44139 loss: 0.3822 2023/06/04 17:12:57 - mmengine - INFO - Epoch(train) [1][1200/3907] lr: 9.9791e-05 eta: 6:42:12 time: 0.6307 data_time: 0.0015 memory: 44139 loss: 0.4045 2023/06/04 17:14:00 - mmengine - INFO - Epoch(train) [1][1300/3907] lr: 9.9755e-05 eta: 6:40:46 time: 0.6284 data_time: 0.0014 memory: 44139 loss: 0.4228 2023/06/04 17:15:03 - mmengine - INFO - Epoch(train) [1][1400/3907] lr: 9.9716e-05 eta: 6:39:22 time: 0.6284 data_time: 0.0017 memory: 44139 loss: 0.3893 2023/06/04 17:16:06 - mmengine - INFO - Epoch(train) [1][1500/3907] lr: 9.9674e-05 eta: 6:38:02 time: 0.6301 data_time: 0.0016 memory: 44139 loss: 0.3855 2023/06/04 17:17:09 - mmengine - INFO - Epoch(train) [1][1600/3907] lr: 9.9629e-05 eta: 6:36:47 time: 0.6283 data_time: 0.0015 memory: 44139 loss: 0.3852 2023/06/04 17:18:12 - mmengine - INFO - Epoch(train) [1][1700/3907] lr: 9.9581e-05 eta: 6:35:29 time: 0.6285 data_time: 0.0016 memory: 44139 loss: 0.3946 2023/06/04 17:19:15 - mmengine - INFO - Epoch(train) [1][1800/3907] lr: 9.9530e-05 eta: 6:34:13 time: 0.6283 data_time: 0.0015 memory: 44139 loss: 0.3993 2023/06/04 17:20:18 - mmengine - INFO - Epoch(train) [1][1900/3907] lr: 9.9476e-05 eta: 6:32:57 time: 0.6278 data_time: 0.0015 memory: 44139 loss: 0.3996 2023/06/04 17:21:21 - mmengine - INFO - Exp name: clip_large_pretrain_4x256_sdv1_lr1e-4_20230604_165929 2023/06/04 17:21:21 - mmengine - INFO - Epoch(train) [1][2000/3907] lr: 9.9420e-05 eta: 6:31:42 time: 0.6274 data_time: 0.0018 memory: 44139 loss: 0.3753 2023/06/04 17:22:24 - mmengine - INFO - Epoch(train) [1][2100/3907] lr: 9.9361e-05 eta: 6:30:29 time: 0.6294 data_time: 0.0015 memory: 44139 loss: 0.3721 2023/06/04 17:23:26 - mmengine - INFO - Epoch(train) [1][2200/3907] lr: 9.9298e-05 eta: 6:29:17 time: 0.6286 
data_time: 0.0015 memory: 44139 loss: 0.3699 2023/06/04 17:24:29 - mmengine - INFO - Epoch(train) [1][2300/3907] lr: 9.9233e-05 eta: 6:28:05 time: 0.6286 data_time: 0.0015 memory: 44139 loss: 0.3799 2023/06/04 17:25:32 - mmengine - INFO - Epoch(train) [1][2400/3907] lr: 9.9165e-05 eta: 6:26:54 time: 0.6274 data_time: 0.0015 memory: 44139 loss: 0.3607 2023/06/04 17:26:35 - mmengine - INFO - Epoch(train) [1][2500/3907] lr: 9.9095e-05 eta: 6:25:45 time: 0.6284 data_time: 0.0016 memory: 44139 loss: 0.3979 2023/06/04 17:27:38 - mmengine - INFO - Epoch(train) [1][2600/3907] lr: 9.9021e-05 eta: 6:24:36 time: 0.6287 data_time: 0.0014 memory: 44139 loss: 0.3836 2023/06/04 17:28:41 - mmengine - INFO - Epoch(train) [1][2700/3907] lr: 9.8944e-05 eta: 6:23:27 time: 0.6277 data_time: 0.0016 memory: 44139 loss: 0.3638 2023/06/04 17:29:44 - mmengine - INFO - Epoch(train) [1][2800/3907] lr: 9.8865e-05 eta: 6:22:18 time: 0.6289 data_time: 0.0017 memory: 44139 loss: 0.3390 2023/06/04 17:30:46 - mmengine - INFO - Epoch(train) [1][2900/3907] lr: 9.8783e-05 eta: 6:21:10 time: 0.6275 data_time: 0.0015 memory: 44139 loss: 0.3645 2023/06/04 17:31:49 - mmengine - INFO - Exp name: clip_large_pretrain_4x256_sdv1_lr1e-4_20230604_165929 2023/06/04 17:31:49 - mmengine - INFO - Epoch(train) [1][3000/3907] lr: 9.8698e-05 eta: 6:20:01 time: 0.6274 data_time: 0.0014 memory: 44139 loss: 0.3369 2023/06/04 17:32:52 - mmengine - INFO - Epoch(train) [1][3100/3907] lr: 9.8610e-05 eta: 6:18:53 time: 0.6272 data_time: 0.0015 memory: 44139 loss: 0.3641 2023/06/04 17:33:55 - mmengine - INFO - Epoch(train) [1][3200/3907] lr: 9.8519e-05 eta: 6:17:45 time: 0.6285 data_time: 0.0020 memory: 44139 loss: 0.3568 2023/06/04 17:34:58 - mmengine - INFO - Epoch(train) [1][3300/3907] lr: 9.8426e-05 eta: 6:16:38 time: 0.6293 data_time: 0.0015 memory: 44139 loss: 0.3258 2023/06/04 17:36:00 - mmengine - INFO - Epoch(train) [1][3400/3907] lr: 9.8330e-05 eta: 6:15:31 time: 0.6287 data_time: 0.0015 memory: 44139 loss: 0.3230 2023/06/04 17:37:03 - mmengine - INFO - Epoch(train) [1][3500/3907] lr: 9.8231e-05 eta: 6:14:25 time: 0.6282 data_time: 0.0016 memory: 44139 loss: 0.3450 2023/06/04 17:38:06 - mmengine - INFO - Epoch(train) [1][3600/3907] lr: 9.8129e-05 eta: 6:13:19 time: 0.6286 data_time: 0.0017 memory: 44139 loss: 0.3551 2023/06/04 17:39:09 - mmengine - INFO - Epoch(train) [1][3700/3907] lr: 9.8024e-05 eta: 6:12:13 time: 0.6288 data_time: 0.0017 memory: 44139 loss: 0.3679 2023/06/04 17:40:12 - mmengine - INFO - Epoch(train) [1][3800/3907] lr: 9.7917e-05 eta: 6:11:07 time: 0.6290 data_time: 0.0015 memory: 44139 loss: 0.3537 2023/06/04 17:41:15 - mmengine - INFO - Epoch(train) [1][3900/3907] lr: 9.7806e-05 eta: 6:10:02 time: 0.6274 data_time: 0.0015 memory: 44139 loss: 0.3821 2023/06/04 17:41:19 - mmengine - INFO - Exp name: clip_large_pretrain_4x256_sdv1_lr1e-4_20230604_165929 2023/06/04 17:41:19 - mmengine - INFO - Saving checkpoint at 1 epochs 2023/06/04 17:42:56 - mmengine - INFO - Epoch(val) [1][57/57] accuracy/top1: 81.3020 single-label/precision_classwise: [79.19664764404297, 84.79105377197266] single-label/recall_classwise: [89.61555480957031, 71.09302520751953] single-label/f1-score_classwise: [84.08457946777344, 77.34019470214844] data_time: 0.0441 time: 1.3513 2023/06/04 17:44:03 - mmengine - INFO - Exp name: clip_large_pretrain_4x256_sdv1_lr1e-4_20230604_165929 2023/06/04 17:44:08 - mmengine - INFO - Epoch(train) [2][ 100/3907] lr: 9.7685e-05 eta: 6:10:04 time: 0.6321 data_time: 0.0019 memory: 44139 loss: 0.3356 2023/06/04 17:45:11 - 
mmengine - INFO - Epoch(train) [2][ 200/3907] lr: 9.7570e-05 eta: 6:08:59 time: 0.6314 data_time: 0.0018 memory: 44138 loss: 0.3296 2023/06/04 17:46:14 - mmengine - INFO - Epoch(train) [2][ 300/3907] lr: 9.7451e-05 eta: 6:07:55 time: 0.6315 data_time: 0.0017 memory: 44138 loss: 0.3412 2023/06/04 17:47:17 - mmengine - INFO - Epoch(train) [2][ 400/3907] lr: 9.7329e-05 eta: 6:06:50 time: 0.6293 data_time: 0.0018 memory: 44138 loss: 0.3488 2023/06/04 17:48:21 - mmengine - INFO - Epoch(train) [2][ 500/3907] lr: 9.7205e-05 eta: 6:05:50 time: 0.6305 data_time: 0.0016 memory: 44138 loss: 0.3149 2023/06/04 17:49:24 - mmengine - INFO - Epoch(train) [2][ 600/3907] lr: 9.7078e-05 eta: 6:04:44 time: 0.6295 data_time: 0.0014 memory: 44138 loss: 0.3519 2023/06/04 17:50:27 - mmengine - INFO - Epoch(train) [2][ 700/3907] lr: 9.6949e-05 eta: 6:03:38 time: 0.6305 data_time: 0.0015 memory: 44138 loss: 0.3351 2023/06/04 17:51:30 - mmengine - INFO - Epoch(train) [2][ 800/3907] lr: 9.6816e-05 eta: 6:02:33 time: 0.6291 data_time: 0.0014 memory: 44138 loss: 0.3389 2023/06/04 17:52:33 - mmengine - INFO - Epoch(train) [2][ 900/3907] lr: 9.6681e-05 eta: 6:01:27 time: 0.6307 data_time: 0.0016 memory: 44138 loss: 0.3430 2023/06/04 17:53:36 - mmengine - INFO - Epoch(train) [2][1000/3907] lr: 9.6544e-05 eta: 6:00:23 time: 0.6299 data_time: 0.0016 memory: 44138 loss: 0.3348
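For reference, the learning rates in these records follow the per-iteration cosine schedule from the config (`CosineAnnealingLR`, `by_epoch=False`, `eta_min=1e-5`), decaying from the base lr of 1e-4. With 4 GPUs at `batch_size=128`, each iteration covers a global batch of 512 images, so the 3907 iterations per epoch are consistent with the two roughly 1M-image training CSVs. A small pure-Python sketch that reproduces the logged lr values, assuming the annealing period spans the full 10 x 3907 = 39,070 training iterations:

```python
import math

base_lr, eta_min = 1e-4, 1e-5
iters_per_epoch, max_epochs = 3907, 10
total_iters = iters_per_epoch * max_epochs  # 39070

def lr_at(step: int) -> float:
    """Closed-form cosine annealing from base_lr to eta_min over total_iters."""
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * step / total_iters))

print(f"{lr_at(100):.4e}")   # 9.9999e-05 -> matches Epoch(train) [1][ 100/3907]
print(f"{lr_at(500):.4e}")   # 9.9964e-05 -> matches [1][ 500/3907]
print(f"{lr_at(1000):.4e}")  # 9.9855e-05 -> matches [1][1000/3907]

# Rough ETA from the logged per-iteration time of ~0.63 s/iter.
remaining = total_iters - 100
print(f"{remaining * 0.63 / 3600:.1f} h")  # ~6.8 h of pure training; the logged eta
                                           # also folds in startup and validation time
```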