README.md, 1.52 kB, last commit: "Create README.md" (1f3cb2d, verified)
(unnamed file), 76 Bytes, last commit: "initial commit"
languagebind_model.pt
Detected Pickle imports (36):
- models.languagebind.image.configuration_image.CLIPTextConfig
- models.languagebind.audio.modeling_audio.CLIPEncoder
- torch._utils._rebuild_parameter
- transformers.models.clip.modeling_clip.CLIPTextEmbeddings
- models.languagebind.audio.configuration_audio.LanguageBindAudioConfig
- torch.nn.modules.linear.Linear
- transformers.models.clip.modeling_clip.CLIPVisionEmbeddings
- models.languagebind.audio.configuration_audio.CLIPVisionConfig
- models.languagebind.image.modeling_image.CLIPEncoderLayer
- torch.nn.modules.normalization.LayerNorm
- models.languagebind.image.modeling_image.CLIPVisionTransformer
- models.languagebind.image.modeling_image.PatchDropout
- transformers.activations.GELUActivation
- torch.float32
- models.languagebind.image.modeling_image.CLIPTextTransformer
- __builtin__.set
- models.languagebind.audio.modeling_audio.CLIPEncoderLayer
- torch.nn.modules.container.ModuleDict
- models.languagebind.audio.modeling_audio.CLIPVisionTransformer
- torch._C._nn.gelu
- models.languagebind.audio.modeling_audio.PatchDropout
- transformers.models.clip.modeling_clip.CLIPMLP
- models.model.CustomClassifier
- models.languagebind.image.configuration_image.CLIPVisionConfig
- torch.nn.modules.container.ModuleList
- models.languagebind.audio.configuration_audio.CLIPTextConfig
- models.model.MultiModalClassifier
- torch.nn.modules.conv.Conv2d
- torch.nn.modules.sparse.Embedding
- torch._utils._rebuild_tensor_v2
- collections.OrderedDict
- torch.LongStorage
- models.languagebind.image.configuration_image.LanguageBindImageConfig
- transformers.models.clip.modeling_clip.CLIPAttention
- models.languagebind.image.modeling_image.CLIPEncoder
- torch.FloatStorage
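The import list above is produced by a static scan of the pickle's opcode stream: every `GLOBAL`/`STACK_GLOBAL` opcode names a `module.attr` the unpickler would import and call, which is why a `.pt` checkpoint can execute arbitrary code when loaded. A minimal sketch of such a scanner using only the standard library (the helper name `list_pickle_imports` is my own, not part of any scanner's API):

```python
import pickletools


def list_pickle_imports(data: bytes) -> set[str]:
    """Statically collect every module.attr a pickle stream would import.

    Only the opcodes are inspected; the pickle is never executed, so this
    is safe to run on untrusted files.
    """
    imports = set()
    recent_strings = []  # needed to resolve STACK_GLOBAL (protocol 4+)
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocols <= 3 encode "module name" in the opcode argument.
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            # Protocols >= 4 push module and name as the two preceding strings.
            imports.add(f"{recent_strings[-2]}.{recent_strings[-1]}")
        if isinstance(arg, str):
            recent_strings.append(arg)
    return imports


# Example: scan a small pickle instead of the 2.93 GB checkpoint.
import collections
import pickle

blob = pickle.dumps(collections.OrderedDict(a=1))
print(sorted(list_pickle_imports(blob)))  # ['collections.OrderedDict']
```

Note that a checkpoint importing custom classes such as `models.model.MultiModalClassifier` cannot be loaded with `torch.load(..., weights_only=True)` in recent PyTorch versions unless those classes are explicitly allowlisted, since that mode rejects non-allowlisted globals.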
2.93 GB, last commit: "Upload languagebind_model.pt"