zzha6204/languagebind-mlp
Tags: multimodal, classification, content detection
License: mit
Files and versions
languagebind-mlp at revision 6aeecea · 1 contributor · History: 2 commits
Latest commit by zzha6204 (verified): Upload languagebind_model.pt · 6aeecea · 10 months ago
.gitattributes · Safe · 1.52 kB · initial commit · 10 months ago
languagebind_model.pt · pickle
Detected Pickle imports (36):
"models.languagebind.image.configuration_image.CLIPTextConfig"
,
"models.languagebind.audio.modeling_audio.CLIPEncoder"
,
"torch._utils._rebuild_parameter"
,
"transformers.models.clip.modeling_clip.CLIPTextEmbeddings"
,
"models.languagebind.audio.configuration_audio.LanguageBindAudioConfig"
,
"torch.nn.modules.linear.Linear"
,
"transformers.models.clip.modeling_clip.CLIPVisionEmbeddings"
,
"models.languagebind.audio.configuration_audio.CLIPVisionConfig"
,
"models.languagebind.image.modeling_image.CLIPEncoderLayer"
,
"torch.nn.modules.normalization.LayerNorm"
,
"models.languagebind.image.modeling_image.CLIPVisionTransformer"
,
"models.languagebind.image.modeling_image.PatchDropout"
,
"transformers.activations.GELUActivation"
,
"torch.float32"
,
"models.languagebind.image.modeling_image.CLIPTextTransformer"
,
"__builtin__.set"
,
"models.languagebind.audio.modeling_audio.CLIPEncoderLayer"
,
"torch.nn.modules.container.ModuleDict"
,
"models.languagebind.audio.modeling_audio.CLIPVisionTransformer"
,
"torch._C._nn.gelu"
,
"models.languagebind.audio.modeling_audio.PatchDropout"
,
"transformers.models.clip.modeling_clip.CLIPMLP"
,
"models.model.CustomClassifier"
,
"models.languagebind.image.configuration_image.CLIPVisionConfig"
,
"torch.nn.modules.container.ModuleList"
,
"models.languagebind.audio.configuration_audio.CLIPTextConfig"
,
"models.model.MultiModalClassifier"
,
"torch.nn.modules.conv.Conv2d"
,
"torch.nn.modules.sparse.Embedding"
,
"torch._utils._rebuild_tensor_v2"
,
"collections.OrderedDict"
,
"torch.LongStorage"
,
"models.languagebind.image.configuration_image.LanguageBindImageConfig"
,
"transformers.models.clip.modeling_clip.CLIPAttention"
,
"models.languagebind.image.modeling_image.CLIPEncoder"
,
"torch.FloatStorage"
2.93 GB · LFS · Upload languagebind_model.pt · 10 months ago
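
The pickle import list above suggests the checkpoint was saved with torch.save() on a full model object (a models.model.MultiModalClassifier wrapping LanguageBind image and audio towers) rather than a plain state_dict, so deserializing it requires the repository's training code (the `models` package) to be importable. Below is a minimal loading sketch; the local code path is an assumption, and only the repo id and filename come from this page:

```python
# Hedged sketch: load languagebind_model.pt from zzha6204/languagebind-mlp.
# Assumes the original training code (the `models` package whose classes
# appear in the pickle scan) is available in a local directory.
import sys

import torch
from huggingface_hub import hf_hub_download

# Make the `models` package importable; this path is a placeholder.
sys.path.insert(0, "/path/to/languagebind-mlp-code")

ckpt_path = hf_hub_download(
    repo_id="zzha6204/languagebind-mlp",
    filename="languagebind_model.pt",
)

# weights_only=True cannot reconstruct the custom classes listed above,
# so the full unpickler is needed; only do this if you trust the repo,
# since unpickling executes code from the checkpoint's class definitions.
model = torch.load(ckpt_path, map_location="cpu", weights_only=False)
model.eval()
print(type(model))
```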