[Repository file-browser residue — latest commit "Create README.md" (018d2d6); earlier rows: 1.52 kB ("initial commit"), 66 Bytes ("Create README.md")]
pretrained_vit_model_full.pth
Detected Pickle imports (18)
- "torch.nn.modules.activation.MultiheadAttention",
- "torchvision.models.vision_transformer.MLPBlock",
- "torch.nn.modules.activation.GELU",
- "torch.nn.modules.linear.Linear",
- "torch._utils._rebuild_parameter",
- "__builtin__.set",
- "torchvision.models.vision_transformer.Encoder",
- "torch.nn.modules.dropout.Dropout",
- "torch.nn.modules.normalization.LayerNorm",
- "torch.nn.modules.linear.NonDynamicallyQuantizableLinear",
- "torch.FloatStorage",
- "torch.nn.modules.conv.Conv2d",
- "torch.nn.modules.container.Sequential",
- "functools.partial",
- "collections.OrderedDict",
- "torch._utils._rebuild_tensor_v2",
- "torchvision.models.vision_transformer.VisionTransformer",
- "torchvision.models.vision_transformer.EncoderBlock"
343 MB · commit "Upload 2 files"
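The "Detected Pickle imports" list above comes from Hugging Face's pickle scanner, which reads the opcode stream of a `.pth` file without executing it: every `GLOBAL` (protocols ≤ 3) or `STACK_GLOBAL` (protocols ≥ 4) opcode names a class or function the unpickler would import. A minimal sketch of that idea using only the standard library — the helper name `pickle_imports` is mine, and plain `collections`/`functools` objects stand in for the torch modules, so no PyTorch install is needed:

```python
import collections
import functools
import pickle
import pickletools


def pickle_imports(data: bytes) -> set[str]:
    """Return the fully qualified names a pickle stream would import.

    GLOBAL/INST opcodes carry "module name" directly; STACK_GLOBAL
    takes its module and attribute from the two most recently pushed
    strings, so we track those as we scan.
    """
    imports = set()
    last_strings = [None, None]  # rolling window of string arguments
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "INST"):
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL":
            imports.add(f"{last_strings[0]}.{last_strings[1]}")
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            last_strings = [last_strings[1], arg]
    return imports


obj = {"layers": collections.OrderedDict(fc=1), "act": functools.partial(max, 0)}
print(sorted(pickle_imports(pickle.dumps(obj, protocol=4))))
# → ['builtins.max', 'collections.OrderedDict', 'functools.partial']
```

Because scanning never calls the imported objects, it is safe to run on untrusted files — unlike actually unpickling them, which can execute arbitrary code.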
vit_model_full.pth
Detected Pickle imports (19)
- "torch.nn.modules.container.Sequential",
- "torch.nn.modules.dropout.Dropout",
- "torch._utils._rebuild_tensor_v2",
- "__main__.PatchEmbedding",
- "__builtin__.set",
- "torch.nn.modules.linear.NonDynamicallyQuantizableLinear",
- "__main__.MLPBlock",
- "torch.nn.modules.conv.Conv2d",
- "torch.nn.modules.activation.GELU",
- "torch.nn.modules.linear.Linear",
- "torch.nn.modules.normalization.LayerNorm",
- "__main__.ViT",
- "torch._utils._rebuild_parameter",
- "__main__.MultiheadSelfAttentionBlock",
- "torch.nn.modules.activation.MultiheadAttention",
- "collections.OrderedDict",
- "__main__.TransformerEncoderBlock",
- "torch.nn.modules.flatten.Flatten",
- "torch.FloatStorage"
343 MB · commit "Upload 2 files"
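Unlike `pretrained_vit_model_full.pth`, whose imports are all torch/torchvision classes, this checkpoint references classes pickled out of the training script itself (`__main__.ViT`, `__main__.PatchEmbedding`, `__main__.TransformerEncoderBlock`, ...). A full-model pickle stores only *references* to those classes, not their code, so the file can only be unpickled in a process where identically named classes are defined. A stdlib-only sketch of the failure mode — the class name is a stand-in for the ones in the list above:

```python
import pickle
import sys


class PatchEmbedding:  # stand-in for the class defined in the training script
    pass


# Pickling an instance records a reference like "<module>.PatchEmbedding",
# not the class definition itself.
data = pickle.dumps(PatchEmbedding())

# Simulate loading in an environment where that class is missing.
delattr(sys.modules[PatchEmbedding.__module__], "PatchEmbedding")
try:
    pickle.loads(data)
except AttributeError as exc:
    print(f"load failed: {exc}")
```

The usual way to avoid this coupling is to save `model.state_dict()` instead of the whole model and rebuild the architecture in code before calling `load_state_dict`.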