
Adds the missing layers from openai/clip-vit-large-patch14-336 to microsoft/LLM2CLIP-Openai-L-14-336 so that it can be loaded in ComfyUI. The merged checkpoint was produced with the script below, which keeps every key from the original CLIP-L file and prefers the LLM2CLIP tensor whenever the same key exists in both checkpoints.

```python
from safetensors.torch import load_file, save_file


def newclip(original_path, new_path, combined_path):
    """Merge two CLIP checkpoints, preferring tensors from `new_path`."""
    original = load_file(original_path)
    new = load_file(new_path)
    combined = {}

    # Walk the full key set of the original CLIP-L checkpoint; take the
    # LLM2CLIP tensor when one exists, otherwise keep the original tensor.
    for key in original:
        combined[key] = new[key] if key in new else original[key]

    save_file(combined, combined_path)


newclip(
    "./original-clip-l.safetensors",
    "./new-llm2clip-clip-l.safetensors",
    "./combined.safetensors",
)
```
Model size: 428M parameters (safetensors, F32 and I64 tensors).