---
license: cc-by-nc-sa-4.0
size_categories:
- 10M<n<100M
---
Sentence Transformers (all-MiniLM-L6-v2) embeddings for all long LLaVA summaries in the CaptionEmporium/coyo-hd-11m-llavanext dataset (07-03-2024 version).
## Instructions
PLEASE NOTE: You will need at least 40 GB of GPU memory to load and search the embeddings on GPU.
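Before loading, you can confirm your GPU meets the requirement with PyTorch (a minimal sanity-check sketch, not part of the original instructions):

```python
import torch

# Report the total memory of the default CUDA device so you can
# confirm it meets the ~40 GB requirement before loading.
props = torch.cuda.get_device_properties(0)
total_gb = props.total_memory / 1024**3
print(f'{props.name}: {total_gb:.1f} GB total GPU memory')
```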
### Dependencies
```python
!pip install huggingface_hub -U
!pip install datasets -U
!pip install sentence-transformers -U
```
### Imports
```python
from huggingface_hub import hf_hub_download
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util

import torch
import numpy as np
import tqdm
```
### Load the coyo dataset and its embeddings
```python
# Load the source dataset
coyo_dataset = load_dataset("CaptionEmporium/coyo-hd-11m-llavanext")

# Download the precomputed embeddings archive to the current directory
hf_hub_download(repo_id="asigalov61/coyo-hd-11m-llavanext-all-MiniLM-L6-v2",
                repo_type='dataset',
                filename="coyo_hd_11m_llavanext_all_MiniLM_L6_v2_llava_captions_embeddings_07_03_24.npz",
                local_dir='.'
                )
```
### Loading code
```python
# Load the embeddings, move them to the GPU and L2-normalize them
# so that dot-product scores are equivalent to cosine similarity
coyo_embeddings = np.load('coyo_hd_11m_llavanext_all_MiniLM_L6_v2_llava_captions_embeddings_07_03_24.npz')['data']
coyo_embeddings = torch.from_numpy(coyo_embeddings).cuda()
coyo_embeddings = util.normalize_embeddings(coyo_embeddings)

# Load the same model that was used to create the embeddings
model = SentenceTransformer('all-MiniLM-L6-v2', device='cuda')
```
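If you do not have 40 GB of GPU memory, a hedged alternative to the GPU loading code above is to keep the corpus embeddings on the CPU and let `util.semantic_search` scan them in chunks via its `corpus_chunk_size` parameter. This is slower, and it assumes you have enough system RAM to hold the float32 matrix (roughly 17 GB for ~11M rows of 384-dim vectors):

```python
# Sketch: CPU-resident corpus, searched in chunks (no large GPU allocation)
coyo_embeddings_cpu = np.load('coyo_hd_11m_llavanext_all_MiniLM_L6_v2_llava_captions_embeddings_07_03_24.npz')['data']
coyo_embeddings_cpu = util.normalize_embeddings(torch.from_numpy(coyo_embeddings_cpu))

# Encode a query and move it to the CPU to match the corpus device
query_embedding = model.encode(['Cute cats in tacky suits :)'],
                               convert_to_tensor=True).cpu()
query_embedding = util.normalize_embeddings(query_embedding)

results = util.semantic_search(query_embedding,
                               coyo_embeddings_cpu,
                               corpus_chunk_size=500000,  # process the corpus in chunks
                               score_function=util.dot_score)
```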
### Inference code
```python
torch.cuda.empty_cache()

# Queries to search the corpus for
queries_corpus = ['Capital of France',
                  'Love, peace and happiness',
                  'Cute cats in tacky suits :)'
                  ]

# Encode and normalize the queries with the same model
queries_embeddings = model.encode(queries_corpus, device='cuda', show_progress_bar=True, convert_to_tensor=True)
queries_embeddings = util.normalize_embeddings(queries_embeddings)

# Both sides are normalized, so the dot product equals cosine similarity
results = util.semantic_search(queries_embeddings, coyo_embeddings, score_function=util.dot_score)

# Best match for the first query
closest_index = results[0][0]['corpus_id']

print('=' * 70)
print('Best match index:', closest_index)
print('=' * 70)
print('Best match corpus entry:', coyo_dataset['train'][closest_index])
print('=' * 70)
```
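`util.semantic_search` returns one list of hits per query, each hit a dict with `corpus_id` and `score`, sorted by descending score; the snippet above only inspects the single best hit for the first query. A short sketch that prints the top hits for every query:

```python
# Print the top 3 hits for every query in queries_corpus
for query, hits in zip(queries_corpus, results):
    print('=' * 70)
    print('Query:', query)
    for hit in hits[:3]:
        idx, score = hit['corpus_id'], hit['score']
        print(f'  score={score:.4f} idx={idx}')
        print('  entry:', coyo_dataset['train'][idx])
```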