---
dataset_info:
  features:
  - name: emoji
    dtype: string
  - name: message
    dtype: string
  - name: embed
    sequence: float64
  splits:
  - name: train
    num_bytes: 30665042
    num_examples: 3722
  download_size: 24682308
  dataset_size: 30665042
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- feature-extraction
language:
- en
tags:
- semantic-search
- embeddings
- emoji
size_categories:
- 1K<n<10K
---
# local emoji semantic search
Emoji, their text descriptions, and text embeddings precomputed with [Alibaba-NLP/gte-large-en-v1.5](https://hf.co/Alibaba-NLP/gte-large-en-v1.5), for use in emoji semantic search.
This work is largely inspired by the original [emoji-semantic-search repo](https://archive.md/ikcze) and aims to provide the data for fully local use, as the [demo](https://www.emojisearch.app/) is [not working](https://github.com/lilianweng/emoji-semantic-search/issues/6#issue-2724936875) at the time of writing.
- This repo contains only the precomputed embedding "database" for semantic search, equivalent to [server/emoji-embeddings.jsonl.gz](https://github.com/lilianweng/emoji-semantic-search/blob/6a6f351852b99e7b899437fa31309595a9008cd1/server/emoji-embeddings.jsonl.gz) in the original repo.
- If working with the original repo, the [inference class](https://github.com/lilianweng/emoji-semantic-search/blob/6a6f351852b99e7b899437fa31309595a9008cd1/server/app.py#L18) also needs to be updated to use SentenceTransformers instead of OpenAI API calls (_see the sketch below this list_)
- The provided inference code is almost instant even on CPU 🔥
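For reference, the swap can be as small as the sketch below; the class and method names here are illustrative, not the original repo's exact interface:
```py
from sentence_transformers import SentenceTransformer

class LocalEmbeddingModel:
    """Hypothetical stand-in for the OpenAI-backed embedding client."""

    def __init__(self, model_name: str = "Alibaba-NLP/gte-large-en-v1.5"):
        self.model = SentenceTransformer(model_name, trust_remote_code=True)

    def get_embedding(self, text: str):
        # encode locally instead of calling the OpenAI embeddings API
        return self.model.encode(text)
```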
## basic inference example
since the dataset is tiny, just load with pandas:
```py
import pandas as pd
df = pd.read_parquet("hf://datasets/pszemraj/local-emoji-search-gte/data/train-00000-of-00001.parquet")
print(df.info())
```
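as a quick sanity check, each row pairs an emoji and its description with one embedding vector; gte-large-en-v1.5 produces 1024-dimensional embeddings, so the length below should be 1024 (an assumption based on the model card, not verified here):
```py
print(df.columns.tolist())    # ['emoji', 'message', 'embed']
print(len(df.embed.iloc[0]))  # 1024 for gte-large-en-v1.5
```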
load the sentence-transformers model:
```py
# Requires sentence_transformers>=2.7.0
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('Alibaba-NLP/gte-large-en-v1.5', trust_remote_code=True)
```
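optionally, verify that the model's output dimension matches the precomputed vectors before searching (`get_sentence_embedding_dimension` is a standard SentenceTransformer method):
```py
# query and database embeddings must share the same dimensionality
assert model.get_sentence_embedding_dimension() == len(df.embed.iloc[0])
```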
define a minimal semantic search inference function:
<details>
<summary>Click me to expand the inference function code</summary>

```py
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import semantic_search


def get_top_emojis(
    query: str,
    emoji_df: pd.DataFrame,
    model: SentenceTransformer,
    top_k: int = 5,
    num_digits: int = 4,
) -> list:
    """
    Performs semantic search to find the most relevant emojis for a given query.

    Args:
        query (str): The search query.
        emoji_df (pd.DataFrame): DataFrame containing emoji metadata and embeddings.
        model (SentenceTransformer): The sentence transformer model for encoding.
        top_k (int): Number of top results to return.
        num_digits (int): Number of digits to round scores to.

    Returns:
        list: A list of dicts, one per top match, each with keys 'emoji', 'message', and 'score'.
    """
    query_embed = model.encode(query)
    # stack the per-row embedding lists into a single (n, dim) float32 matrix
    # (the dtype kwarg of np.vstack requires numpy>=1.24)
    embeddings_array = np.vstack(emoji_df.embed.values, dtype=np.float32)
    hits = semantic_search(query_embed, embeddings_array, top_k=top_k)[0]
    # extract the top hits + metadata
    results = [
        {
            "emoji": emoji_df.loc[hit["corpus_id"], "emoji"],
            "message": emoji_df.loc[hit["corpus_id"], "message"],
            "score": round(hit["score"], num_digits),
        }
        for hit in hits
    ]
    return results
```
</details>
run inference!
```py
import pprint as pp
query_text = "that is flames"
top_emojis = get_top_emojis(query_text, df, model, top_k=5)
pp.pprint(top_emojis, indent=2)
# [ {'emoji': '❤\u200d🔥', 'message': 'heart on fire', 'score': 0.7043},
# {'emoji': '🥵', 'message': 'hot face', 'score': 0.694},
# {'emoji': '😳', 'message': 'flushed face', 'score': 0.6794},
# {'emoji': '🔥', 'message': 'fire', 'score': 0.6744},
# {'emoji': '🧨', 'message': 'firecracker', 'score': 0.663}]
```
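note that `get_top_emojis` re-stacks the embedding matrix on every call; if you issue many queries, a small optimization (a sketch under the same assumptions as above, with `df` and `model` already loaded) is to build the matrix once and reuse it:
```py
import numpy as np
from sentence_transformers.util import semantic_search

# build the (n, dim) database matrix once, reuse it for every query
embeddings_array = np.vstack(df.embed.values).astype(np.float32)

def search(query: str, top_k: int = 5) -> list:
    hits = semantic_search(model.encode(query), embeddings_array, top_k=top_k)[0]
    return [(df.loc[h["corpus_id"], "emoji"], round(h["score"], 4)) for h in hits]

print(search("that is flames"))  # e.g. [('❤‍🔥', 0.7043), ('🥵', 0.694), ...]
```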