---
dataset_info:
  features:
    - name: emoji
      dtype: string
    - name: message
      dtype: string
    - name: embed
      sequence: float64
  splits:
    - name: train
      num_bytes: 30665042
      num_examples: 3722
  download_size: 24682308
  dataset_size: 30665042
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: mit
task_categories:
  - feature-extraction
language:
  - en
tags:
  - semantic-search
  - embeddings
  - emoji
size_categories:
  - 1K<n<10K
---

# local emoji semantic search

Emoji, their text descriptions, and precomputed text embeddings (generated with `Alibaba-NLP/gte-large-en-v1.5`) for use in emoji semantic search.

This work is largely inspired by the original emoji-semantic-search repo and aims to provide the data for fully local use, since the hosted demo recently stopped working.

- This repo contains only a precomputed embedding "database", equivalent to `server/emoji-embeddings.jsonl.gz` in the original repo, to be used as the corpus for semantic search.
  - If working with the original repo, the inference class also needs to be updated to use sentence-transformers instead of OpenAI API calls (see the example below).
- The provided inference code runs almost instantly, even on CPU 🔥

## basic inference example

since the dataset is tiny, just load it with pandas:

```python
import pandas as pd

df = pd.read_parquet(
    "hf://datasets/pszemraj/local-emoji-search-gte/data/train-00000-of-00001.parquet"
)
print(df.info())
```
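If you want to prototype the search logic without downloading anything, a stand-in frame with the same three columns (per the dataset features above: `emoji`, `message`, `embed`) can be built by hand — the embedding values below are random placeholders, not real `gte-large-en-v1.5` outputs:

```python
import numpy as np
import pandas as pd

# toy stand-in mirroring the dataset schema: emoji, message, embed
toy_df = pd.DataFrame(
    {
        "emoji": ["🔥", "🥶"],
        "message": ["fire", "cold face"],
        # gte-large-en-v1.5 embeddings are 1024-dimensional; random values stand in here
        "embed": [np.random.rand(1024).tolist() for _ in range(2)],
    }
)
print(toy_df.columns.tolist())  # ['emoji', 'message', 'embed']
```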

load the sentence-transformers model:

```python
# requires sentence_transformers>=2.7.0
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True)
```

define a minimal semantic search inference function:

<details>
<summary>Click me to expand the inference function code</summary>

```python
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import semantic_search


def get_top_emojis(
    query: str,
    emoji_df: pd.DataFrame,
    model: SentenceTransformer,
    top_k: int = 5,
    num_digits: int = 4,
) -> list:
    """
    Performs semantic search to find the most relevant emojis for a given query.

    Args:
        query (str): The search query.
        emoji_df (pd.DataFrame): DataFrame containing emoji metadata and embeddings.
        model (SentenceTransformer): The sentence transformer model for encoding.
        top_k (int): Number of top results to return.
        num_digits (int): Number of digits to round scores to.

    Returns:
        list: A list of dicts, one per top match, with keys 'emoji', 'message', and 'score'.
    """
    query_embed = model.encode(query)
    # stack the per-row embedding lists into a single (n, dim) float32 matrix
    embeddings_array = np.vstack(emoji_df.embed.values, dtype=np.float32)

    hits = semantic_search(query_embed, embeddings_array, top_k=top_k)[0]

    # extract the top hits + metadata
    results = [
        {
            "emoji": emoji_df.loc[hit["corpus_id"], "emoji"],
            "message": emoji_df.loc[hit["corpus_id"], "message"],
            "score": round(hit["score"], num_digits),
        }
        for hit in hits
    ]
    return results
```

</details>
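Under the hood, `semantic_search` ranks corpus rows by cosine similarity. A minimal pure-numpy equivalent (illustrative only — `cosine_top_k` and the 4-dim toy corpus below are made up for this sketch, not part of the dataset or the original repo) looks like:

```python
import numpy as np


def cosine_top_k(query: np.ndarray, corpus: np.ndarray, top_k: int = 5):
    """Return (indices, scores) of the top_k corpus rows by cosine similarity."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    idx = np.argsort(-scores)[:top_k]
    return idx, scores[idx]


# toy 4-dim corpus: row 1 is parallel to the query, so it ranks first
corpus = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0, 0.0],
                   [0.5, 0.5, 0.0, 0.0]])
query = np.array([0.0, 1.0, 0.0, 0.0])
idx, scores = cosine_top_k(query, corpus, top_k=2)
print(idx)  # [1 2]
```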

## run inference!

```python
import pprint as pp

query_text = "that is flames"
top_emojis = get_top_emojis(query_text, df, model, top_k=5)

pp.pprint(top_emojis, indent=2)
# [ {'emoji': '❤\u200d🔥', 'message': 'heart on fire', 'score': 0.7043},
#   {'emoji': '🥵', 'message': 'hot face', 'score': 0.694},
#   {'emoji': '😳', 'message': 'flushed face', 'score': 0.6794},
#   {'emoji': '🔥', 'message': 'fire', 'score': 0.6744},
#   {'emoji': '🧨', 'message': 'firecracker', 'score': 0.663}]
```
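Since the embeddings never change, one optional tweak (not part of the original repo; the file name and location here are arbitrary) is to stack them once and cache the matrix as a `.npy` file, so later runs can `np.load()` it instead of re-stacking per-row lists — the toy 3×8 data below stands in for the real `df.embed.values`:

```python
import os
import tempfile

import numpy as np

# toy stand-in for df["embed"]; with the real dataset use np.vstack(df.embed.values)
embeds = [list(np.random.rand(8)) for _ in range(3)]
matrix = np.vstack(embeds).astype(np.float32)

# cache to disk once; later runs can np.load() this instead of re-stacking
cache_path = os.path.join(tempfile.mkdtemp(), "emoji_embeddings.npy")
np.save(cache_path, matrix)

reloaded = np.load(cache_path)
print(reloaded.shape)  # (3, 8)
```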