---
configs:
  - config_name: en
    default: true
    data_files:
      - split: train
        path: data/en/*.parquet
  - config_name: de
    data_files:
      - split: train
        path: data/de/*.parquet
  - config_name: fr
    data_files:
      - split: train
        path: data/fr/*.parquet
  - config_name: ru
    data_files:
      - split: train
        path: data/ru/*.parquet
  - config_name: es
    data_files:
      - split: train
        path: data/es/*.parquet
  - config_name: it
    data_files:
      - split: train
        path: data/it/*.parquet
  - config_name: ja
    data_files:
      - split: train
        path: data/ja/*.parquet
  - config_name: pt
    data_files:
      - split: train
        path: data/pt/*.parquet
  - config_name: zh
    data_files:
      - split: train
        path: data/zh/*.parquet
  - config_name: fa
    data_files:
      - split: train
        path: data/fa/*.parquet
  - config_name: tr
    data_files:
      - split: train
        path: data/tr/*.parquet
license: apache-2.0
---

Wikipedia Embeddings with BGE-M3

This dataset contains embeddings from the June 2024 Wikipedia dump for the 11 most popular languages.

The embeddings are generated with the multilingual BGE-M3 model.

The dataset consists of Wikipedia articles split into paragraphs, each embedded with the aforementioned model.

To enhance search quality, the paragraphs are prefixed with their respective article titles before embedding.

Additionally, paragraphs containing fewer than 100 characters, which tend to have low information density, are excluded from the dataset.
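
For illustration, the preprocessing described above could look roughly like the sketch below. The paragraph-splitting rule, the title prefix format, and the normalization flag are assumptions for this example, not the exact pipeline used to build the dataset.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")

def embed_article(title, text):
    # Split the article into paragraphs (assumed: blank-line separated)
    # and drop paragraphs with fewer than 100 characters
    paragraphs = [p.strip() for p in text.split("\n\n") if len(p.strip()) >= 100]
    # Prefix each paragraph with the article title (assumed separator)
    prefixed = [f"{title}\n{p}" for p in paragraphs]
    # Embed with BGE-M3; each embedding has 1024 dimensions
    return model.encode(prefixed, normalize_embeddings=True)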

The dataset contains approximately 144 million vector embeddings in total.

| Language   | Config Name | Embeddings  |
|------------|-------------|-------------|
| English    | en          | 47_018_430  |
| German     | de          | 20_213_669  |
| French     | fr          | 18_324_060  |
| Russian    | ru          | 13_618_886  |
| Spanish    | es          | 13_194_999  |
| Italian    | it          | 10_092_524  |
| Japanese   | ja          | 7_769_997   |
| Portuguese | pt          | 5_948_941   |
| Chinese    | zh          | 3_306_397   |
| Farsi      | fa          | 2_598_251   |
| Turkish    | tr          | 2_051_157   |
| Total      |             | 144_137_311 |
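
If you prefer not to hard-code a language, the config names listed above can also be discovered programmatically with the datasets library:

from datasets import get_dataset_config_names

# Returns the available language configs, e.g. ["en", "de", "fr", ...]
configs = get_dataset_config_names("Upstash/wikipedia-2024-06-bge-m3")
print(configs)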

Loading the Dataset

You can load the entire dataset for a language as follows. Please note that for some languages, the download size may be quite large.

from datasets import load_dataset

dataset = load_dataset("Upstash/wikipedia-2024-06-bge-m3", "en", split="train")
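
After loading, you can quickly inspect the columns (id, url, title, text, embedding) and verify the 1024-dimensional embeddings:

print(dataset)                        # Row count and column names
print(dataset.features)               # id, url, title, text, embedding
print(len(dataset[0]["embedding"]))   # 1024 for BGE-M3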

Alternatively, you can stream portions of the dataset as needed.

from datasets import load_dataset

dataset = load_dataset(
    "Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True
)

for data in dataset:
    data_id = data["id"]
    url = data["url"]
    title = data["title"]
    text = data["text"]
    embedding = data["embedding"]
    # Do some work
    break
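
If you only need a bounded sample from the stream, you can limit it with take:

# Take only the first 1000 records from the stream
sample = dataset.take(1000)

for data in sample:
    print(data["title"])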

Using the Dataset

One potential use case for the dataset is enabling similarity search by integrating it with a vector database.

In fact, we have developed a vector database that allows you to search through the Wikipedia articles. Additionally, it includes a RAG (Retrieval-Augmented Generation) chatbot, so you can interact with an assistant grounded in this dataset.

For more details, see this blog post, and be sure to check out the search engine and chatbot yourself.

For reference, here is a rough sketch of how to implement semantic search using this dataset and Upstash Vector.

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from upstash_vector import Index

# Create an Upstash Vector index with its dimension set to 1024
# and its similarity function set to dot product.
index = Index(
    url="<UPSTASH_VECTOR_REST_URL>",
    token="<UPSTASH_VECTOR_REST_TOKEN>",
)

vectors = []
batch_size = 200

dataset = load_dataset(
    "Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True
)

for data in dataset:
    data_id = data["id"]
    url = data["url"]
    title = data["title"]
    text = data["text"]
    embedding = data["embedding"]

    metadata = {
        "url": url,
        "title": title,
    }

    vector = (
        data_id,  # Unique vector id
        embedding,  # Vector embedding
        metadata,  # Optional, JSON-like metadata
        text,  # Optional, unstructured text data
    )
    vectors.append(vector)

    if len(vectors) == batch_size:
        break

# Upload the collected batch of embeddings into Upstash Vector
index.upsert(
    vectors=vectors,
    namespace="en",
)

# Load the embedding model and create the query vector
transformer = SentenceTransformer(
    "BAAI/bge-m3",
    device="cuda",
    revision="babcf60cae0a1f438d7ade582983d4ba462303c2",
)

query = "Which state has the nickname Yellowhammer State?"
query_vector = transformer.encode(
    sentences=query,
    show_progress_bar=False,
    normalize_embeddings=True,
)

results = index.query(
    vector=query_vector,
    top_k=2,
    include_metadata=True,
    include_data=True,
    namespace="en",
)

# Query results are sorted in descending order of similarity
for result in results:
    print(result.id)  # Unique vector id
    print(result.score)  # Similarity score to the query vector
    print(result.metadata)  # Metadata associated with vector
    print(result.data)  # Unstructured data associated with vector
    print("---")