---
configs:
- config_name: en
  default: true
  data_files:
  - split: train
    path: "data/en/*.parquet"
- config_name: de
  data_files:
  - split: train
    path: "data/de/*.parquet"
- config_name: fr
  data_files:
  - split: train
    path: "data/fr/*.parquet"
- config_name: ru
  data_files:
  - split: train
    path: "data/ru/*.parquet"
- config_name: es
  data_files:
  - split: train
    path: "data/es/*.parquet"
- config_name: it
  data_files:
  - split: train
    path: "data/it/*.parquet"
- config_name: ja
  data_files:
  - split: train
    path: "data/ja/*.parquet"
- config_name: pt
  data_files:
  - split: train
    path: "data/pt/*.parquet"
- config_name: zh
  data_files:
  - split: train
    path: "data/zh/*.parquet"
- config_name: fa
  data_files:
  - split: train
    path: "data/fa/*.parquet"
- config_name: tr
  data_files:
  - split: train
    path: "data/tr/*.parquet"
license: apache-2.0
---
# Wikipedia Embeddings with BGE-M3
This dataset contains embeddings from the
[June 2024 Wikipedia dump](https://dumps.wikimedia.org/wikidatawiki/20240601/)
for the 11 most popular languages.
The embeddings are generated with the multilingual
[BGE-M3](https://huggingface.co/BAAI/bge-m3) model.
The dataset consists of Wikipedia articles split into paragraphs,
each of which is embedded with this model.
To enhance search quality, the paragraphs are prefixed with their
respective article titles before embedding.
Additionally, paragraphs containing fewer than 100 characters,
which tend to have low information density, are excluded from the dataset.
The dataset contains approximately 144 million vector embeddings in total.
| Language | Config Name | Embeddings |
|------------|-------------|-------------|
| English | en | 47_018_430 |
| German | de | 20_213_669 |
| French | fr | 18_324_060 |
| Russian | ru | 13_618_886 |
| Spanish | es | 13_194_999 |
| Italian | it | 10_092_524 |
| Japanese | ja | 7_769_997 |
| Portuguese | pt | 5_948_941 |
| Chinese    | zh          | 3_306_397   |
| Farsi      | fa          | 2_598_251   |
| Turkish | tr | 2_051_157 |
| **Total** | | 144_137_311 |
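
For illustration, the preprocessing described above (title prefixing and the
100-character filter) could be reproduced roughly as follows. This is a minimal
sketch, not the exact pipeline used to build the dataset; in particular, the
way the title is joined to the paragraph text is an assumption.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")

def embed_paragraphs(title, paragraphs):
    # Drop paragraphs shorter than 100 characters and prefix the rest
    # with the article title (exact separator assumed here).
    texts = [f"{title}\n{p}" for p in paragraphs if len(p) >= 100]
    # Normalized embeddings make dot product equivalent to cosine similarity
    return model.encode(texts, normalize_embeddings=True)
```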
## Loading Dataset
You can load the entire dataset for a language as follows.
Please note that for some languages, the download size may be quite large.
```python
from datasets import load_dataset
dataset = load_dataset("Upstash/wikipedia-2024-06-bge-m3", "en", split="train")
```
Alternatively, you can stream portions of the dataset as needed.
```python
from datasets import load_dataset
dataset = load_dataset(
"Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True
)
for data in dataset:
    data_id = data["id"]
    url = data["url"]
    title = data["title"]
    text = data["text"]
    embedding = data["embedding"]
    # Do some work
    break
```
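If you only want to inspect a few rows, the streamed dataset can also be
limited with `take`; the sketch below (using the standard `datasets` streaming
API) prints the title and embedding dimension of the first three records.
```python
from datasets import load_dataset

dataset = load_dataset(
    "Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True
)

# Peek at the first few records without iterating the whole config
for data in dataset.take(3):
    print(data["title"], len(data["embedding"]))
```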
## Using Dataset
One potential use case for the dataset is enabling similarity search
by integrating it with a vector database.
In fact, we have developed a vector database that allows you to search
through the Wikipedia articles. Additionally, it includes a
[RAG (Retrieval-Augmented Generation)](https://github.com/upstash/rag-chat) chatbot,
which enables you to interact with a chatbot enhanced by the dataset.
For more details, see this [blog post](https://upstash.com/blog/indexing-wikipedia),
and be sure to check out the
[search engine and chatbot](https://wikipedia-semantic-search.vercel.app) yourself.
For reference, here is a rough outline of how to implement semantic search
using this dataset and Upstash Vector.
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from upstash_vector import Index
# Create an Upstash Vector index with the dimension set to 1024 and
# dot product as the similarity function, then connect to it below.
index = Index(
url="<UPSTASH_VECTOR_REST_URL>",
token="<UPSTASH_VECTOR_REST_TOKEN>",
)
vectors = []
batch_size = 200
dataset = load_dataset(
"Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True
)
for data in dataset:
    data_id = data["id"]
    url = data["url"]
    title = data["title"]
    text = data["text"]
    embedding = data["embedding"]
    metadata = {
        "url": url,
        "title": title,
    }
    vector = (
        data_id,    # Unique vector id
        embedding,  # Vector embedding
        metadata,   # Optional, JSON-like metadata
        text,       # Optional, unstructured text data
    )
    vectors.append(vector)
    if len(vectors) == batch_size:
        # For brevity, stop after collecting a single batch
        break
# Upload the collected batch of embeddings into Upstash Vector
index.upsert(
vectors=vectors,
namespace="en",
)
# Create the query vector
transformer = SentenceTransformer(
"BAAI/bge-m3",
device="cuda",
revision="babcf60cae0a1f438d7ade582983d4ba462303c2",
)
query = "Which state has the nickname Yellowhammer State?"
query_vector = transformer.encode(
sentences=query,
show_progress_bar=False,
normalize_embeddings=True,
)
results = index.query(
vector=query_vector,
top_k=2,
include_metadata=True,
include_data=True,
namespace="en",
)
# Query results are sorted in descending order of similarity
for result in results:
    print(result.id)        # Unique vector id
    print(result.score)     # Similarity score to the query vector
    print(result.metadata)  # Metadata associated with vector
    print(result.data)      # Unstructured data associated with vector
    print("---")
```