---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: url
      dtype: string
    - name: title
      dtype: string
    - name: chunks
      sequence: string
    - name: embeddings
      sequence:
        sequence: float32
  splits:
    - name: train
      num_bytes: 6167551972
      num_examples: 534044
  download_size: 5897354237
  dataset_size: 6167551972

configs:
  - config_name: default
    data_files:
    - split: train
      path: data/train-*

language:
  - cs

size_categories:
  - 100K<n<1M

task_categories:
  - text-generation
  - fill-mask

license:
  - cc-by-sa-3.0
  - gfdl
---

This dataset contains the Czech subset of the [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia) dataset. Each page is split into paragraphs, stored as a list in the `chunks` column. For each paragraph, an embedding is computed with the [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large) model and stored as a corresponding list in the `embeddings` column.

## Usage

Load the dataset:

```python
from datasets import load_dataset

ds = load_dataset("karmiq/wikipedia-embeddings-cs-e5-large", split="train")
ds[1]
```

```
{
  'id': '1',
  'url': 'https://cs.wikipedia.org/wiki/Astronomie',
  'title': 'Astronomie',
  'chunks': [
    'Astronomie, řecky αστρονομία z άστρον ( astron ) hvězda a νόμος ( nomos )...',
    'Myšlenky Aristotelovy rozvinul ve 2. století našeho letopočtu Klaudios Ptolemaios...',
    ...,
  ],
  'embeddings': [
    [0.09006806463003159, -0.009814552962779999, ...],
    [0.10767366737127304, ...],
    ...
  ]
}
```

This structure makes it straightforward to use the dataset for semantic search.

<details>
<summary>Load the data in Elasticsearch</summary>

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import parallel_bulk
from tqdm import tqdm

# Assumes a running Elasticsearch cluster; adjust the URL and credentials as needed.
es = Elasticsearch("http://localhost:9200")

def doc_generator(data, batch_size=1000):
  for batch in data.with_format("numpy").iter(batch_size):
    for i, id in enumerate(batch["id"]):
      output = {"id": id}
      output["title"] = batch["title"][i]
      output["url"] = batch["url"][i]
      output["parts"] = [
          { "chunk": chunk, "embedding": embedding }
          for chunk, embedding in zip(batch["chunks"][i], batch["embeddings"][i])
      ]
      yield output

num_indexed, num_failed = 0, 0
progress = tqdm(total=ds.num_rows, unit="doc", desc="Indexing")

for ok, info in parallel_bulk(
    es,
    index="wikipedia-search",
    actions=doc_generator(ds),
    raise_on_error=False,
):
    if ok:
        num_indexed += 1
    else:
        num_failed += 1
        print(f"ERROR {info['index']['status']}: "
              f"{info['index']['error']['type']}: {info['index']['error']['caused_by']['type']}: "
              f"{info['index']['error']['caused_by']['reason'][:250]}")

    progress.update(1)
```
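
The snippet above assumes the `wikipedia-search` index already exists with a mapping that can hold the chunk/embedding pairs. A minimal sketch of such a mapping, assuming Elasticsearch 8.x (the actual mapping is not part of this card; the 1024 dimensions correspond to `multilingual-e5-large`):

```python
# Hypothetical mapping matching the fields produced by doc_generator above.
es.indices.create(
    index="wikipedia-search",
    mappings={
        "properties": {
            "title": {"type": "text"},
            "url": {"type": "keyword"},
            "parts": {
                "type": "nested",
                "properties": {
                    "chunk": {"type": "text"},
                    "embedding": {
                        "type": "dense_vector",
                        "dims": 1024,  # multilingual-e5-large output size
                        "index": True,
                        "similarity": "cosine",
                    },
                },
            },
        }
    },
)
```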
</details>

<details>
<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>

```python
import os
import textwrap

import sentence_transformers
model = sentence_transformers.SentenceTransformer("intfloat/multilingual-e5-large")

ds.set_format(type="torch", columns=["embeddings"], output_all_columns=True)

# Flatten the dataset
def explode_sequence(batch):
  output = { "id": [], "url": [], "title": [], "chunk": [], "embedding": [] }

  for id, url, title, chunks, embeddings in zip(
    batch["id"], batch["url"], batch["title"], batch["chunks"], batch["embeddings"]
  ):
    output["id"].extend([id for _ in range(len(chunks))])
    output["url"].extend([url for _ in range(len(chunks))])
    output["title"].extend([title for _ in range(len(chunks))])
    output["chunk"].extend(chunks)
    output["embedding"].extend(embeddings)

  return output

ds_flat = ds.map(
  explode_sequence,
  batched=True,
  remove_columns=ds.column_names,
  num_proc=min(os.cpu_count(), 32),
  desc="Flatten")
ds_flat

query = "Čím se zabývá fyzika?"

hits = sentence_transformers.util.semantic_search(
  query_embeddings=model.encode(query),
  corpus_embeddings=ds_flat["embedding"],
  top_k=10)

for hit in hits[0]:
    title = ds_flat[hit['corpus_id']]['title']
    chunk = ds_flat[hit['corpus_id']]['chunk']
    print(f"[{hit['score']:0.2f}] {textwrap.shorten(chunk, width=100, placeholder='…')} [{title}]")

# [0.90] Fyzika částic ( též částicová fyzika ) je oblast fyziky, která se zabývá částicemi. V širším smyslu… [Fyzika částic]
# [0.89] Fyzika ( z řeckého φυσικός ( fysikos ): přírodní, ze základu φύσις ( fysis ): příroda, archaicky… [Fyzika]
# ...
```
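
Note that `intfloat/multilingual-e5-large` is normally used with `"query: "` and `"passage: "` prefixes. This card does not state whether the stored embeddings were created with a prefix, so it may be worth comparing retrieval quality with a prefixed query, for example:

```python
# Hypothetical variant: prefix the query as recommended by the e5 model card.
hits = sentence_transformers.util.semantic_search(
  query_embeddings=model.encode("query: " + query, normalize_embeddings=True),
  corpus_embeddings=ds_flat["embedding"],
  top_k=10)
```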
</details>

Generating the embeddings took about 6 hours on an NVIDIA A100 80GB GPU.
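
For reference, a minimal sketch of how compatible per-paragraph embeddings can be produced with the same model (the exact chunking, batching, and any `"passage: "` prefix used to build this dataset are not documented here):

```python
import sentence_transformers

model = sentence_transformers.SentenceTransformer("intfloat/multilingual-e5-large")

# Example paragraphs; in the dataset each page contributes one such list (`chunks`).
paragraphs = [
    "Astronomie je věda, která se zabývá jevy za hranicemi zemské atmosféry.",
    "Fyzika zkoumá zákonitosti přírodních jevů.",
]
embeddings = model.encode(paragraphs, normalize_embeddings=True)
print(embeddings.shape)  # (2, 1024)
```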

## License

See the license of the original dataset: <https://huggingface.co/datasets/wikimedia/wikipedia>.