---
dataset_info:
  features:
  - name: lang
    dtype: string
  - name: example_id
    dtype: string
  - name: query
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 4193271
    num_examples: 40548
  download_size: 2118715
  dataset_size: 4193271
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# mkqa filtered version

For a full dataset description, please visit the official page of the source dataset: [mkqa](https://huggingface.co/datasets/mkqa).

**This dataset was prepared by converting the mkqa dataset.**

**I additionally share the code I used to convert the original dataset, to make everything clearer:**
```python
import pandas as pd
from datasets import load_dataset
from tqdm import tqdm

mkqa = load_dataset("mkqa", split="train").to_pandas()
needed_langs = ["en", "ar", "de", "es", "vi", "zh_cn"]

# Explode each original row into one row per selected language.
rows = []
for i, row in tqdm(mkqa.iterrows(), total=mkqa.shape[0]):
    for lang in needed_langs:
        rows.append([lang, row["example_id"], row["queries"][lang], row["answers"][lang][0]["text"]])

filtered_dataset = pd.DataFrame(rows, columns=["lang", "example_id", "query", "answer"])
filtered_dataset.dropna(inplace=True)          # drop rows with a missing query or answer
filtered_dataset.reset_index(drop=True, inplace=True)
```
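In mkqa, some queries are unanswerable, so the answer `text` can be `None`; the `dropna` call above removes those rows after conversion. A minimal pandas sketch of that behavior (the rows below are toy values, not real dataset entries):

```python
import pandas as pd

# Toy frame mimicking the converted schema; one answer is missing (None).
df = pd.DataFrame(
    [["en", "1", "capital of France?", "Paris"],
     ["de", "1", "Hauptstadt von Frankreich?", None]],
    columns=["lang", "example_id", "query", "answer"],
)
df.dropna(inplace=True)                # drops the row whose answer is None
df.reset_index(drop=True, inplace=True)
print(len(df))  # 1
```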

**How to download**

```python
from datasets import load_dataset

data = load_dataset("dkoterwa/oasst1_filtered_retrieval")
```
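Once loaded, each row follows the schema declared above (`lang`, `example_id`, `query`, `answer`), so you can slice the data per language. A minimal sketch on plain Python dicts (the toy rows and the `filter_by_lang` helper are illustrative, not part of the dataset or the `datasets` API):

```python
# Toy rows mimicking the dataset schema: lang, example_id, query, answer.
rows = [
    {"lang": "en", "example_id": "1", "query": "capital of France?", "answer": "Paris"},
    {"lang": "de", "example_id": "1", "query": "Hauptstadt von Frankreich?", "answer": "Paris"},
    {"lang": "en", "example_id": "2", "query": "largest ocean?", "answer": "Pacific"},
]

def filter_by_lang(rows, lang):
    """Keep only the rows for a single language code."""
    return [r for r in rows if r["lang"] == lang]

en_rows = filter_by_lang(rows, "en")
print(len(en_rows))  # 2
```

With the real dataset, the same per-language selection can be done with `data["train"].filter(...)` on the `lang` column.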