import json
from string import Template

import datasets

_CITATION = ""       # TODO: @theyorubayesian
_DESCRIPTION = (
    "A collection of passages culled from news websites for Cross-Lingual "
    "Information Retrieval for African Languages."
)
_HOMEPAGE = "https://github.com/castorini/ciral"
_LICENSE = "Apache License 2.0"
_VERSION = "1.0.0"
_LANGUAGES = [
    "hausa",
    "somali",
    "swahili",
    "yoruba",
    "combined"
]
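# Relative path template for each language's passage file within this repository.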
_DATASET_URL = Template("./passages-v1.0/${language}_passages.jsonl")


class CiralPassages(datasets.GeneratorBasedBuilder):
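    # One BuilderConfig per language, plus a "combined" config that exposes
    # every language as its own split.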
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            version=datasets.Version(_VERSION),
            name=language,
            description=f"CIRAL passages for language: {language}"
        ) for language in _LANGUAGES
    ]
    DEFAULT_CONFIG_NAME = "combined"

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            citation=_CITATION,
            features=datasets.Features({
                "docid": datasets.Value("string"),
                "title": datasets.Value("string"),
                "text": datasets.Value("string"),
                "url": datasets.Value("string")
            }),
            homepage=_HOMEPAGE,
            license=_LICENSE
        )

    def _split_generators(self, dl_manager: datasets.DownloadManager):
        language = self.config.name
        if language == "combined":
            # "combined": download every individual language file and expose
            # one split per language.
            language_files = dl_manager.download_and_extract({
                _language: _DATASET_URL.substitute(language=_language)
                for _language in _LANGUAGES[:-1]
            })

            splits = [
                datasets.SplitGenerator(
                    name=_language, gen_kwargs={"filepath": language_files[_language]}
                ) for _language in _LANGUAGES[:-1]
            ]
            return splits
        else:
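            # Single language: download one passage file and expose it as a
            # single "train" split.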
            language_file = dl_manager.download_and_extract(
                _DATASET_URL.substitute(language=language))

            splits = [
                datasets.SplitGenerator(
                    name="train",
                    gen_kwargs={"filepath": language_file}
                )
            ]
            return splits

    def _generate_examples(self, filepath: str):
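        # Each line of the JSONL file is one passage record; its docid serves
        # as the example key.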
        with open(filepath, encoding="utf-8") as f:
            for line in f:
                data = json.loads(line)
                yield data["docid"], data
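
# Usage sketch: "ciral_passages.py" below is a placeholder for wherever this
# loading script lives; depending on the installed `datasets` version,
# script-based loading may require trust_remote_code=True or be unsupported.
#
#   from datasets import load_dataset
#
#   # Single language: passages are exposed under the "train" split.
#   yoruba = load_dataset("ciral_passages.py", "yoruba", split="train")
#
#   # "combined" (the default config): one split per language.
#   combined = load_dataset("ciral_passages.py", "combined")
#   print(combined["hausa"][0]["docid"])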