kelechi committed
Commit efaccae · 1 Parent(s): b030c5b

added dataset card

Files changed (1): README.md (new file, +75 lines)
---
language:
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual

license: apache-2.0
---
# Dataset Summary
This is the corpus on which [AfriBERTa](https://huggingface.co/castorini/afriberta_large) was trained.
The dataset covers 11 languages: Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
The data comes mostly from the BBC news website, but some languages also include data from Common Crawl.

# Supported Tasks and Leaderboards
The AfriBERTa corpus was primarily intended for pre-training language models.
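
As a rough illustration of that use case, the sketch below tokenizes one language configuration with the released AfriBERTa tokenizer, a typical first step before masked-language-model pre-training. It assumes the `datasets` and `transformers` libraries are installed (the loading pattern is detailed in the next section), and the `max_length` value is illustrative only, not the setting used in the paper:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load one language configuration and the AfriBERTa tokenizer.
dataset = load_dataset("castorini/afriberta", "somali", split="train")
tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_large")

# Tokenize the raw text in batches, dropping the original columns.
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)
```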

# Load Dataset
An example of loading the train split of the Somali corpus:
```python
from datasets import load_dataset

dataset = load_dataset("castorini/afriberta", "somali", split="train")
```

An example of loading the test split of the Nigerian Pidgin corpus:
```python
dataset = load_dataset("castorini/afriberta", "pidgin", split="test")
```
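
Configuration names for the other languages follow the same pattern. If in doubt, the available configurations can be listed programmatically (a minimal sketch, assuming a recent version of the `datasets` library):

```python
from datasets import get_dataset_config_names

# Print every language configuration this dataset exposes.
print(get_dataset_config_names("castorini/afriberta"))
```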

# Data Fields

The data fields are:

- id: id of the example
- text: content as a string
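
A single record can be inspected directly once loaded (a minimal sketch; the 200-character preview is arbitrary):

```python
from datasets import load_dataset

dataset = load_dataset("castorini/afriberta", "somali", split="train")

# Every example is a dict with "id" and "text" keys.
example = dataset[0]
print(example["id"])
print(example["text"][:200])
```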

# Data Splits
Each language has a train and test split, with varying sizes.
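
Loading a configuration without a `split` argument returns both splits, so the sizes can be checked directly (a sketch, using the `somali` configuration as an example):

```python
from datasets import load_dataset

# Omitting `split` returns a DatasetDict containing both splits.
dataset = load_dataset("castorini/afriberta", "somali")
print({split: ds.num_rows for split, ds in dataset.items()})
```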

# Considerations for Using the Data

## Discussion of Biases
Since the majority of the data is obtained from the BBC news website, models trained on this dataset are likely to be biased towards the news domain.

Also, since some of the data is obtained from Common Crawl, care should be taken (especially with text generation models), as personal and sensitive information might be present.

# Citation Information
```bibtex
@inproceedings{ogueji-etal-2021-small,
    title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages",
    author = "Ogueji, Kelechi and
      Zhu, Yuxin and
      Lin, Jimmy",
    booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.mrl-1.11",
    pages = "116--126",
}
```

# Contributions
Thanks to [keleog](https://github.com/keleog).