saattrupdan committed
Commit a33abfb
1 Parent(s): b222475

Update README.md

Files changed (1):
  1. README.md +126 -18
README.md CHANGED
@@ -1,21 +1,129 @@
  ---
- dataset_info:
-   features:
-   - name: doc
-     dtype: string
-   - name: subreddit
-     dtype: string
-   - name: language
-     dtype: string
-   - name: language_confidence
-     dtype: float64
-   splits:
-   - name: train
-     num_bytes: 3763131485
-     num_examples: 13481537
-   download_size: 2341424578
-   dataset_size: 3763131485
  ---
- # Dataset Card for "scandi-reddit"

- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  ---
+ pretty_name: ScandiReddit
+ language:
+ - da
+ - sv
+ - no
+ - is
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - multilingual
+ size_categories:
+ - 10M<n<100M
+ task_categories:
+ - text-generation
+ - fill-mask
+ task_ids:
+ - language-modeling
  ---

+ # Dataset Card for ScandiReddit
+
+ ## Dataset Description
+
+ - **Repository:** <https://github.com/alexandrainst/ScandiReddit>
+ - **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
+ - **Size of downloaded dataset files:** 2341 MB
+ - **Size of the generated dataset:** 3594 MB
+ - **Total amount of disk used:** 5935 MB
+
+ ### Dataset Summary
+
+ ScandiReddit is a corpus of Reddit comments written in Danish, Swedish, Norwegian and Icelandic, built from the PushShift Reddit comment dumps and annotated with an automatically detected language and an associated confidence score.
+
+
+ ### Supported Tasks and Leaderboards
+
+ Training language models is the intended task for this dataset. No leaderboard is active at this point.
+
+
+ ### Languages
+
+ The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian (`no`) and Icelandic (`is`).
+
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ - **Size of downloaded dataset files:** 2341 MB
+ - **Size of the generated dataset:** 3594 MB
+ - **Total amount of disk used:** 5935 MB
+
+ An example from the dataset looks as follows.
+ ```
+ {
+   'doc': 'Bergen er ødelagt. Det er ikke moro mer.',
+   'subreddit': 'Norway',
+   'language': 'da',
+   'language_confidence': 0.7472341656684875
+ }
+ ```
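+
+ The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the dataset is hosted under the `alexandrainst/scandi-reddit` repository ID (the ID is not stated on this card):
+
+ ```python
+ from datasets import load_dataset
+
+ # The repository ID below is an assumption, not stated on this card.
+ dataset = load_dataset("alexandrainst/scandi-reddit", split="train")
+
+ # Inspect the first comment together with its detected language.
+ print(dataset[0]["doc"], dataset[0]["language"])
+ ```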
+
+ ### Data Fields
+
+ The data fields are the same among all splits.
+
+ - `doc`: the Reddit comment text (a `string` feature).
+ - `subreddit`: the subreddit the comment was posted in (a `string` feature).
+ - `language`: the automatically detected language of the comment (a `string` feature).
+ - `language_confidence`: the confidence score of the language detection (a `float64` feature).
+
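+ Since the language labels are produced automatically, the `language_confidence` field can be used to discard uncertain detections. A minimal sketch restricting the dataset to high-confidence Danish comments (the repository ID and the `0.9` threshold are illustrative assumptions):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("alexandrainst/scandi-reddit", split="train")
+
+ # Keep only comments detected as Danish with high confidence;
+ # the 0.9 threshold is an arbitrary example value.
+ danish = dataset.filter(
+     lambda example: example["language"] == "da"
+     and example["language_confidence"] >= 0.9
+ )
+ print(f"{len(danish):,} high-confidence Danish comments")
+ ```
+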
+ ### Language Distribution
+
+ | language | count |
+ |----------|--------:|
+ | sv | 6967420 |
+ | da | 4965195 |
+ | no | 1340470 |
+ | is | 206689 |
+
+ ### Top-20 Subreddit Distribution
+
+ | subreddit | count |
+ |--------------------|--------:|
+ | sweden | 4881483 |
+ | Denmark | 3579178 |
+ | norge | 1281655 |
+ | svenskpolitik | 771960 |
+ | InfluencergossipDK | 649910 |
+ | swedishproblems | 339683 |
+ | Iceland | 183488 |
+ | dkfinance | 113860 |
+ | unket | 81077 |
+ | DanishEnts | 69055 |
+ | dankmark | 62928 |
+ | swedents | 58576 |
+ | scandinavia | 57136 |
+ | Allsvenskan | 56006 |
+ | Gothenburg | 54395 |
+ | stockholm | 51016 |
+ | ISKbets | 47944 |
+ | Sverige | 39552 |
+ | SWARJE | 34691 |
+ | GossipDK | 29332 |
+
+
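+ Both distribution tables above can be recomputed directly from the loaded dataset. A minimal sketch using `collections.Counter` (repository ID assumed as before):
+
+ ```python
+ from collections import Counter
+
+ from datasets import load_dataset
+
+ dataset = load_dataset("alexandrainst/scandi-reddit", split="train")
+
+ # Tally the `language` and `subreddit` columns to reproduce the tables.
+ language_counts = Counter(dataset["language"])
+ subreddit_counts = Counter(dataset["subreddit"])
+ print(language_counts.most_common())
+ print(subreddit_counts.most_common(20))
+ ```
+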
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Few open-source social media datasets exist for the Scandinavian languages.
+
+ ### Source Data
+
+ The raw Reddit data was collected through [PushShift](https://files.pushshift.io/reddit/comments/).
+
+
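+ The card does not name the model that produced the `language` and `language_confidence` fields. As an illustration of the kind of language filtering involved, the sketch below uses fastText's pretrained `lid.176.bin` identification model; this is an assumption, and the actual detector, thresholds and cleaning steps used for ScandiReddit may differ.
+
+ ```python
+ import fasttext
+
+ # Pretrained language-ID model (an assumption; the card does not name the
+ # detector). Available at:
+ # https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin
+ model = fasttext.load_model("lid.176.bin")
+
+ def detect(comment: str) -> tuple[str, float]:
+     """Return the predicted language code and its confidence for a comment."""
+     labels, scores = model.predict(comment.replace("\n", " "))
+     return labels[0].removeprefix("__label__"), float(scores[0])
+
+ language, confidence = detect("Bergen er ødelagt. Det er ikke moro mer.")
+ print(language, confidence)
+ ```
+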
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [The Alexandra
+ Institute](https://alexandra.dk/) curated this dataset.
+
+ ### Licensing Information
+
+ The dataset is licensed under the [CC BY 4.0
+ license](https://creativecommons.org/licenses/by/4.0/).