parquet-converter committed
Commit fe8a564 · 1 Parent(s): dea2001

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,231 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language:
- - en
- language_creators:
- - found
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- pretty_name: Coarse Discourse
- size_categories:
- - 100K<n<1M
- source_datasets:
- - original
- task_categories:
- - text-classification
- task_ids:
- - multi-class-classification
- paperswithcode_id: coarse-discourse
- dataset_info:
-   features:
-   - name: title
-     dtype: string
-   - name: is_self_post
-     dtype: bool
-   - name: subreddit
-     dtype: string
-   - name: url
-     dtype: string
-   - name: majority_link
-     dtype: string
-   - name: is_first_post
-     dtype: bool
-   - name: majority_type
-     dtype: string
-   - name: id_post
-     dtype: string
-   - name: post_depth
-     dtype: int32
-   - name: in_reply_to
-     dtype: string
-   - name: annotations
-     sequence:
-     - name: annotator
-       dtype: string
-     - name: link_to_post
-       dtype: string
-     - name: main_type
-       dtype: string
-   splits:
-   - name: train
-     num_bytes: 45443464
-     num_examples: 116357
-   download_size: 4636201
-   dataset_size: 45443464
- ---
-
- # Dataset Card for "coarse_discourse"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:**
- - **Repository:** https://github.com/google-research-datasets/coarse-discourse
- - **Paper:** [Characterizing Online Discussion Using Coarse Discourse Sequences](https://research.google/pubs/pub46055/)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 4.42 MB
- - **Size of the generated dataset:** 43.34 MB
- - **Total amount of disk used:** 47.76 MB
-
- ### Dataset Summary
-
- A large corpus of discourse annotations and relations on ~10K forum threads.
-
- We collect and release a corpus of over 9,000 threads comprising over 100,000 comments manually annotated via paid crowdsourcing with discourse acts and randomly sampled from the site Reddit.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 4.42 MB
- - **Size of the generated dataset:** 43.34 MB
- - **Total amount of disk used:** 47.76 MB
-
- An example of 'train' looks as follows.
- ```
- {
-     "annotations": {
-         "annotator": ["fc96a15ab87f02dd1998ff55a64f6478", "e9e4b3ab355135fa954badcc06bfccc6", "31ac59c1734c1547d4d0723ff254c247"],
-         "link_to_post": ["", "", ""],
-         "main_type": ["elaboration", "elaboration", "elaboration"]
-     },
-     "id_post": "t1_c9b30i1",
-     "in_reply_to": "t1_c9b2nyd",
-     "is_first_post": false,
-     "is_self_post": true,
-     "majority_link": "t1_c9b2nyd",
-     "majority_type": "elaboration",
-     "post_depth": 2,
-     "subreddit": "100movies365days",
-     "title": "DTX120: #87 - Nashville",
-     "url": "https://www.reddit.com/r/100movies365days/comments/1bx6qw/dtx120_87_nashville/"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `title`: a `string` feature.
- - `is_self_post`: a `bool` feature.
- - `subreddit`: a `string` feature.
- - `url`: a `string` feature.
- - `majority_link`: a `string` feature.
- - `is_first_post`: a `bool` feature.
- - `majority_type`: a `string` feature.
- - `id_post`: a `string` feature.
- - `post_depth`: an `int32` feature.
- - `in_reply_to`: a `string` feature.
- - `annotations`: a dictionary feature containing:
-   - `annotator`: a `string` feature.
-   - `link_to_post`: a `string` feature.
-   - `main_type`: a `string` feature.
-
- ### Data Splits
-
- | name |train |
- |-------|-----:|
- |default|116357|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @inproceedings{coarsediscourse, title={Characterizing Online Discussion Using Coarse Discourse Sequences}, author={Zhang, Amy X. and Culbertson, Bryan and Paritosh, Praveen}, booktitle={Proceedings of the 11th International AAAI Conference on Weblogs and Social Media}, series={ICWSM '17}, year={2017}, location = {Montreal, Canada} }
- ```
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu) for adding this dataset.
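The `majority_type` field in the card above is the discourse act chosen by most annotators, so it can be recomputed from the per-annotator labels in `annotations.main_type`. A minimal sanity-check sketch using only the card's sample instance (abbreviated to the fields needed here):

```python
from collections import Counter

# Sample "train" instance from the dataset card, reduced to the relevant fields.
example = {
    "annotations": {
        "annotator": [
            "fc96a15ab87f02dd1998ff55a64f6478",
            "e9e4b3ab355135fa954badcc06bfccc6",
            "31ac59c1734c1547d4d0723ff254c247",
        ],
        "main_type": ["elaboration", "elaboration", "elaboration"],
    },
    "majority_type": "elaboration",
}

# Recompute the majority discourse act from the per-annotator labels.
majority, votes = Counter(example["annotations"]["main_type"]).most_common(1)[0]
assert majority == example["majority_type"]
```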
 
coarse_discourse.py DELETED
@@ -1,116 +0,0 @@
- """TODO(coarse_discourse): Add a description here."""
-
-
- import json
- import os
-
- import datasets
-
-
- # TODO(coarse_discourse): BibTeX citation
- _CITATION = """\
- @inproceedings{coarsediscourse, title={Characterizing Online Discussion Using Coarse Discourse Sequences}, author={Zhang, Amy X. and Culbertson, Bryan and Paritosh, Praveen}, booktitle={Proceedings of the 11th International AAAI Conference on Weblogs and Social Media}, series={ICWSM '17}, year={2017}, location = {Montreal, Canada} }
- """
-
- # TODO(coarse_discourse):
- _DESCRIPTION = """\
- This dataset contains discourse annotations and relations on threads from Reddit during 2016.
- """
- _URL = "https://github.com/google-research-datasets/coarse-discourse/archive/master.zip"
-
-
- class CoarseDiscourse(datasets.GeneratorBasedBuilder):
-     """TODO(coarse_discourse): Short description of my dataset."""
-
-     # TODO(coarse_discourse): Set up version.
-     VERSION = datasets.Version("0.1.0")
-
-     def _info(self):
-         # TODO(coarse_discourse): Specifies the datasets.DatasetInfo object
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     # These are the features of your dataset like images, labels ...
-                     "title": datasets.Value("string"),
-                     "is_self_post": datasets.Value("bool"),
-                     "subreddit": datasets.Value("string"),
-                     "url": datasets.Value("string"),
-                     "majority_link": datasets.Value("string"),
-                     "is_first_post": datasets.Value("bool"),
-                     "majority_type": datasets.Value("string"),
-                     "id_post": datasets.Value("string"),
-                     "post_depth": datasets.Value("int32"),
-                     "in_reply_to": datasets.Value("string"),
-                     "annotations": datasets.features.Sequence(
-                         {
-                             "annotator": datasets.Value("string"),
-                             "link_to_post": datasets.Value("string"),
-                             "main_type": datasets.Value("string"),
-                         }
-                     ),
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://github.com/google-research-datasets/coarse-discourse",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(coarse_discourse): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         dl_dir = dl_manager.download_and_extract(_URL)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": os.path.join(dl_dir, "coarse-discourse-master", "coarse_discourse_dataset.json")
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         # TODO(coarse_discourse): Yields (key, example) tuples from the dataset
-         with open(filepath, encoding="utf-8") as f:
-             for id_, row in enumerate(f):
-                 data = json.loads(row)
-                 url = data.get("url", "")
-                 is_self_post = data.get("is_self_post", "")
-                 subreddit = data.get("subreddit", "")
-                 title = data.get("title", "")
-                 posts = data.get("posts", "")
-                 for id1, post in enumerate(posts):
-                     maj_link = post.get("majority_link", "")
-                     maj_type = post.get("majority_type", "")
-                     id_post = post.get("id", "")
-                     is_first_post = post.get("is_first_post", "")
-                     post_depth = post.get("post_depth", -1)
-                     in_reply_to = post.get("in_reply_to", "")
-                     annotations = post["annotations"]
-                     annotators = [annotation.get("annotator", "") for annotation in annotations]
-                     main_types = [annotation.get("main_type", "") for annotation in annotations]
-                     link_posts = [annotation.get("link_to_post", "") for annotation in annotations]
-
-                     yield str(id_) + "_" + str(id1), {
-                         "title": title,
-                         "is_self_post": is_self_post,
-                         "subreddit": subreddit,
-                         "url": url,
-                         "majority_link": maj_link,
-                         "is_first_post": is_first_post,
-                         "majority_type": maj_type,
-                         "id_post": id_post,
-                         "post_depth": post_depth,
-                         "in_reply_to": in_reply_to,
-                         "annotations": {"annotator": annotators, "link_to_post": link_posts, "main_type": main_types},
-                     }
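The deleted script flattens each thread's `posts` list into one example per post, keyed by `<thread index>_<post index>`, and turns the list of annotation dicts into a dict of parallel lists. A stdlib-only sketch of that flattening on a synthetic two-post thread (field names taken from the script; all values here are made up):

```python
# Synthetic thread in the shape _generate_examples expects (one JSON object
# per line of coarse_discourse_dataset.json); values are hypothetical.
thread = {
    "title": "Example thread",
    "subreddit": "askscience",
    "posts": [
        {"id": "t3_aaa", "is_first_post": True,
         "annotations": [{"annotator": "a1", "main_type": "question"}]},
        {"id": "t1_bbb", "in_reply_to": "t3_aaa", "post_depth": 1,
         "annotations": [{"annotator": "a1", "main_type": "answer"},
                         {"annotator": "a2", "main_type": "answer"}]},
    ],
}

rows = []
thread_idx = 0  # index of the thread within the JSON-lines file
for post_idx, post in enumerate(thread["posts"]):
    anns = post.get("annotations", [])
    rows.append({
        "key": f"{thread_idx}_{post_idx}",         # same key scheme as the script
        "title": thread["title"],
        "id_post": post.get("id", ""),
        "post_depth": post.get("post_depth", -1),  # -1 for top-level posts
        "in_reply_to": post.get("in_reply_to", ""),
        # Sequence-of-dicts becomes dict-of-lists, as in the script's yield.
        "annotations": {
            "annotator": [a.get("annotator", "") for a in anns],
            "main_type": [a.get("main_type", "") for a in anns],
        },
    })
```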
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "dataset contains discourse annotation and relation on threads from reddit during 2016\n", "citation": "@inproceedings{coarsediscourse, title={Characterizing Online Discussion Using Coarse Discourse Sequences}, author={Zhang, Amy X. and Culbertson, Bryan and Paritosh, Praveen}, booktitle={Proceedings of the 11th International AAAI Conference on Weblogs and Social Media}, series={ICWSM '17}, year={2017}, location = {Montreal, Canada} }\n", "homepage": "https://github.com/google-research-datasets/coarse-discourse", "license": "", "features": {"title": {"dtype": "string", "id": null, "_type": "Value"}, "is_self_post": {"dtype": "bool", "id": null, "_type": "Value"}, "subreddit": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "majority_link": {"dtype": "string", "id": null, "_type": "Value"}, "is_first_post": {"dtype": "bool", "id": null, "_type": "Value"}, "majority_type": {"dtype": "string", "id": null, "_type": "Value"}, "id_post": {"dtype": "string", "id": null, "_type": "Value"}, "post_depth": {"dtype": "int32", "id": null, "_type": "Value"}, "in_reply_to": {"dtype": "string", "id": null, "_type": "Value"}, "annotations": {"feature": {"annotator": {"dtype": "string", "id": null, "_type": "Value"}, "link_to_post": {"dtype": "string", "id": null, "_type": "Value"}, "main_type": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "supervised_keys": null, "builder_name": "coarse_discourse", "config_name": "default", "version": {"version_str": "0.1.0", "description": null, "datasets_version_to_prepare": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 45443464, "num_examples": 116357, "dataset_name": "coarse_discourse"}}, "download_checksums": {"https://github.com/google-research-datasets/coarse-discourse/archive/master.zip": {"num_bytes": 4636201, "checksum": "8e1e685c5907a6d654be8b8ed307522fd91ca919f503713fd1627fbe3d43c3cd"}}, "download_size": 4636201, "dataset_size": 45443464, "size_in_bytes": 50079665}}
 
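The deleted `dataset_infos.json` records a SHA-256 checksum for the source zip under `download_checksums`. A download of the archive could be verified against it with `hashlib` (a sketch; the local filename `master.zip` is an assumption):

```python
import hashlib

# Checksum recorded in dataset_infos.json for the source archive.
EXPECTED_SHA256 = "8e1e685c5907a6d654be8b8ed307522fd91ca919f503713fd1627fbe3d43c3cd"

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Hypothetical local path for the downloaded archive:
# sha256_of("master.zip") == EXPECTED_SHA256
```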
default/coarse_discourse-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16eadb25172ab88f551e9b35dbf0f4900e84cd345c2e504ce058b7aa791db19c
+ size 4256574
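The added parquet file is stored as a Git LFS pointer rather than the data itself: three `key value` lines giving the pointer spec version, the SHA-256 of the real content, and its byte size. Parsing one is a short sketch over the pointer shown above:

```python
# The LFS pointer text added in this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:16eadb25172ab88f551e9b35dbf0f4900e84cd345c2e504ce058b7aa791db19c
size 4256574
"""

# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.splitlines())
assert fields["version"] == "https://git-lfs.github.com/spec/v1"
assert fields["oid"].startswith("sha256:")  # digest of the actual parquet payload
assert int(fields["size"]) == 4256574       # payload size in bytes
```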