system (HF staff) committed on
Commit 3015c15
Parent(s): 6b2a014

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed: README.md (added, +165 lines)
---
---

# Dataset Card for "crd3"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/RevanthRameshkumar/CRD3](https://github.com/RevanthRameshkumar/CRD3)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 279.93 MB
- **Size of the generated dataset:** 4020.33 MB
- **Total amount of disk used:** 4300.25 MB

### [Dataset Summary](#dataset-summary)

Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.

Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game. The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail, and semantic ties to the previous dialogues.

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### default

- **Size of downloaded dataset files:** 279.93 MB
- **Size of the generated dataset:** 4020.33 MB
- **Total amount of disk used:** 4300.25 MB

An example of 'train' looks as follows.
```
{
    "alignment_score": 3.679936647415161,
    "chunk": "Wish them a Happy Birthday on their Facebook and Twitter pages! Also, as a reminder: D&D Beyond streams their weekly show (\"And Beyond\") every Wednesday on twitch.tv/dndbeyond.",
    "chunk_id": 1,
    "turn_end": 6,
    "turn_num": 4,
    "turn_start": 4,
    "turns": {
        "names": ["SAM"],
        "utterances": ["Yesterday, guys, was D&D Beyond's first one--", "first one-year anniversary. Take two. Hey guys,", "yesterday was D&D Beyond's one-year anniversary.", "Wish them a happy birthday on their Facebook and", "Twitter pages."]
    }
}
```
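
Each instance is an ordinary nested record, so it can be explored as a plain Python dict. The sketch below uses the example instance above to show how the summary-level fields (`chunk`, `turn_start`, `turn_end`, `alignment_score`) relate to the spoken dialogue stored under `turns`; it is illustrative only and does not depend on the `datasets` library.

```python
# A single CRD3 record, copied from the example instance above.
record = {
    "alignment_score": 3.679936647415161,
    "chunk": ("Wish them a Happy Birthday on their Facebook and Twitter pages! "
              "Also, as a reminder: D&D Beyond streams their weekly show "
              "(\"And Beyond\") every Wednesday on twitch.tv/dndbeyond."),
    "chunk_id": 1,
    "turn_end": 6,
    "turn_num": 4,
    "turn_start": 4,
    "turns": {
        "names": ["SAM"],
        "utterances": [
            "Yesterday, guys, was D&D Beyond's first one--",
            "first one-year anniversary. Take two. Hey guys,",
            "yesterday was D&D Beyond's one-year anniversary.",
            "Wish them a happy birthday on their Facebook and",
            "Twitter pages.",
        ],
    },
}

# "chunk" is an abstractive summary aligned to a window of dialogue turns
# [turn_start, turn_end]; the raw spoken text lives in turns["utterances"].
spoken = " ".join(record["turns"]["utterances"])
speakers = set(record["turns"]["names"])
print(speakers)                            # {'SAM'}
print(len(record["turns"]["utterances"]))  # 5 utterance lines in this chunk
```
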

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### default
- `chunk`: a `string` feature.
- `chunk_id`: an `int32` feature.
- `turn_start`: an `int32` feature.
- `turn_end`: an `int32` feature.
- `alignment_score`: a `float32` feature.
- `turn_num`: an `int32` feature.
- `turns`: a dictionary feature containing:
  - `names`: a `string` feature.
  - `utterances`: a `string` feature.

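To make the field list concrete, here is a rough stdlib-only sketch of the record layout as Python dataclasses. The class names `Turns` and `Crd3Record` are illustrative, not part of the dataset API, and Python's `int`/`float` stand in for the schema's `int32`/`float32` types.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Turns:
    # Speaker names and raw utterance lines for one dialogue window.
    names: List[str] = field(default_factory=list)
    utterances: List[str] = field(default_factory=list)

@dataclass
class Crd3Record:
    chunk: str              # abstractive summary chunk text
    chunk_id: int           # int32 in the dataset schema
    turn_start: int         # first dialogue turn covered by the chunk
    turn_end: int           # last dialogue turn covered by the chunk
    alignment_score: float  # float32 in the dataset schema
    turn_num: int
    turns: Turns

# Illustrative instantiation with values from the example instance above.
example = Crd3Record(
    chunk="Wish them a Happy Birthday on their Facebook and Twitter pages!",
    chunk_id=1,
    turn_start=4,
    turn_end=6,
    alignment_score=3.679936647415161,
    turn_num=4,
    turns=Turns(names=["SAM"],
                utterances=["Yesterday, guys, was D&D Beyond's first one--"]),
)
```
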
### [Data Splits Sample Size](#data-splits-sample-size)

| name  |  train|validation|   test|
|-------|------:|---------:|------:|
|default|2942362|   2942362|2942362|

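The size figures quoted in the description are self-consistent: the stated total disk usage is the downloaded files plus the generated dataset, up to rounding.

```python
# Figures from the Dataset Description section, in MB.
downloaded_mb = 279.93
generated_mb = 4020.33

total_mb = downloaded_mb + generated_mb
print(round(total_mb, 2))  # 4300.26, vs. the stated 4300.25 (rounding)
```
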
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{rameshkumar2020storytelling,
    title = {Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset},
    author = {Rameshkumar, Revanth and Bailey, Peter},
    year = {2020},
    publisher = {Association for Computational Linguistics},
    conference = {ACL}
}
```

### [Contributions](#contributions)

Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.