jsaizant committed on
Commit 6abfa60
1 Parent(s): 84daf0b

Update README.md

Files changed (1)
  1. README.md +212 -24
README.md CHANGED
@@ -1,24 +1,212 @@
- ---
- license: cc-by-sa-3.0
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: category
-     dtype: string
-   - name: instruction
-     dtype: string
-   - name: response
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 1634495
-     num_examples: 3232
-   download_size: 1006083
-   dataset_size: 1634495
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- ---
+ ---
+ license: cc-by-sa-3.0
+ dataset_info:
+   features:
+   - name: id
+     dtype: string
+   - name: category
+     dtype: string
+   - name: instruction
+     dtype: string
+   - name: response
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 1634495
+     num_examples: 3232
+   download_size: 1006083
+   dataset_size: 1634495
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+ task_categories:
+ - question-answering
+ - text2text-generation
+ language:
+ - ca
+ pretty_name: dolly3k_ca
+ size_categories:
+ - 1K<n<10K
+ ---
+
+
+ # Dataset Card for dolly3k_ca
+
+ <!-- Provide a quick summary of the dataset. -->
+
+ dolly3k_ca is a question-answering dataset in Catalan, professionally translated from a filtered version of the English [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset.
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ <!-- Provide a longer summary of what this dataset is. -->
+
+ dolly3k_ca (Dolly 3K instances - Catalan) is based on question-answer pairs written by humans. The dataset consists of 3,232 instances in the train split. Each instance contains an instruction or question and one answer, and is categorized according to the type of instruction, following the InstructGPT categories.
+
+ - **Curated by:** [Language Technologies Unit | BSC-CNS](https://www.bsc.es/discover-bsc/organisation/research-departments/language-technologies-unit)
+ - **Funded by:** [Projecte AINA](https://projecteaina.cat/)
+ <!-- - **Shared by [optional]:** [More Information Needed] -->
+ - **Language(s) (NLP):** Catalan
+ - **License:** [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) ([Original](https://huggingface.co/datasets/databricks/databricks-dolly-15k))
+
+ ### Dataset Sources
+
+ <!-- Provide the basic links for the dataset. -->
+
+ - **Repository:** [HuggingFace](https://huggingface.co/projecte-aina)
+ <!-- - **Paper [optional]:** [More Information Needed] -->
+ <!-- - **Demo [optional]:** [More Information Needed] -->
+
+ ## Uses
+
+ <!-- Address questions around how the dataset is intended to be used. -->
+
+ dolly3k_ca is intended for instruction fine-tuning of large language models. Below are some possible uses:
+
+ ### Direct Use
+
+ <!-- This section describes suitable use cases for the dataset. -->
+
+ - Instruction Tuning: The question-answer pairs can be used to improve model performance on instruction following in general, thus helping adapt pre-trained models for practical use (see the sketch after this list).
+ - Synthetic Data Generation: Prompts can be submitted as few-shot examples to a language model to generate further examples of instructions.
+ - Data Augmentation: Each prompt or response can be paraphrased and restated, with the resulting text associated with the respective ground-truth sample.
+
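+ For instance, a minimal sketch of rendering one record as a single training string; the prompt template below is an illustrative assumption, not a format prescribed by this dataset:
+
+ ```python
+ # Render one dolly3k_ca record as a fine-tuning prompt string.
+ # The "### Instrucció / ### Resposta" template is an assumed example format.
+ PROMPT_TEMPLATE = "### Instrucció:\n{instruction}\n\n### Resposta:\n{response}"
+
+ # Record taken from the example in this card.
+ # (EN: "Why can camels survive so long without water?")
+ record = {
+     "id": "2",
+     "category": "open_qa",
+     "instruction": "Per què els camells poden sobreviure tant de temps sense aigua?",
+     "response": "Els camells fan servir el greix de les gepes per mantenir-se plens d’energia i completament hidratats durant llargs períodes de temps.",
+ }
+
+ # Keys not named in the template ("id", "category") are ignored by str.format.
+ print(PROMPT_TEMPLATE.format(**record))
+ ```
+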
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
+
+ We do not identify any out-of-scope use.
+
+ ## Dataset Structure
+
+ <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
+
+ The dataset is provided in JSONL format, where each row corresponds to a question-answer pair and contains an instance identifier, the category of the instruction, the question, and the corresponding answer. Each line contains the following fields:
+
+ - `id`: text string containing the identifier of the question-answer pair.
+ - `category`: text string containing the instruction type, from one of the InstructGPT categories.
+ - `instruction`: text string containing the question or instruction.
+ - `response`: text string containing a complete answer to the instruction.
+
+ For example:
+
+ ```json
+ {
+     "id": "2",
+     "category": "open_qa",
+     "instruction": "Per què els camells poden sobreviure tant de temps sense aigua?",
+     "response": "Els camells fan servir el greix de les gepes per mantenir-se plens d’energia i completament hidratats durant llargs períodes de temps."
+ }
+ ```
+
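+ A quick way to inspect the corpus is through the Hugging Face `datasets` library; note that `projecte-aina/dolly3k_ca` below is an assumed Hub ID, so adjust it to the actual repository path:
+
+ ```python
+ from datasets import load_dataset
+
+ # "projecte-aina/dolly3k_ca" is an assumed repository ID for this dataset.
+ ds = load_dataset("projecte-aina/dolly3k_ca", split="train")
+ print(ds)      # features: id, category, instruction, response
+ print(ds[0])   # first question-answer pair
+ ```
+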
+ dolly3k_ca contains the train split from the original dataset.
+
+ | Metric                           | train   |
+ |----------------------------------|---------|
+ | Input Sentences                  | 3232    |
+ | Average Row Length in Words      | 77.706  |
+ | Average Row Length in Characters | 462.819 |
+
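+ As a rough sketch of how these averages could be reproduced (the card does not specify which fields count toward "row length"; instruction plus response is assumed here, along with the Hub ID above):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("projecte-aina/dolly3k_ca", split="train")  # assumed Hub ID
+
+ # Assume a "row" is the concatenation of instruction and response.
+ rows = [ex["instruction"] + " " + ex["response"] for ex in ds]
+ avg_words = sum(len(r.split()) for r in rows) / len(rows)
+ avg_chars = sum(len(r) for r in rows) / len(rows)
+ print(f"{len(rows)} rows | {avg_words:.3f} words | {avg_chars:.3f} characters")
+ ```
+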
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ <!-- Motivation for the creation of this dataset. -->
+
+ From the blog post (Conover, M. et al. (2023). *Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM*):
+ > As far as we know, all the existing well-known instruction-following models ([Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html), [Koala](https://bair.berkeley.edu/blog/2023/04/03/koala/), [GPT4All](https://github.com/nomic-ai/gpt4all), [Vicuna](https://vicuna.lmsys.org/)) suffer from this limitation, prohibiting commercial use. To get around this conundrum, we started looking for ways to create a new dataset not “tainted” for commercial use.
+
+ ### Source Data
+
+ <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
+
+ dolly3k_ca comes from the [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset in English, which consists of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
+
+ #### Data Collection and Processing
+
+ <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
+
+ Data were collected from the [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset.
+
+ Prior to the translation process, dolly3k_ca was filtered for unwanted instances, following these steps (a sketch is given after this list):
+ - Instances with `context`. The original dataset has a `context` column which, in some instances, contains reference texts copied from Wikipedia. Because some of these instances cannot be answered without the `context`, and the original authors recommend that users remove it for downstream applications, such instances were discarded.
+ - Instances with entities. The original dataset has many US references. In order to reduce this cultural bias and make it easier to localise the text in the Catalan culture, instances were removed if they contained an entity in the `instruction` column, as identified by a NER model.
+
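+ A minimal sketch of these two filtering steps, assuming pandas and a spaCy NER pipeline (the card does not name the tools actually used):
+
+ ```python
+ import pandas as pd
+ import spacy
+
+ nlp = spacy.load("en_core_web_sm")  # assumed English NER model
+
+ # The original corpus is distributed as JSONL.
+ df = pd.read_json("databricks-dolly-15k.jsonl", lines=True)
+
+ # Step 1: drop instances that rely on the `context` column.
+ df = df[df["context"].fillna("").str.strip() == ""].drop(columns=["context"])
+
+ # Step 2: drop instances whose instruction mentions a named entity.
+ def has_entity(text: str) -> bool:
+     return len(nlp(text).ents) > 0
+
+ df = df[~df["instruction"].map(has_entity)]
+ ```
+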
+ The Catalan translation process was based on the following guidelines:
+ - **Conversion of dates and units**: Adapt dates, metric systems, currencies, etc. to the Catalan context, except when the task involves metric system conversion.
+ - **Personal names**: Translate English names with clear Catalan equivalents; otherwise, use common names in the Catalan context. Keep translated names consistent throughout the text. Do not translate the names of individual characters.
+ - **Language style**: Avoid uniformity in translation, maintaining a rich and varied language that reflects our linguistic depth. In scientific texts, maintain precision and terminology while avoiding monotony.
+ - **Dataset logic**: Ensure that the internal logic of the dataset is maintained; answers should remain relevant and accurate. Factual accuracy is key in question-answer datasets.
+ - **Error handling**: Correct errors in the English text during translation, unless otherwise specified for the specific dataset. Spelling errors must be corrected in Catalan.
+ - **Avoid patterns and maintain length**: Avoid introducing patterns that might reveal the correct option, and maintain the original difficulty. Keep the length of answers as close to the original text as possible. Handle scientific terminology carefully to ensure consistency.
+
+ #### Who are the source data producers?
+
+ <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
+
+ dolly3k_ca is a professional translation of a filtered version of [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k), made by a group of translators who are native speakers of Catalan. The translators were provided with the complete train split, as well as a set of translation preferences and guidelines, together with a brief explanation of the original corpus. To ensure ongoing communication, the translators were asked to provide sample translations at intervals of 500, 1,000 and 2,000 examples. These translations were then checked by a Catalan speaker on our team. In addition, the translators were encouraged to seek clarification on any specific doubts, and any necessary corrections were applied to the entire dataset.
+
+ #### Annotation process
+
+ <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
+
+ Refer to the blog post (Conover, M. et al. (2023). *Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM*).
+
+ #### Who are the annotators?
+
+ <!-- This section describes the people or systems who created the annotations. -->
+
+ Refer to the blog post (Conover, M. et al. (2023). *Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM*).
+
+ #### Personal and Sensitive Information
+
+ <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
+
+ This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, it includes no personal identifiers of private individuals and no sensitive information.
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ This work has been promoted and financed by the Government of Catalonia through the [Aina project](https://projecteaina.cat/).
+
+ ## Dataset Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Dataset Card Contact
+
+ Language Technologies Unit (langtech@bsc.es) at the Barcelona Supercomputing Center (BSC).