---
license: cc-by-sa-3.0
dataset_info:
  features:
    - name: id
      dtype: string
    - name: category
      dtype: string
    - name: instruction
      dtype: string
    - name: response
      dtype: string
  splits:
    - name: train
      num_bytes: 1634495
      num_examples: 3232
  download_size: 1006083
  dataset_size: 1634495
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - question-answering
  - text2text-generation
language:
  - ca
pretty_name: dolly3k_ca
size_categories:
  - 1K<n<10K
---

# Dataset Card for dolly3k_ca

dolly3k_ca is a question-answering dataset in Catalan, professionally translated from a filtered version of the databricks-dolly-15k dataset in English.

## Dataset Details

### Dataset Description

dolly3k_ca (Dolly 3K instances - Catalan) is based on question-answer pairs written by humans. The dataset consists of 3,232 instances in the train split. Each instance contains an instruction or question and a single answer, and is categorized by instruction type following the InstructGPT categories.

### Dataset Sources

## Uses

dolly3k_ca is intended for instruction fine-tuning of large language models. Below are some possible uses:

### Direct Use

- Instruction Tuning: the question-answer pairs can be used to improve model performance on following instructions in general, helping adapt pre-trained models for practical use.
- Synthetic Data Generation: prompts can be submitted as few-shot examples to a language model to generate new instruction examples.
- Data Augmentation: each prompt or response can be paraphrased and restated, with the resulting text associated with the respective ground-truth sample.
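As a concrete illustration of the instruction-tuning use case, the sketch below renders one instance as a single training string. The prompt template is a hypothetical choice for illustration, not part of the dataset or any official recipe:

```python
# Minimal sketch: turning a dolly3k_ca instance into a training string.
# The template below is an assumption; any instruction-tuning format
# (e.g. a chat template) could be used instead.

PROMPT_TEMPLATE = "### Instrucció:\n{instruction}\n\n### Resposta:\n{response}"

def format_example(example: dict) -> str:
    """Render one instance as a prompt/completion string."""
    return PROMPT_TEMPLATE.format(
        instruction=example["instruction"],
        response=example["response"],
    )

example = {
    "id": "2",
    "category": "open_qa",
    "instruction": "Per què els camells poden sobreviure tant de temps sense aigua?",
    "response": "Els camells fan servir el greix de les gepes per mantenir-se hidratats.",
}
text = format_example(example)
```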

### Out-of-Scope Use

We do not identify any out-of-scope use.

## Dataset Structure

The dataset is provided in JSONL format, where each row corresponds to a question-answer pair and contains an instance identifier, the category of the instruction, the question, and the corresponding answer. Each line contains the following fields:

- `id`: text string containing the identifier of the question-answer pair.
- `category`: text string containing the instruction type, one of the InstructGPT categories.
- `instruction`: text string containing the question or instruction.
- `response`: text string containing a complete answer to the instruction.

For example:

```json
{
    "id": "2",
    "category": "open_qa",
    "instruction": "Per què els camells poden sobreviure tant de temps sense aigua?",
    "response": "Els camells fan servir el greix de les gepes per mantenir-se plens d’energia i completament hidratats durant llargs períodes de temps."
}
```

dolly3k_ca contains the train split from the original dataset.
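Reading the split therefore amounts to parsing one JSON object per line and checking the four documented fields. The sketch below uses only the standard library; the exact file name of the shard is an assumption:

```python
import json

# Sketch: parse one JSONL line of dolly3k_ca and validate the four
# fields documented above. In practice you would iterate over the
# lines of the train shard (file name assumed, e.g. "train.jsonl").

EXPECTED_FIELDS = {"id", "category", "instruction", "response"}

def parse_line(line: str) -> dict:
    record = json.loads(line)
    missing = EXPECTED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return record

line = (
    '{"id": "2", "category": "open_qa", '
    '"instruction": "Per què els camells poden sobreviure sense aigua?", '
    '"response": "Els camells fan servir el greix de les gepes."}'
)
record = parse_line(line)
```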

| Metric                           | train   |
|----------------------------------|---------|
| Input Sentences                  | 3232    |
| Average Row Length in Words      | 77.706  |
| Average Row Length in Characters | 462.819 |
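Statistics like those in the table can be recomputed from the instances themselves. The sketch below assumes "row length" counts the concatenated instruction and response; the exact definition used by the card authors is not stated, so this is an illustrative approximation:

```python
# Sketch: compute per-split statistics from a list of instances.
# "Row length" here is the instruction plus response, joined with a
# space -- an assumption about how the card's figures were derived.

def row_stats(instances: list[dict]) -> dict:
    texts = [ex["instruction"] + " " + ex["response"] for ex in instances]
    return {
        "input_sentences": len(instances),
        "avg_words": sum(len(t.split()) for t in texts) / len(texts),
        "avg_chars": sum(len(t) for t in texts) / len(texts),
    }

# Toy data to show the shape of the result.
instances = [
    {"instruction": "a b", "response": "c"},
    {"instruction": "d", "response": "e f g"},
]
stats = row_stats(instances)
```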

## Dataset Creation

### Curation Rationale

From the blog post (Conover, M. et al. (2023). Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM):

> As far as we know, all the existing well-known instruction-following models (Alpaca, Koala, GPT4All, Vicuna) suffer from this limitation, prohibiting commercial use. To get around this conundrum, we started looking for ways to create a new dataset not “tainted” for commercial use.

### Source Data

dolly3k_ca comes from the databricks-dolly-15k dataset in English, which consists of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.

#### Data Collection and Processing

Data were collected from the databricks-dolly-15k dataset.

Prior to the translation process, dolly3k_ca was filtered for unwanted instances. The filtering followed these steps:

- Instances with context: the original dataset has a `context` column which, in some instances, contains reference texts copied from Wikipedia. Because some of these instances cannot be answered without the context, and the original authors recommend removing it for downstream applications, such instances were filtered out.
- Instances with entities: the original dataset contains many US-specific references. To reduce this cultural bias and make it easier to localise the text within Catalan culture, instances were removed if a NER model identified an entity in the `instruction` column.
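The two filtering steps above can be sketched as a single predicate. The `context` column name follows databricks-dolly-15k; `has_entity` stands in for the unspecified NER model and is a hypothetical placeholder:

```python
# Sketch of the two filtering steps. `has_entity` is a placeholder
# for the (unspecified) NER model used by the curators.

def keep_instance(example: dict, has_entity) -> bool:
    if example.get("context", "").strip():
        return False  # step 1: drop instances that rely on a context passage
    if has_entity(example["instruction"]):
        return False  # step 2: drop instances whose instruction mentions an entity
    return True

rows = [
    {"instruction": "Who was George Washington?", "context": ""},
    {"instruction": "Why is the sky blue?", "context": ""},
    {"instruction": "Summarize the passage.", "context": "Some Wikipedia text."},
]
fake_ner = lambda text: "Washington" in text  # toy stand-in for a real NER model
filtered = [r for r in rows if keep_instance(r, fake_ner)]
```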

The Catalan translation process was based on the following guidelines:

- Conversion of dates and units: adapt dates, metric systems, currencies, etc. to the Catalan context, except when the task itself involves metric-system conversion.
- Personal names: translate English names with clear Catalan equivalents; otherwise, use names common in the Catalan context. Keep translated names consistent throughout the text. Do not translate the names of individual characters.
- Language style: avoid uniformity in translation, maintaining rich and varied language that reflects the language's depth; in scientific texts, maintain precision and terminology while avoiding monotony.
- Dataset logic: ensure that the internal logic of the dataset is maintained; answers should remain relevant and accurate. Factual accuracy is key in question-answer datasets.
- Error handling: correct errors in the English text during translation, unless otherwise specified for a specific dataset. Spelling errors must be corrected in Catalan.
- Avoid patterns and maintain length: avoid including patterns that might hint at the correct option, and maintain difficulty. Keep answer length as close to the original as possible. Handle scientific terminology carefully to ensure consistency.

#### Who are the source data producers?

dolly3k_ca is a professional translation of a filtered version of databricks-dolly-15k, produced by a group of translators who are native Catalan speakers. The translators were provided with the complete train split, a set of translation preferences and guidelines, and a brief explanation of the original corpus. To ensure ongoing communication, the translators were asked to provide sample translations at intervals of 500, 1,000, and 2,000 examples. These translations were then checked by a Catalan speaker on our team. In addition, the translators were encouraged to seek clarification on any specific doubts, and any necessary corrections were applied to the entire dataset.

#### Annotation process

Refer to the blog post (Conover, M. et al. (2023). Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM).

#### Who are the annotators?

Refer to the blog post (Conover, M. et al. (2023). Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM).

### Personal and Sensitive Information

This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, it includes no personal identifiers or sensitive information about private individuals.

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

[More Information Needed]

## Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

This work/research has been promoted and financed by the Government of Catalonia through the Aina project.

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

Language Technologies Unit (langtech@bsc.es) at the Barcelona Supercomputing Center (BSC).