---
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: InstruCAT
---

# Dataset Card for InstruCAT

## Dataset Description

- **Homepage:** [Projecte AINA](https://projecteaina.cat/tech/)
- **Repository:** [HuggingFace](https://huggingface.co/projecte-aina)
- **Point of Contact:** langtech@bsc.es

### Dataset Summary

InstruCAT is a dataset consisting of 216,826 instructions in Catalan.

It contains data converted to instruction format from the following datasets (a conversion sketch follows the list):

- caBreu: The instructions were created as summarization tasks. There are two summarization categories in the dataset: extreme and abstractive. Extreme summarization condenses a text into a single sentence, while abstractive summarization produces a shorter text of around 3-5 sentences.
- CatalanQA: The instructions correspond to questions from CatalanQA.
- CaWikiTC: The instructions were created as text classification tasks in two different ways, with a 70%-30% distribution. The first asks for the category of a given text. The second asks, in the form of an alternative question, whether a given text belongs to a certain category.
- ceil: The instructions were created as Named Entity Recognition tasks in two different ways, with a 70%-30% distribution. The first asks to list all the named entities found in a text. The second asks to list only the named entities of a particular category.
- CoqCat: The instructions correspond to the first questions of CoqCat conversations.
- GuiaCat: The instructions were created as sentiment analysis tasks.
- IntoxiCat: The instructions were created as binary classification tasks. The task is to determine whether a given text is toxic or not.
- NLUCat: The instructions were created as phrase generation tasks: the task is to generate a phrase that expresses a given intent.
- Parafraseja: The instructions were created as text generation tasks. The task is to generate a text equivalent in meaning to a given text.
- PAWS-ca: The instructions were created as text generation tasks. The task is to generate a text equivalent in meaning to a given text.
- sts-ca: The instructions were created as text generation tasks. The task is to generate a text equivalent in meaning to a given text.
- teca: The instructions were created in two different ways, with a 70%-30% distribution. The first is an entailment generation task. The second asks to determine whether one given text is entailed by another.
- WikiCat: The instructions were created as text classification tasks in two different ways, with a 70%-30% distribution. The first asks for the category of a given text. The second asks, in the form of an alternative question, whether a given text belongs to a certain category.
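
The exact conversion templates are not reproduced in this card. Purely as an illustration, the sketch below shows how a text classification record (as in CaWikiTC or WikiCat) could be mapped to the instruction format; the Catalan prompt wording, the helper name and the random 70%/30% choice between the two variants are assumptions, not the actual templates.

```python
import random

# Hypothetical sketch only: the prompt wording and the 70/30 template split
# are illustrative assumptions, not the exact templates used to build InstruCAT.
def classification_to_instruction(record_id, text, category, all_categories, rng=random):
    if rng.random() < 0.7:
        # Variant 1 (~70%): ask for the category of the text.
        instruction = ("Classifica el text següent en una d'aquestes categories: "
                       + ", ".join(all_categories) + ".")
        response = category
    else:
        # Variant 2 (~30%): alternative question about one candidate category.
        candidate = rng.choice(all_categories)
        instruction = f"El text següent pertany a la categoria '{candidate}'?"
        response = "Sí" if candidate == category else "No"
    return {
        "ID": record_id,
        "instruction": instruction,
        "context": text,
        "response": response,
        "category": "text_classification",
    }
```
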
### Supported Tasks and Leaderboards

Instruction-tuning (training) of large language models (LLMs).

### Languages

This dataset is in Catalan (ca-ES).

## Dataset Structure

### Data Instances

The dataset is distributed as three JSONL files, one for each split.

An example from the 'test' split looks as follows:

```
{
  "ID": "Parafraseja_8977",
  "instruction": "Reescriu aquesta frase sense alterar-ne el significat:",
  "context": "Es tracta d'un tipus que ens falla ja que a ell li falla aquesta falta d'interès per tal d'exercir el domini sobre l'ambient.",
  "response": "Es tracta d'un tipus que ens falla perquè a ell li falla aquesta falta d'interès per exercir el domini sobre l'ambient.",
  "category": "paraphrasis"
}
```
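
The records can be read either with the Hugging Face `datasets` library or with plain Python. A minimal sketch, assuming the three JSONL files listed under Data Splits below are available locally (adjust the paths as needed):

```python
import json

from datasets import load_dataset

# Assumes the split files sit in the working directory; adjust paths as needed.
data_files = {
    "train": "train.jsonl",
    "validation": "validation.jsonl",
    "test": "test.jsonl",
}
instrucat = load_dataset("json", data_files=data_files)
print(instrucat)             # split names and sizes
print(instrucat["test"][0])  # one instruction record, as in the example above

# Equivalent plain-Python read of a single split:
with open("test.jsonl", encoding="utf-8") as f:
    test_records = [json.loads(line) for line in f]
print(test_records[0]["instruction"])
```
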
### Category Distribution

| Category                  | Number of instructions | %      |
|---------------------------|------------------------|--------|
| ner                       | 59410                  | 27.39% |
| paraphrasis               | 34695                  | 16.00% |
| text_classification       | 33393                  | 15.40% |
| toxicity                  | 29809                  | 13.74% |
| qa                        | 27427                  | 12.64% |
| phrase_generation         | 11873                  | 5.47%  |
| entailment_generation     | 6354                   | 2.93%  |
| sentiment_analysis        | 5750                   | 2.65%  |
| abstractive_summarization | 2999                   | 1.38%  |
| extreme_summarization     | 2999                   | 1.38%  |
| entailment                | 2117                   | 0.97%  |
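
The distribution above can be recomputed from the split files; a short sketch, assuming the local file names given under Data Splits below:

```python
import json
from collections import Counter

# Recompute the category distribution from the three local JSONL splits
# (file names assumed as in the Data Splits section).
counts = Counter()
for path in ("train.jsonl", "validation.jsonl", "test.jsonl"):
    with open(path, encoding="utf-8") as f:
        counts.update(json.loads(line)["category"] for line in f)

total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category:<26} {n:>6} {100 * n / total:5.2f}%")
```
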
### Data Splits

- train.jsonl: 165,100 instructions
- validation.jsonl: 25,351 instructions
- test.jsonl: 26,375 instructions

## Additional Information

### Dataset Curators

Language Technologies Unit (langtech@bsc.es) at the Barcelona Supercomputing Center (BSC).

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the [project ILENIA](https://proyectoilenia.es/) with reference 2022/TL22/00215337.

### Licensing Information

InstruCAT contains data converted to instruction format from datasets with various licenses. The whole work is therefore licensed under the most restrictive license in the corpus, the [Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es) license.

### Citation Information

[N/A]

### Contributions

[N/A] |