Tasks: Question Answering (extractive-qa) · Modalities: Text · Formats: parquet · Languages: Catalan · Size: 10K - 100K
asier-gutierrez committed
Commit 84a139a · Parent(s): 648166d
dataset
Browse files
- .gitattributes +1 -0
- README.md +142 -0
- catalanqa.py +110 -0
- dev.json +3 -0
- test.json +3 -0
- train.json +3 -0
.gitattributes
CHANGED
@@ -35,3 +35,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.mp3 filter=lfs diff=lfs merge=lfs -text
 *.ogg filter=lfs diff=lfs merge=lfs -text
 *.wav filter=lfs diff=lfs merge=lfs -text
+*.json filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,142 @@
---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- catalan
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: catalanQA
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description

- **Homepage:** https://github.com/projecte-aina
- **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)

### Dataset Summary

CatalanQA is an aggregation and balancing of two previous datasets, VilaQUAD and ViquiQUAD.

This dataset can be used to build extractive-QA systems and language models.

Splits have been balanced by question type. Unlike datasets such as SQuAD, each record contains exactly one question and one answer per context, although a given context can appear in multiple records.
### Supported Tasks and Leaderboards

Extractive QA, language modeling.

### Languages

Catalan (`ca`).

## Dataset Structure

### Data Instances
```
{
  "title": "Els 521 policies espanyols amb més mala nota a les oposicions seran enviats a Catalunya",
  "paragraphs": [
    {
      "context": "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 policies espanyols que han obtingut més mala nota a les oposicions. Segons que explica El País, hi havia mig miler de places vacants que s'havien de cobrir, però els agents amb més bones puntuacions han elegit destinacions diferents. En total van aprovar les oposicions 2.600 aspirants. D'aquests, en seran destinats al Principat 521 dels 560 amb més mala nota. Per l'altra banda, entre els 500 agents amb més bona nota, només 8 han triat Catalunya. Fonts de la policia espanyola que esmenta el diari ho atribueixen al procés d'independència, al Primer d'Octubre i a la 'situació social' que se'n deriva.",
      "qas": [
        {
          "question": "Quants policies enviaran a Catalunya?",
          "id": "0.5961700408283691",
          "answers": [
            {
              "text": "521",
              "answer_start": 57
            }
          ]
        }
      ]
    }
  ]
}
```
### Data Fields

Follows [Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.

- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the source article.
- `context` (str): Text of the source passage (a Wikipedia section or a VilaWeb article).
- `question` (str): Question.
- `answers` (list): List of answers to the question, each containing:
  - `text` (str): Span of text that answers the question.
  - `answer_start` (int): Character offset of the answer span within `context`.
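As an illustrative check (not part of the dataset card proper), `answer_start` is a character index into `context`: slicing the context at that offset recovers the answer text. The snippet below uses an abbreviated form of the sample record shown above.

```python
# Minimal sketch: verify that `answer_start` is a character offset into
# `context`, using (a shortened version of) the sample record above.
record = {
    "context": "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 "
               "policies espanyols que han obtingut més mala nota a les oposicions.",
    "question": "Quants policies enviaran a Catalunya?",
    "answers": [{"text": "521", "answer_start": 57}],
}

for answer in record["answers"]:
    start = answer["answer_start"]
    end = start + len(answer["text"])
    # The span recovered from the context must match the stored answer text.
    assert record["context"][start:end] == answer["text"]
```

This invariant is what makes the dataset usable for extractive QA, where models predict start/end positions rather than free-form text.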
### Data Splits

- train.json: 17135 question/answer pairs
- dev.json: 2157 question/answer pairs
- test.json: 2135 question/answer pairs
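As a quick sanity check (illustrative only), the three splits together contain just over 21K pairs, which is consistent with the 10K - 100K size bucket shown for this dataset:

```python
# Split sizes as declared in this dataset card.
splits = {"train": 17135, "dev": 2157, "test": 2135}
total = sum(splits.values())
print(total)  # 21427
```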
## Dataset Creation

### Methodology

Aggregation and balancing of the ViquiQUAD and VilaQUAD datasets.

### Curation Rationale

For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.

### Source Data

- https://www.vilaweb.cat and https://ca.wikipedia.org

#### Initial Data Collection and Normalization

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. "SQuAD: 100,000+ Questions for Machine Comprehension of Text." EMNLP (2016)](http://arxiv.org/abs/1606.05250)).

#### Who are the annotators?

Annotation was commissioned to a specialized company that hired a team of native language speakers.

### Personal and Sensitive Information

No personal or sensitive information included.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Carlos Rodríguez-Penagos (carlos.rodriguez1@bsc.es) and Carme Armentano-Oller (carme.armentano@bsc.es)

### Licensing Information

This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.

### Citation Information

```
```

[DOI]()

### Funding

This work was funded by the [Catalan Ministry of the Vice-presidency, Digital Policies and Territory](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of the [Aina project](https://politiquesdigitals.gencat.cat/ca/tic/aina-el-projecte-per-garantir-el-catala-en-lera-digital/).
catalanqa.py
ADDED
@@ -0,0 +1,110 @@
"""CatalanQA Dataset."""
# Loading script for the CatalanQA dataset.
import json

import datasets

logger = datasets.logging.get_logger(__name__)

_CITATION = """\
None
"""

_DESCRIPTION = """\
CatalanQA: an extractive QA dataset from original Catalan sources: Wikipedia and VilaWeb newswire.

It is an aggregation and balancing of 2 previous datasets: VilaQUAD and ViquiQUAD.

This dataset can be used to build extractive-QA systems and language models.

Splits have been balanced by question type. Unlike other datasets such as SQuAD, each record contains one question and one answer per context, although contexts can repeat multiple times.

- train.json contains 17135 question/answer pairs

- dev.json contains 2157 question/answer pairs

- test.json contains 2135 question/answer pairs

Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA),
and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL).
"""

_HOMEPAGE = ""

_URL = "https://huggingface.co/datasets/projecte-aina/catalanqa/resolve/main/"
_TRAINING_FILE = "train.json"
_DEV_FILE = "dev.json"
_TEST_FILE = "test.json"


class CatalanQA(datasets.GeneratorBasedBuilder):
    """CatalanQA Dataset."""

    VERSION = datasets.Version("1.0.1")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "title": datasets.Value("string"),
                    "context": datasets.Value("string"),
                    "question": datasets.Value("string"),
                    "answers": [
                        {
                            "text": datasets.Value("string"),
                            "answer_start": datasets.Value("int32"),
                        }
                    ],
                }
            ),
            # No default supervised_keys (as we have to pass both question
            # and context as input).
            supervised_keys=None,
            homepage=_HOMEPAGE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        urls_to_download = {
            "train": f"{_URL}{_TRAINING_FILE}",
            "dev": f"{_URL}{_DEV_FILE}",
            "test": f"{_URL}{_TEST_FILE}",
        }
        downloaded_files = dl_manager.download(urls_to_download)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
        ]

    def _generate_examples(self, filepath):
        """This function returns the examples in the raw (text) form."""
        logger.info("generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            catalanqa = json.load(f)
            for article in catalanqa["data"]:
                title = article.get("title", "").strip()
                for paragraph in article["paragraphs"]:
                    context = paragraph["context"].strip()
                    for qa in paragraph["qas"]:
                        question = qa["question"].strip()
                        id_ = qa["id"]
                        # Only the first answer is kept; records in this
                        # dataset carry a single answer per question.
                        text = qa["answers"][0]["text"]
                        answer_start = qa["answers"][0]["answer_start"]

                        yield id_, {
                            "title": title,
                            "context": context,
                            "question": question,
                            "id": id_,
                            "answers": [{"text": text, "answer_start": answer_start}],
                        }
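As a standalone sketch (not part of the loading script itself), the flattening that `_generate_examples` performs can be demonstrated on an in-memory dict: nested SQuAD-style data becomes one flat record per question. The sample data below is hypothetical.

```python
def flatten(squad_data):
    """Yield one flat record per question from nested SQuAD-style data."""
    for article in squad_data["data"]:
        title = article.get("title", "").strip()
        for paragraph in article["paragraphs"]:
            context = paragraph["context"].strip()
            for qa in paragraph["qas"]:
                answer = qa["answers"][0]  # one answer per question here
                yield {
                    "id": qa["id"],
                    "title": title,
                    "context": context,
                    "question": qa["question"].strip(),
                    "answers": [{"text": answer["text"],
                                 "answer_start": answer["answer_start"]}],
                }

# Hypothetical miniature input in the same nested shape as train.json.
sample = {
    "data": [
        {
            "title": "Exemple",
            "paragraphs": [
                {
                    "context": "els 521 policies",
                    "qas": [
                        {"question": "Quants?", "id": "q1",
                         "answers": [{"text": "521", "answer_start": 4}]}
                    ],
                }
            ],
        }
    ]
}

records = list(flatten(sample))
print(len(records))  # 1
```

Each yielded record matches the flat schema declared in `_info` (`id`, `title`, `context`, `question`, `answers`).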
dev.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c7c6801fb921e889b0a0b2aa1181659598f2ed074d4fcd4684ba61e557e3420f
size 2951660
test.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:733e7247aacda187da67cc1506311aaa8bc3c30401a13ae48df61b1e27af1e57
size 2899131
train.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:baecf1ca47cc55f8d17c6e1316016b20f0ed6768cad8e3d7a28d4ac0d7f52d87
size 23437913