Datasets:
- Modalities: Text
- Formats: parquet
- Sub-tasks: extractive-qa
- Languages: Catalan
- Libraries: Datasets, pandas
gonzalez-agirre committed
Commit 604bcae
1 Parent(s): 80baa6b

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -54,7 +54,7 @@ task_ids:
 
 This dataset can be used to build extractive-QA and Language Models.
 
-Splts have been balanced by kind of question, and unlike other datasets like SQUAD, it only contains, per record, one question and one answer for each context, although the contexts can repeat multiple times.
+Splits have been balanced by kind of question, and unlike other datasets like SQUAD, it only contains, per record, one question and one answer for each context, although the contexts can repeat multiple times.
 
 ### Supported Tasks and Leaderboards
 Extractive-QA, Language Model.
@@ -85,7 +85,7 @@ Catalan (`ca`).
 },
 ```
 ### Data Fields
-Follows [Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250) for squad v1 datasets.
+Follows [Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250) for SQUAD v1 datasets.
 - `id` (str): Unique ID assigned to the question.
 - `title` (str): Title of the Wikipedia article.
 - `context` (str): Wikipedia section text.
@@ -99,7 +99,7 @@ Follows [Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250) for sq
 - test.json: 2135 question/answer pairs
 ## Dataset Creation
 ### Methodology
-Aggregation anb balancing from ViquiQUAD and VilaQUAD datasets
+Aggregation and balancing from ViquiQUAD and VilaQUAD datasets.
 ### Curation Rationale
 For compatibility with similar datasets in other languages, we followed as close as possible existing curation guidelines.
 ### Source Data
@@ -110,11 +110,11 @@ For compatibility with similar datasets in other languages, we followed as close
 [More Information Needed]
 ### Annotations
 #### Annotation process
-We comissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQUAD 1.0 ([Rajpurkar, Pranav et al. “SQuAD: 100, 000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250)).
+We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQUAD 1.0 ([Rajpurkar, Pranav et al. “SQuAD: 100, 000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250)).
 #### Who are the annotators?
-Annotation was commissioned to an specialized company that hired a team of native language speakers.
+Annotation was commissioned to a specialized company that hired a team of native language speakers.
 ### Personal and Sensitive Information
-No personal or sensitive information included.
+No personal or sensitive information is included.
 ## Considerations for Using the Data
 ### Social Impact of Dataset
 [More Information Needed]
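The card edited above states that, unlike SQuAD, each record holds exactly one question/answer pair, while the same context may repeat across records, and that fields follow the SQuAD v1 layout (`id`, `title`, `context`, plus question/answer fields). A minimal sketch of that record shape using pandas, with entirely invented example values (none of the strings below come from the dataset):

```python
# Sketch of the record structure described in the dataset card.
# All sample values are hypothetical; field names follow the SQuAD-v1
# layout the card references (id / title / context / question / answers).
import pandas as pd

records = [
    {
        "id": "q1",
        "title": "Barcelona",
        "context": "Barcelona és una ciutat...",
        "question": "Què és Barcelona?",
        # SQuAD-style answers: parallel lists of answer text and char offsets.
        "answers": {"text": ["una ciutat"], "answer_start": [13]},
    },
    {
        "id": "q2",
        "title": "Barcelona",
        # Same context repeated in a second record, as the card allows.
        "context": "Barcelona és una ciutat...",
        "question": "Amb quina paraula comença el text?",
        "answers": {"text": ["Barcelona"], "answer_start": [0]},
    },
]

df = pd.DataFrame(records)

# One question/answer pair per record: ids are unique.
assert df["id"].is_unique

# ...but the same context can back several records.
context_counts = df.groupby("context").size().to_dict()

# Answer offsets should point at the answer text inside the context.
for r in records:
    start = r["answers"]["answer_start"][0]
    text = r["answers"]["text"][0]
    assert r["context"][start : start + len(text)] == text
```

The offset check at the end is the usual sanity test for SQuAD-style data: `answer_start` must index the exact answer span inside `context`, which is what makes the format usable for extractive QA.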