ruanchaves committed
Commit 386e561
1 Parent(s): 9ff8ae6

Update README.md

Files changed (1)
  1. README.md +7 -14
README.md CHANGED

@@ -56,7 +56,7 @@ FaQuAD-NLI is a modified version of the [FaQuAD dataset](https://huggingface.co/
 ### Supported Tasks and Leaderboards
 
 - `question_answering`: The dataset can be used to train a model for question-answering tasks in the domain of Brazilian higher education institutions.
-- `textual_entailment`: FaQuAD-NLI can be used to train a model for textual entailment tasks, where question and answer sentence pairs are classified as either suitable or unsuitable as an answer.
+- `textual_entailment`: FaQuAD-NLI can be used to train a model for textual entailment tasks, where question and answer sentence pairs are classified as either suitable or unsuitable.
 
 ### Languages
 
@@ -75,20 +75,13 @@ This dataset is in Brazilian Portuguese.
 
 ### Data Splits
 
-The dataset is split into three subsets: train, validation, and test. The splits were made carefully to avoid question and answer pairs belonging to the same document appearing in more than one split.
+The dataset is split into three subsets: train, validation, and test.
+The splits were made carefully to avoid question and answer pairs belonging to the same document appearing in more than one split.
 
-- Train: 3128 instances
-- Validation: 731 instances
-- Test: 650 instances
-
-### Licensing Information
-
-[More Information Needed]
-
-### Citation Information
-
-[More Information Needed]
+| | Train | Validation | Test |
+|------------|-------|------------|------|
+| Instances | 3128 | 731 | 650 |
 
 ### Contributions
 
-[More Information Needed]
+Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset.
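As a quick sanity check on the split table added by this commit, the stated counts sum to 4509 instances and imply roughly a 69/16/14 train/validation/test split (a minimal sketch; the counts are taken from the diff above, the variable names are illustrative):

```python
# Split sizes as stated in the updated README table
splits = {"train": 3128, "validation": 731, "test": 650}

total = sum(splits.values())
fractions = {name: round(count / total, 3) for name, count in splits.items()}

print(total)      # total number of instances across all splits
print(fractions)  # share of each split, rounded to three decimals
```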