pietrolesci committed
Commit db84220
1 Parent(s): 756abcb

Update README.md

Files changed (1)
  1. README.md +23 -37
README.md CHANGED
@@ -1,50 +1,36 @@
- The next section contains the README from the original project; below it is the code used to generate the dataset.

- ## NLI style FEVER Download Link
- Link: [NLI style FEVER dataset]().

- ## What is in the file?
- This file contains the NLI style FEVER dataset introduced in the [**Adversarial NLI paper**](https://arxiv.org/abs/1910.14599).
- The dataset is used together with [**SNLI**](https://nlp.stanford.edu/projects/snli/) and [**MultiNLI**](https://www.nyu.edu/projects/bowman/multinli/) to train the backend NLI model behind [**Adversarial NLI**](https://adversarialnli.com/).

- ## What is the Original FEVER dataset?
- Each data point in the original FEVER dataset is a textual claim paired with a label (support / refute / not enough information), depending on whether the claim can be verified against Wikipedia.
- For examples with support and refute labels in the training and dev sets, the ground-truth location of the evidence in Wikipedia is also provided (please refer to [the original paper](https://arxiv.org/abs/1803.05355) for details).

- ## What is the difference between the original FEVER and this file?
- In the original FEVER setting, the input is a claim together with Wikipedia, and the expected output is a label.
- However, this differs from the standard NLI formalization, which is essentially a *pair-of-sequence to label* problem.
- To let NLI-related research take advantage of the FEVER dataset, we pair the claims in the FEVER dataset with **textual evidence** and make it a *pair-of-sequence to label* formatted dataset.

- ## How is the pairing implemented?
- We first applied evidence selection using the method from a previous [SOTA fact-checking system](https://arxiv.org/abs/1811.07039), so that each claim has a collection of potential evidential sentences.
- Then, for claims in the FEVER dev and test sets, and for claims with the not-enough-info label in the training set, we directly paired them with the concatenation of all selected evidential sentences.
- (Note that for not-enough-info claims in the FEVER training set, no ground-truth evidence locations are provided in the original dataset.)
- For claims in the FEVER training set with support and refute labels, where ground-truth evidence locations are provided, we paired them with the ground-truth textual evidence plus some other randomly sampled evidence from the sentence collection selected by the [SOTA fact-checking system](https://arxiv.org/abs/1811.07039).
- Therefore, the same claim may be paired with multiple different contexts.
- This can help the final NLI model adapt to noisy upstream evidence.

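The snippet below is a minimal sketch of the pairing procedure described above, not the authors' actual code; the field names, the split convention, and the number of extra sampled sentences are assumptions.

```python
import random

def build_context(example, retrieved_sentences, max_extra=5):
    """Hypothetical illustration of the pairing described above (not the authors' code).

    `example` is assumed to carry the claim's split, label and, for supported/refuted
    training claims, its ground-truth evidence sentences; `retrieved_sentences` are the
    sentences selected for that claim by the upstream fact-checking system.
    """
    if example["split"] != "train" or example["label"] == "NOT ENOUGH INFO":
        # Dev/test claims and not-enough-info training claims are paired with the
        # concatenation of all selected evidential sentences.
        sentences = retrieved_sentences
    else:
        # Supported/refuted training claims get the ground-truth evidence plus some
        # other randomly sampled sentences from the retrieved collection.
        pool = [s for s in retrieved_sentences if s not in example["evidence"]]
        extra = random.sample(pool, k=min(max_extra, len(pool)))
        sentences = example["evidence"] + extra
    return " ".join(sentences)
```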
- ## What is the format?
- The train/dev/test data are contained in three jsonl files.
- The `query` and `context` fields correspond to `premise` and `hypothesis`, and the `SUPPORT`, `REFUTE`, and `NOT ENOUGH INFO` labels correspond to the `ENTAILMENT`, `CONTRADICT`, and `NEUTRAL` labels, respectively, in typical NLI settings.
- The `cid` field can be mapped back to the original FEVER `id` field. (The labels for both dev and test are hidden, but you can retrieve the dev labels using the `cid` and the original FEVER data.)
- Finally, you can train your NLI model on this data and obtain FEVER verification labels. The label accuracy on dev and test will be comparable to previous fact-checking work, and you can submit your entries to the [FEVER CodaLab Leaderboard](https://competitions.codalab.org/competitions/18814#results) to report test results.

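As a rough illustration of this format (a sketch, not part of the original release), the helper below reads one of the jsonl files and remaps the labels to the usual NLI names; the exact file names inside the archive are an assumption.

```python
import json

# Illustrative remapping of the labels described above to typical NLI names.
NLI_LABEL = {"SUPPORT": "ENTAILMENT", "REFUTE": "CONTRADICT", "NOT ENOUGH INFO": "NEUTRAL"}

def load_split(path):
    """Read one jsonl split into a list of records with NLI-style labels."""
    records = []
    with open(path) as f:
        for line in f:
            item = json.loads(line)
            records.append({
                "query": item["query"],
                "context": item["context"],
                "label": NLI_LABEL.get(item["label"], item["label"]),
                "cid": item["cid"],  # maps back to the original FEVER `id`
            })
    return records

# Hypothetical usage; the actual file names in the archive may differ.
# train = load_split("train_fitems.jsonl")
```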
- ## Citation
- If you used the data in this file, please cite the following paper:
- ```
- @inproceedings{nie2019combining,
-   title={Combining Fact Extraction and Verification with Neural Semantic Matching Networks},
-   author={Yixin Nie and Haonan Chen and Mohit Bansal},
-   booktitle={Association for the Advancement of Artificial Intelligence ({AAAI})},
-   year={2019}
  }
  ```

- ## Code to generate dataset
  ```python
  import pandas as pd
  from datasets import Dataset, ClassLabel, load_dataset, Value, Features, DatasetDict
 
+ ## Overview
+ The original dataset can be found [here](https://www.dropbox.com/s/hylbuaovqwo2zav/nli_fever.zip?dl=0),
+ while the GitHub repo is [here](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md).

+ This dataset was proposed in [Combining fact extraction and verification with neural semantic matching networks](https://dl.acm.org/doi/abs/10.1609/aaai.v33i01.33016859)
+ and was created as a modification of FEVER.

+ In the original FEVER setting, the input is a claim and Wikipedia, and the expected output is a label.
+ However, this is different from the standard NLI formalization, which is a *pair-of-sequence to label* problem.
+ To let NLI-related research take advantage of the FEVER dataset, the authors pair the claims in the FEVER dataset
+ with the textual evidence, making it a *pair-of-sequence to label* formatted dataset.

+ ## Dataset curation
+ The label mapping follows the paper and is the following:

+ ```python
+ mapping = {
+     "SUPPORTS": 0,  # entailment
+     "NOT ENOUGH INFO": 1,  # neutral
+     "REFUTES": 2,  # contradiction
  }
  ```
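For illustration only (not necessarily how the dataset was actually built), the mapping above could be applied to a pandas label column as in the sketch below; the column name and toy data are assumptions.

```python
import pandas as pd

# Same mapping as above, repeated so the snippet is self-contained.
mapping = {"SUPPORTS": 0, "NOT ENOUGH INFO": 1, "REFUTES": 2}

df = pd.DataFrame({"label": ["SUPPORTS", "NOT ENOUGH INFO", "REFUTES"]})  # toy data
df["label"] = df["label"].map(mapping)
print(df["label"].tolist())  # [0, 1, 2]
```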
+ Also, the "verifiable" column has been encoded as follows:

+ ```python
+ mapping = {"NOT VERIFIABLE": 0, "VERIFIABLE": 1}
+ ```

+ Finally, a consistency check with the labels reported in the original FEVER dataset is performed.
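A minimal sketch of what such a check could look like (toy data and hypothetical variable names, not the author's actual code):

```python
# Toy illustration of comparing this dataset's labels against the original FEVER labels
# by joining on the claim id (`cid` here, `id` in the original FEVER files).
original_fever = [{"id": 1, "label": "SUPPORTS"}, {"id": 2, "label": "REFUTES"}]
nli_fever = [
    {"cid": 1, "query": "...", "context": "...", "label": "SUPPORTS"},
    {"cid": 2, "query": "...", "context": "...", "label": "REFUTES"},
]

original = {row["id"]: row["label"] for row in original_fever}
mismatched = [row for row in nli_fever if original.get(row["cid"], row["label"]) != row["label"]]
assert not mismatched, f"{len(mismatched)} rows disagree with the original FEVER labels"
```
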
+ ## Code to generate the dataset
  ```python
  import pandas as pd
  from datasets import Dataset, ClassLabel, load_dataset, Value, Features, DatasetDict