The next section reproduces the README from the original project; below it is the code used to generate the dataset.

## NLI style FEVER Download Link
Link: [NLI style FEVER dataset]().

## What is in the file?
This file contains the NLI style FEVER dataset introduced in the [**Adversarial NLI paper**](https://arxiv.org/abs/1910.14599).
The dataset is used together with [**SNLI**](https://nlp.stanford.edu/projects/snli/) and [**MultiNLI**](https://www.nyu.edu/projects/bowman/multinli/) to train the backend NLI model in [**Adversarial NLI**](https://adversarialnli.com/).

## What is the Original FEVER dataset?
Each data point in the original FEVER dataset is a textual claim paired with a label (support / refute / not enough information), depending on whether the claim can be verified against Wikipedia.
For examples with support and refute labels in the training and dev sets, the ground-truth location of the evidence in Wikipedia is also provided (please refer to [the original paper](https://arxiv.org/abs/1803.05355) for details).

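Schematically, a verifiable FEVER training record looks like the sketch below (the field names match those used by the generation code at the end of this README; the values are made up):

```python
# Illustrative FEVER training record (values are made up)
fever_record = {
    "id": 113501,
    "verifiable": "VERIFIABLE",
    "label": "SUPPORTS",
    "claim": "Some claim about a Wikipedia entity.",
    # each evidence item points at a Wikipedia page and a sentence index
    "evidence": [[[227768, 232992, "Some_Wikipedia_Page", 0]]],
}
```
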
## What is the difference between the original FEVER and this file?
In the original FEVER setting, the input is a claim together with Wikipedia, and the expected output is a label.
However, this differs from the standard NLI formalization, which is essentially a *pair-of-sequence to label* problem.
To help NLI-related research take advantage of the FEVER dataset, we pair the claims in the FEVER dataset with **textual evidence**, turning it into a *pair-of-sequence to label* formatted dataset.

## How is the pairing implemented?
We first applied evidence selection using the method of a previous [SOTA fact-checking system](https://arxiv.org/abs/1811.07039), so that each claim has a collection of potential evidential sentences.
Then, for claims in the FEVER dev and test sets, and for claims with the not-enough-info label in the training set, we directly paired them with the concatenation of all selected evidential sentences.
(Note that for not-enough-info claims in the FEVER training set, no ground-truth evidence locations are provided in the original dataset.)
For claims in the FEVER training set with support and refute labels, where ground-truth evidence locations are provided, we paired them with the ground-truth textual evidence plus some additional evidence randomly sampled from the sentence collection selected by the [SOTA fact-checking system](https://arxiv.org/abs/1811.07039).
Therefore, the same claim may be paired with multiple different contexts.
This helps the final NLI model be robust to noisy upstream evidence.

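As a rough illustration of the pairing step (the function and names here are hypothetical, not taken from the original system):

```python
def pair_claim_with_evidence(claim: str, evidential_sentences: list[str]) -> dict:
    """Hypothetical sketch: pair a claim with the concatenation of its
    selected evidential sentences to form one pair-of-sequence example."""
    context = " ".join(evidential_sentences)
    return {"query": claim, "context": context}


# e.g., a dev/test claim paired with all of its selected evidence
example = pair_claim_with_evidence(
    "Some claim about a Wikipedia entity.",
    ["First selected evidential sentence.", "Second selected evidential sentence."],
)
```
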
## What is the format?
The train/dev/test data are contained in three jsonl files.
The `query` and `context` fields correspond to `premise` and `hypothesis`, and the `SUPPORTS`, `REFUTES`, and `NOT ENOUGH INFO` labels correspond to the `ENTAILMENT`, `CONTRADICTION`, and `NEUTRAL` labels, respectively, in typical NLI settings.
The `cid` field can be mapped back to the original FEVER `id` field. (The labels for both dev and test are hidden, but you can retrieve the labels for dev using the `cid` and the original FEVER data.)
Finally, you can train your NLI model on this data and obtain FEVER verification labels. The label accuracy on dev and test will be comparable to previous fact-checking work, and you can submit your entries to the [FEVER CodaLab Leaderboard](https://competitions.codalab.org/competitions/18814#results) to report test results.

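To make the field and label correspondence concrete, here is a minimal reading sketch (the file name is a placeholder; the real paths appear in the generation script below, and the `label` field is only present where labels are not hidden):

```python
import json

# mapping from FEVER labels to the usual NLI label names
fever_to_nli = {
    "SUPPORTS": "entailment",
    "REFUTES": "contradiction",
    "NOT ENOUGH INFO": "neutral",
}

# "train_fitems.jsonl" is a placeholder path
with open("train_fitems.jsonl", encoding="utf-8") as f:
    for line in f:
        item = json.loads(line)
        premise = item["query"]       # `query` plays the role of the premise
        hypothesis = item["context"]  # `context` plays the role of the hypothesis
        label = fever_to_nli.get(item.get("label", ""), "hidden")
        print(premise[:80], "|", hypothesis[:80], "|", label)
        break
```
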
## Citation
If you use the data in this file, please cite the following paper:
```
@inproceedings{nie2019combining,
    title={Combining Fact Extraction and Verification with Neural Semantic Matching Networks},
    author={Yixin Nie and Haonan Chen and Mohit Bansal},
    booktitle={Association for the Advancement of Artificial Intelligence ({AAAI})},
    year={2019}
}
```

## Code to generate dataset
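The script below reconstructs the dataset end to end: it parses the three jsonl files, merges them with the original FEVER releases on the Hugging Face Hub to recover the gold labels, encodes the labels with the standard NLI mapping, and pushes the result to the Hub.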
```python
import json

import pandas as pd
from datasets import ClassLabel, Dataset, DatasetDict, Features, Value, load_dataset


# download data from https://www.dropbox.com/s/hylbuaovqwo2zav/nli_fever.zip?dl=0
paths = {
    "train": "<some_path>/nli_fever/train_fitems.jsonl",
    "validation": "<some_path>/nli_fever/dev_fitems.jsonl",
    "test": "<some_path>/nli_fever/test_fitems.jsonl",
}


# parsing code from https://github.com/facebookresearch/anli/blob/main/src/utils/common.py
registered_jsonabl_classes = {}


def register_class(cls):
    global registered_jsonabl_classes
    if cls not in registered_jsonabl_classes:
        registered_jsonabl_classes.update({cls.__name__: cls})


def unserialize_JsonableObject(d):
    global registered_jsonabl_classes
    classname = d.pop("_jcls_", None)
    if classname:
        cls = registered_jsonabl_classes[classname]
        obj = cls.__new__(cls)  # make instance without calling __init__
        for key, value in d.items():
            setattr(obj, key, value)
        return obj
    else:
        return d


def load_jsonl(filename, debug_num=None):
    d_list = []
    with open(filename, encoding="utf-8", mode="r") as in_f:
        print("Load Jsonl:", filename)
        for line in in_f:
            item = json.loads(line.strip(), object_hook=unserialize_JsonableObject)
            d_list.append(item)
            if debug_num is not None and 0 < debug_num == len(d_list):
                break

    return d_list


def get_original_fever() -> pd.DataFrame:
    """Get the original FEVER datasets (v1.0 and v2.0) and keep id-label pairs."""
    fever_v1 = load_dataset("fever", "v1.0")
    fever_v2 = load_dataset("fever", "v2.0")

    columns = ["id", "label"]
    splits = ["paper_test", "paper_dev", "labelled_dev", "train"]
    list_dfs = [fever_v1[split].to_pandas()[columns] for split in splits]
    list_dfs.append(fever_v2["validation"].to_pandas()[columns])

    dfs = pd.concat(list_dfs, ignore_index=False)
    dfs = dfs.drop_duplicates()

    dfs = dfs.rename(columns={"label": "fever_gold_label"})
    return dfs


def load_and_process(path: str, fever_df: pd.DataFrame) -> pd.DataFrame:
    """Load a data split and merge it with the original FEVER labels."""
    df = pd.DataFrame(load_jsonl(path))
    df = df.rename(columns={"query": "premise", "context": "hypothesis"})

    # adjust dtype so the merge key matches the FEVER `id` column
    df["cid"] = df["cid"].astype(int)

    # merge with the original FEVER data to get labels
    df = pd.merge(df, fever_df, left_on="cid", right_on="id", how="inner").drop_duplicates()

    return df


def encode_labels(df: pd.DataFrame) -> pd.DataFrame:
    """Encode labels using the mapping used in SNLI and MultiNLI."""
    mapping = {
        "SUPPORTS": 0,  # entailment
        "NOT ENOUGH INFO": 1,  # neutral
        "REFUTES": 2,  # contradiction
    }
    df["label"] = df["fever_gold_label"].map(mapping)

    # encode `verifiable` as a binary flag
    df["verifiable"] = df["verifiable"].map({"NOT VERIFIABLE": 0, "VERIFIABLE": 1})

    return df


if __name__ == "__main__":
    fever_df = get_original_fever()

    dataset_splits = {}

    for split, path in paths.items():
        # from jsonl to dataframe, merged with the original FEVER labels
        df = load_and_process(path, fever_df)

        if not len(df) > 0:
            print(f"Split `{split}` has no matches")
            continue

        if split == "train":
            # sanity check: train labels must match the FEVER gold labels
            assert sum(df["fever_gold_label"] != df["label"]) == 0

        # encode labels using the default mapping used by other NLI datasets,
        # i.e., entailment: 0, neutral: 1, contradiction: 2
        df = df.drop(columns=["label"])
        df = encode_labels(df)

        # cast to a `datasets.Dataset`
        features = Features(
            {
                "cid": Value(dtype="int64", id=None),
                "fid": Value(dtype="string", id=None),
                "id": Value(dtype="int32", id=None),
                "premise": Value(dtype="string", id=None),
                "hypothesis": Value(dtype="string", id=None),
                "verifiable": Value(dtype="int64", id=None),
                "fever_gold_label": Value(dtype="string", id=None),
                "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
            }
        )
        if "test" in path:
            # labels are not available for the test split
            df["label"] = -1
            df["verifiable"] = -1
            df["fever_gold_label"] = "not available"
        dataset = Dataset.from_pandas(df, features=features)
        dataset_splits[split] = dataset

    nli_fever = DatasetDict(dataset_splits)
    nli_fever.push_to_hub("pietrolesci/nli_fever", token="<your token>")
```
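
Once pushed, the dataset can be loaded back from the Hub with `datasets`; a minimal usage example:

```python
from datasets import load_dataset

# load the dataset pushed above
nli_fever = load_dataset("pietrolesci/nli_fever")

# inspect one training example: a premise/hypothesis pair plus the encoded label
example = nli_fever["train"][0]
print(example["premise"])
print(example["hypothesis"])
print(example["label"])  # 0 = entailment, 1 = neutral, 2 = contradiction
```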