pszemraj committed c7be518 (parent: cda6a06): Update README.md
- https://arxiv.org/abs/1909.03242

## Dataset contents

```
DatasetDict({
    train: Dataset({
        features: ['claimID', 'claim', 'label', 'claimURL', 'reason', 'categories', 'speaker', 'checker', 'tags', 'article title', 'publish date', 'climate', 'entities'],
        num_rows: 27871
    })
    test: Dataset({
        features: ['claimID', 'claim', 'label', 'claimURL', 'reason', 'categories', 'speaker', 'checker', 'tags', 'article title', 'publish date', 'climate', 'entities'],
        num_rows: 3487
    })
    validation: Dataset({
        features: ['claimID', 'claim', 'label', 'claimURL', 'reason', 'categories', 'speaker', 'checker', 'tags', 'article title', 'publish date', 'climate', 'entities'],
        num_rows: 3484
    })
})
```
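For orientation, the split sizes above work out to roughly an 80/10/10 train/test/validation split; a quick sanity check in Python (numbers copied from the structure above):

```python
# Split sizes as shown in the DatasetDict structure above.
splits = {"train": 27871, "test": 3487, "validation": 3484}

total = sum(splits.values())
print(total)  # 34842

# Proportions: roughly an 80/10/10 train/test/validation split.
for name, n in splits.items():
    print(f"{name}: {n / total:.1%}")
```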
## Paper Abstract / Citation

> We contribute the largest publicly available dataset of naturally occurring factual claims for the purpose of automatic claim verification. It is collected from 26 fact checking websites in English, paired with textual sources and rich metadata, and labelled for veracity by human expert journalists. We present an in-depth analysis of the dataset, highlighting characteristics and challenges. Further, we present results for automatic veracity prediction, both with established baselines and with a novel method for joint ranking of evidence pages and predicting veracity that outperforms all baselines. Significant performance increases are achieved by encoding evidence, and by modelling metadata. Our best-performing model achieves a Macro F1 of 49.2%, showing that this is a challenging testbed for claim veracity prediction.

```
title = {MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims},
year = 2019
}
```

## Original README
MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims

MultiFC is the largest publicly available dataset of naturally occurring factual claims for automatic claim verification. It is collected from 26 English fact-checking websites, paired with textual sources and rich metadata, and labeled for veracity by human expert journalists.

###### TRAIN and DEV #######
The train and dev files are tab-separated and contain the following metadata fields:
claimID, claim, label, claimURL, reason, categories, speaker, checker, tags, article title, publish date, climate, entities

Fields that could not be crawled were set to "None". Please refer to Table 11 of our paper for summary statistics.

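The tab-separated layout described above can be read with Python's standard `csv` module. A minimal sketch — the column order is taken from the field list above, while the sample line and claimID are made up for illustration:

```python
import csv
import io

# Column order for the train/dev files, as described in the README above.
FIELDS = ["claimID", "claim", "label", "claimURL", "reason", "categories",
          "speaker", "checker", "tags", "article title", "publish date",
          "climate", "entities"]

def parse_rows(fileobj):
    """Yield one dict per claim; un-crawled fields ("None") become Python None."""
    reader = csv.reader(fileobj, delimiter="\t")
    for row in reader:
        record = dict(zip(FIELDS, row))
        yield {k: (None if v == "None" else v) for k, v in record.items()}

# Tiny made-up line standing in for one record of a real train file.
sample = ("abbc-00001\tSome claim text.\ttrue\thttp://example.org\tNone\t"
          + "\t".join(["None"] * 8))
rows = list(parse_rows(io.StringIO(sample)))
print(rows[0]["label"], rows[0]["reason"])  # true None
```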
###### TEST #######
The test file follows the same structure, but the label column has been removed, leaving 12 metadata fields:
claimID, claim, claimURL, reason, categories, speaker, checker, tags, article title, publish date, climate, entities

Fields that could not be crawled were set to "None". Please refer to Table 11 of our paper for summary statistics.

###### Snippets ######
The text of each claim is submitted verbatim as a query to the Google Search API (without quotes). The snippets folder contains the top 10 retrieved snippets for each claim; in some cases fewer are provided, since the claimURL itself is excluded from the results. Each file in the snippets folder is named after the claimID of the claim submitted as a query. Snippets files are tab-separated and contain the following fields:
rank_position, title, snippet, snippet_url

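A per-claim snippets file can be loaded the same way; a sketch assuming the four tab-separated fields listed above (the two inline lines are made-up stand-ins for a real snippets/&lt;claimID&gt; file):

```python
import csv
import io

# Column order for snippets files, as described in the README above.
SNIPPET_FIELDS = ["rank_position", "title", "snippet", "snippet_url"]

def load_snippets(fileobj):
    """Return the snippets sorted by rank; each file holds up to 10 results."""
    reader = csv.reader(fileobj, delimiter="\t")
    snippets = [dict(zip(SNIPPET_FIELDS, row)) for row in reader]
    return sorted(snippets, key=lambda s: int(s["rank_position"]))

# Two made-up lines standing in for one claim's snippets file.
data = ("2\tSecond hit\tSome snippet text\thttp://example.org/b\n"
        "1\tFirst hit\tOther snippet text\thttp://example.org/a\n")
snips = load_snippets(io.StringIO(data))
print([s["title"] for s in snips])  # ['First hit', 'Second hit']
```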
For more information, please refer to our paper:

Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. 2019. MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims. In EMNLP. Association for Computational Linguistics.

https://copenlu.github.io/publication/2019_emnlp_augenstein/