|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:32:03.111738Z" |
|
}, |
|
"title": "DANFEVER: claim verification dataset for Danish", |
|
"authors": [ |
|
{ |
|
"first": "Jeppe", |
|
"middle": [], |
|
"last": "N\u00f8rregaard", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IT University of Copenhagen", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IT University of Copenhagen", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Automatic detection of false claims is a difficult task. Existing data to support this task has largely been limited to English. We present a dataset, DANFEVER, intended for claim verification in Danish. The dataset builds upon the task framing of the FEVER fact extraction and verification challenge. DANFEVER can be used for creating models for detecting mis-& disinformation in Danish as well as for verification in multilingual settings.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Automatic detection of false claims is a difficult task. Existing data to support this task has largely been limited to English. We present a dataset, DANFEVER, intended for claim verification in Danish. The dataset builds upon the task framing of the FEVER fact extraction and verification challenge. DANFEVER can be used for creating models for detecting mis-& disinformation in Danish as well as for verification in multilingual settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The internet is rife with false and misleading information. Detection of misinformation and fact checking therefore presents a considerable task, spread over many languages (Derczynski et al., 2015; Wardle and Derakhshan, 2017; Zubiaga et al., 2018) . One approach to this task is to break down information content into verifiable claims, which can subsequently be fact-checked by automated systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 198, |
|
"text": "(Derczynski et al., 2015;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 199, |
|
"end": 227, |
|
"text": "Wardle and Derakhshan, 2017;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 249, |
|
"text": "Zubiaga et al., 2018)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Automated fact checking can be framed as a machine learning task, where a model is trained to verify a claim. Applying machine learning requires training and validation data that is representative of the task and is annotated for the desired behaviour. A model should then attempt to generalise over the labeled data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One dataset supporting automatic verification is the Fact Extraction and VERification dataset (FEVER) in English (Thorne et al., 2018a) , which supports the FEVER task (Thorne et al., 2018b; . The dataset is aimed both at claim detection and verification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 135, |
|
"text": "(Thorne et al., 2018a)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 190, |
|
"text": "(Thorne et al., 2018b;", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While the misinformation problem spans both geography and language, much work in the field has focused on English. There have been suggestions on strategies for alleviating the misinformation problem (Hellman and Wagnsson, 2017) . It is however evident that multilingual models are essential if automation is to assist in multilingual regions like Europe. A possible approach for multilingual verification is to use translation systems for existing methods (Dementieva and Panchenko, 2020) , but relevant datasets in more languages are necessary for testing multilingual models' performance within each language, and ideally also for training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 228, |
|
"text": "(Hellman and Wagnsson, 2017)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 457, |
|
"end": 489, |
|
"text": "(Dementieva and Panchenko, 2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper presents DANFEVER, a dataset and baseline for the FEVER task in Danish, a language with shortage of resources (Kirkedal et al., 2019) . While DANFEVER enables improved automatic verification for Danish, an important task , it is also, to our knowledge, the first non-English dataset on the FEVER task, and so paves the way for multilingual fact verification systems. DANFEVER is openly available at https: //figshare.com/articles/dataset/ DanFEVER_claim_verification_ dataset_for_Danish/14380970", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 144, |
|
"text": "(Kirkedal et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Fact Extraction and VERification dataset and task (FEVER) is aimed at automatic claim verification in English (Thorne et al., 2018a) . When comparing we will stylize the original FEVER dataset ENFEVER to avoid confusion. The dataset was created by first sampling sentences from approximately 50,000 popular English Wikipedia pages. Human annotators were asked to generate sets of claims based on these sentences. Claims focus on the same entity as the sentence, but may not be contradictory to or not verifiable by the sentence. A second round of annotators labelled these claims, producing the labels seen in Table 1 , using the following guidelines:", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 136, |
|
"text": "(Thorne et al., 2018a)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 614, |
|
"end": 621, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "English FEVER", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\"If I was given only the selected sentences, do I have strong reason to believe the claim is true (Supported) or stronger reason to believe the claim is false (Refuted).\" \"The label NotEnoughInfo label was used if the claim could not be supported or refuted by any amount of information in Wikipedia.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "English FEVER", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The ENFEVER guidelines state that claims labelled NotEnoughInfo could possibly be verified using other publicly available information, which was not considered in the annotation. In the FEVER task (Thorne et al., 2018b) , automatic verification is commonly framed as a two-step process: given a claim, relevant evidence must first be collected, and secondly be assessed as supporting or refuting the claim, or not providing enough information. ENFEVER contains data for training models for both steps.", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 219, |
|
"text": "(Thorne et al., 2018b)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "English FEVER", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We tasked annotators to create claims for DAN-FEVER based on the same guidelines and without regulation of class-distribution. The classdistribution of DANFEVER is therefore a bit different that that of ENFEVER; there is about the same ratio of Supported claims, but more Refuted and less NotEnoughInfo claims in DANFEVER that in ENFEVER.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "English FEVER", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A FEVER task instance consists of a claim, zero or more pieces of evidence, and a label. The labels take one of the following values:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Supported Claims that can be supported by evidence from the textual data", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Refuted Claims that can be refuted by evidence from the textual data", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "NotEnoughInfo Claims that can neither be supported or refuted based on the textual data", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The claims were created based on data from Danish Wikipedia and Den Store Danske (a privately-developed, non-profit, online encyclopedia based in Denmark and financed through foundations and universities). Both sites are generally considered high quality and trustworthy. Along with the claims, DANFEVER supplies the Wikipedia dump used for creating the claims as well as the content of the articles used from Den Store Danske. The remaining articles from Den Store Danske are not included (due to rights), and all articles should be considered to be iid.for modelling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The format of the dataset can be found in Appendix A.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "DANFEVER can be used for research and implementation of multi-lingual claim-detection. The dataset can be used for bench-marking models on a small language, as well as for fine-tuning when applying such models on Danish data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Goal", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The following is a data-statement as defined by Bender and Friedman (2018) . The dataset consists of a text corpus and a set of annotated claims. The annotated part contains 6407 claims, with labels and information about what articles can be used to verify them.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 74, |
|
"text": "Bender and Friedman (2018)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Statement", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Curation Rationale A dump of the Danish Wikipedia of 13 February 2020 was stored as well as the relevant articles from Den Store Danske (subset of site to adhere to rights). Two teams of two people independently sampled evidence, and created and annotated claims from these two sites (more detail in section 3.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Statement", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Speaker Demographic Den Store Danske is written by professionals and is funded by various foundations for creating free information for the Danish public. Wikipedia is crowd-sourced and its writers are therefore difficult to specify, although the content is generally considered to be of high quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Statement", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Annotator Demographic The annotators are native Danish speakers and masters students of IT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Statement", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The data is formal, written texts created with the purpose of informing a broad crowd of Danish speakers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Speech Situation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The language of the texts is fairly formal Danish from encyclopedias. It is considered to be consistent. Any deviation from Danish language is largely due to topics on history from non-Danish regions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Variety and Text Characteristics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The main text corpus was created by storing the Danish Wikipedia dump of the time as well as a subset of pages from Den Store Danske, selected from the annotation process. Two strategies were employed for gathering specific texts for claims. A selection of pages with well-known topics were selected from Wikipedia's starred articles and Den Store Danske (similar to the \"popular articles\" selection in ENFEVER). Furthermore a random selection of Wikipedia entities with abstracts were Table 4 : Claims and evidence extracts in dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 486, |
|
"end": 493, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sampling and Annotation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "selected to ensure broad spectrum of topics. Random substrings were selected and passed to annotators, who created claims based on each substring, as in ENFEVER. The claims focus on the same entity as the substring's source document and may be supported by the text in the substring, but may also be refuted or unverifiable by the substring. It is up to the annotator to decide on what type of claim to aim for (although the final label of each claim is provided by the next annotator). The set of claims were subsequently revisited by another annotator, who labelled the claim as Supported, Refuted or NotEnoughInfo, based on the original substring used to generate the claim. The majority of the claims (80%) are generated based on Wikipedia pages, while 20% were based on articles from Den Store Danske. Note that claims are independent of the source and could be verified using any text; while the FEVER format presents a list of articles where evidence is present, this list is not exhaustive, just as in the TREC and TAC challenges. The two annotating teams reported Fleiss \u03ba-scores of 0.75 and 0.82 measured on a reduced subset. The remaining data was annotated by a single annotator.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling and Annotation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "DANFEVER consists of 6407 claims. We have included one example from each class in Tables 2a, 2b and 2c, and shown the label distribution in Table 3 . Table 4 summarizes the lengths of claims and evidence extracts, as well as the number of entities linked to the claims. Table 5 : Most frequent entities and number of occurrences.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 96, |
|
"text": "Tables 2a, 2b", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 148, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 158, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 271, |
|
"end": 278, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset Details & Analysis", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The entities mentioned frequently in a corpus can give insight into popular themes in the data. In this case, the topic of the claims is particularly relevant. We present an automatic survey of DAN-FEVER's entities. Entities in claims were identified using the DaNLP NER tool (Hvingelby et al., 2020) , which identifies location (LOC), person (PER), and organization (ORG) entities. Those most frequently named are shown in Table 5 . 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 276, |
|
"end": 300, |
|
"text": "(Hvingelby et al., 2020)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 424, |
|
"end": 431, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Named Entities in Claims", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The FEVER task consists of verifying claims based on a text corpus. One common strategy is to split the task into three components (as in the original work (Thorne et al., 2018a)) 1. Document Retrieval: Retrieve a useful subset of documents from the corpora, based on the claim.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 179, |
|
"text": "(Thorne et al., 2018a))", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline: Recognizing Textual Entailment", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "2. Sentence Retrieval: Retrieve a useful subset of sentences from those documents, based on the claim.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline: Recognizing Textual Entailment", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "3. Recognize Textual Entailment: Classify the claims as Supported, Refuted or NotEnoughInfo, based on the claim and the subset of sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline: Recognizing Textual Entailment", |
|
"sec_num": "5" |
|
}, |
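The pipeline above can be made concrete with a schematic sketch (this is not the paper's implementation): steps 1 and 2 are approximated with off-the-shelf TF-IDF retrieval from scikit-learn, and step 3 is left as a placeholder classify_entailment function, so every helper name and parameter here is an illustrative assumption.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve(query, texts, k):
    # Rank texts against the query with TF-IDF cosine similarity (illustrative choice).
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(texts)
    scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    return [texts[i] for i in scores.argsort()[::-1][:k]]

def verify(claim, corpus, classify_entailment):
    documents = retrieve(claim, corpus, k=5)                       # 1. document retrieval
    sentences = [s for doc in documents for s in doc.split(". ")]
    evidence = retrieve(claim, sentences, k=5)                     # 2. sentence retrieval
    return classify_entailment(claim, evidence)                    # 3. Supported / Refuted / NotEnoughInfo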
|
{ |
|
"text": "To provide baseline performance for future research to benchmark against, we trained a baseline model on the final task; recognizing textual entailment. Since there are no evidence extracts for the NotVerifiable samples, we apply the random-sampling method from the original EN-FEVER paper, where evidence is randomly assigned from the data to each of these samples. We trained classifiers on the resulting 3-class problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline: Recognizing Textual Entailment", |
|
"sec_num": "5" |
|
}, |
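A minimal sketch of this random-evidence assignment, assuming the claims have been loaded into a list of dicts; the key names claim, label and evidence_extract are illustrative and not the dataset's documented schema.

import random

def assign_random_evidence(claims, seed=0):
    rng = random.Random(seed)
    # Pool of genuine evidence extracts taken from the verifiable claims.
    pool = [c["evidence_extract"] for c in claims
            if c["label"] in ("Supported", "Refuted") and c["evidence_extract"]]
    pairs = []
    for c in claims:
        evidence = c["evidence_extract"]
        if c["label"] == "NotEnoughInfo":
            # NotVerifiable claims get a randomly drawn extract, as in ENFEVER.
            evidence = rng.choice(pool)
        pairs.append((c["claim"], evidence, c["label"]))
    return pairs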
|
{ |
|
"text": "1 Interestingly the most mentioned location is Finland The transformer based model BERT (Devlin et al., 2019) has shown promising performance for claim verification (Soleimani et al., 2020) , and the team (DOMLIN) with highest FEVER-score in the FEVER2.0 competition used a BERTbased system . Using the transformers repository from HuggingFace (Wolf et al., 2020) we test; mBERT (Feng et al., 2020) (tag: bert-base-multilingual-cased), XLM-RoBERTa Small and XLM-RoBERTa Large (Conneau et al., 2020; Liu et al., 2019) (tags: xlm-roberta-base and xlm-roberta-large), and the Danish NordicBERT (BotXO, 2019). We use BERT's sentence-pair representation for claims and evidence extracts. The classification embedding is then passed to a single-hidden-layer, fullyconnected neural network for prediction. We first train the prediction layer, while freezing the weights of the language model, and consecutively fine-tune them both. We do this in a 10-fold cross-validation scheme for the 4 models. Table 6 shows weighted-mean F1-scores, training parameters and info about the models. XLM-RoBERTa Large performed best, followed by mBERT and then XLM-RoBERTa Small. NordicBERT performed surprisingly poor. The learning curve of NordicBERT flattened out quickly and nothing further was learned despite the high learning rate used. NordicBERT was trained for Masked-Language-Modelling, but we are unsure whether it was also trained for Next-Sentence-Prediction like BERT (or even Causal-Language-Modelling like RoBERTa). If not, this may explain the poor performance on this task, even when NordicBERT has shown promising results for other tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 109, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 189, |
|
"text": "(Soleimani et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 363, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 398, |
|
"text": "(Feng et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 476, |
|
"end": 498, |
|
"text": "(Conneau et al., 2020;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 523, |
|
"text": "Liu et al., 2019) (tags:", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 991, |
|
"end": 998, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline: Recognizing Textual Entailment", |
|
"sec_num": "5" |
|
}, |
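A minimal sketch of this set-up with the HuggingFace transformers library, using the xlm-roberta-base checkpoint; the learning rates, the single example pair (taken from Table 2a) and the omission of the actual training loop and cross-validation are illustrative simplifications, not the settings reported in Table 6.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3)  # Supported / Refuted / NotEnoughInfo

# Claim and evidence extract are encoded as one sentence pair.
claim = "Udenrigsministeriet har eksisteret siden 1848."
evidence = "Dette er en liste over ministre for Udenrigsministeriet siden oprettelsen af ministeriet i 1848."
inputs = tokenizer(claim, evidence, truncation=True, return_tensors="pt")

# Stage 1: freeze the language model and train only the classification head.
for param in model.base_model.parameters():
    param.requires_grad = False
head_optimizer = torch.optim.RMSprop(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)

# Stage 2 (after the head has been trained): unfreeze and fine-tune everything.
for param in model.parameters():
    param.requires_grad = True
full_optimizer = torch.optim.RMSprop(model.parameters(), lr=2e-5)

logits = model(**inputs).logits          # shape (1, 3)
predicted_label = logits.argmax(dim=-1)  # index into the three classes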
|
{ |
|
"text": "For comparison the multi-layer perceptron and decomposable attention models from the EN-FEVER paper (Thorne et al., 2018a) an F1 score of respectively 73% and 88% on the verification subtask. The comparable performance indicates that pretrained, multilingual, language models are useful for the task, especially considering that DANFEVER is small relative to EN-FEVER. We show the collective test-set confusion matrix of xlm-roberta-large in table 7 and note that it is much easier to disregard the randomized evidence (classify NotEnoughInfo (NEI)), than it is to refute or support claims, which is to be expected.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 122, |
|
"text": "(Thorne et al., 2018a)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline: Recognizing Textual Entailment", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We have presented a human-annotated dataset, DANFEVER, for claim verification in a new language; Danish. DANFEVER can be used for building Danish claim verification systems and for researching & building multilingual claim verification systems. To our knowledge DANFEVER is the first non-English FEVER dataset, and it is openly accessible 3 . Baseline results are presented over four models for the textual-entailment part of the FEVER-task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "2 Available in Huggingface's library:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https: //huggingface.co/transformers/main_ classes/optimizer_schedules.html# transformers.get_cosine_schedule_with_ warmup 3 https://figshare.com/articles/ dataset/DanFEVER_claim_verification_ dataset_for_Danish/14380970", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was supported by the Independent Danish Research Fund through the Verif-AI project grant. We are grateful to our annotators (Jespersen and Thygesen, 2020; Schulte and Binau, 2020).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "A.1 Format DANFEVER contains three sqlite databases (SQLite Consortium, 2000) ; da fever.db, da wikipedia.db and den store danske.db.The databases da wikipedia.db and den store danske.db contain article data from Danish Wikipedia and Den Store Danske respectively. They contain an id-field, which is a numerical ID of the article (the curid for Wikipedia and a simple enumeration for Den Store Danske). They also contain the text and title of each article, as well as the url to that article.The da fever.db database contain the annotated claims. Each row in the database contain a claim and a unique id. With each claims comes the labels verifiable (Verifiable and NotVerifiable) and label (Supported, Refuted and NotEnoughInfo).The evidence column contain information about what articles were used to create and annotate the claim, and is composed by a comma-separated string, with IDs referring to the articles. The ID-format is Y X where Y is either wiki or dsd to indicate whether the article comes from Danish Wikipedia or Den Store Danske, and X is the numerical id from that data-source. Finally the claims that were Verifiable contains an evidence extract which is the text-snippet used to create and annotate the claim. Note that there may be some character-level incongruence between the original articles and the evidence extract, due to formatting and scraping.All three databases are also provided in TSVformat.The data is publicly available at https://figshare.com/articles/ dataset/DanFEVER_claim_ verification_dataset_for_Danish/ 14380970", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 77, |
|
"text": "(SQLite Consortium, 2000)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendices", |
|
"sec_num": null |
|
} |
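A minimal sketch of reading this format with Python's built-in sqlite3 module; the table names (claims, articles), the exact column names, and the underscore-separated ID format (e.g. wiki_833896, dsd_17) are illustrative assumptions, since the schema is only described informally above.

import sqlite3

fever = sqlite3.connect("da_fever.db")
wiki = sqlite3.connect("da_wikipedia.db")
dsd = sqlite3.connect("den_store_danske.db")

def fetch_article(source_id):
    # Resolve an evidence ID such as "wiki_833896" or "dsd_17" to the article text.
    prefix, numeric_id = source_id.rsplit("_", 1)
    db = wiki if prefix == "wiki" else dsd
    row = db.execute("SELECT text FROM articles WHERE id = ?", (numeric_id,)).fetchone()
    return row[0] if row else None

for claim, evidence, label in fever.execute("SELECT claim, evidence, label FROM claims"):
    # The evidence field is a comma-separated list of article IDs.
    article_texts = [fetch_article(e.strip()) for e in evidence.split(",") if e.strip()]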
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. Transactions of the Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bender", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Batya", |
|
"middle": [], |
|
"last": "Friedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "587--604", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00041" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily M. Bender and Batya Friedman. 2018. Data Statements for Natural Language Processing: To- ward Mitigating System Bias and Enabling Better Science. Transactions of the Association for Com- putational Linguistics, 6:587-604.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Unsupervised Cross-lingual Representation Learning at Scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.02116[cs].XLM-R" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised Cross-lingual Representation Learning at Scale. arXiv:1911.02116 [cs]. XLM-R.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Fake News Detection using Multilingual Evidence", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Dementieva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Panchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "775--776", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/DSAA49011.2020.00111" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Dementieva and A. Panchenko. 2020. Fake News Detection using Multilingual Evidence. In 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA), pages 775-776.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Misinformation on Twitter during the Danish national election: A case study", |
|
"authors": [ |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marius", |
|
"middle": [], |
|
"last": "Torben Oskar Albert-Lindqvist", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nanna", |
|
"middle": [], |
|
"last": "Ven\u00f8 Bendsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Inie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the conference for Truth and Trust Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leon Derczynski, Torben Oskar Albert-Lindqvist, Marius Ven\u00f8 Bendsen, Nanna Inie, Jens Egholm Pedersen, and Viktor Due Pedersen. 2019. Misinfor- mation on Twitter during the Danish national elec- tion: A case study. In Proceedings of the conference for Truth and Trust Online.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Pheme: Computing veracity-the fourth challenge of big social data", |
|
"authors": [ |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalina", |
|
"middle": [], |
|
"last": "Bontcheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michal", |
|
"middle": [], |
|
"last": "Lukasik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thierry", |
|
"middle": [], |
|
"last": "Declerck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arno", |
|
"middle": [], |
|
"last": "Scharl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Georgi", |
|
"middle": [], |
|
"last": "Georgiev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petya", |
|
"middle": [], |
|
"last": "Osenova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Toms Pariente Lobo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Kolliakou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Stewart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Extended Semantic Web Conference EU Project Networking session", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leon Derczynski, Kalina Bontcheva, Michal Lukasik, Thierry Declerck, Arno Scharl, Georgi Georgiev, Petya Osenova, Toms Pariente Lobo, Anna Kolli- akou, Robert Stewart, et al. 2015. Pheme: Com- puting veracity-the fourth challenge of big social data. In Proceedings of the Extended Semantic Web Conference EU Project Networking session (ESCW- PN).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [ |
|
"Wei" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Naacl Hlt 2019 -2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies -Proceedings of the Conference", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. Naacl Hlt 2019 -2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies -Proceedings of the Conference, 1:4171- 4186. ISBN: 9781950737130 Publisher: Associa- tion for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Language-agnostic BERT Sentence Embedding", |
|
"authors": [ |
|
{ |
|
"first": "Fangxiaoyu", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinfei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naveen", |
|
"middle": [], |
|
"last": "Arivazhagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2007.01852[cs].ArXiv:2007.01852" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language-agnostic BERT Sentence Embedding. arXiv:2007.01852 [cs]. ArXiv: 2007.01852.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "How can European states respond to Russian information warfare? An analytical framework", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Hellman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charlotte", |
|
"middle": [], |
|
"last": "Wagnsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "European Security", |
|
"volume": "26", |
|
"issue": "2", |
|
"pages": "153--170", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1080/09662839.2017.1294162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Hellman and Charlotte Wagnsson. 2017. How can European states respond to Russian information warfare? An analytical framework. European Secu- rity, 26(2):153-170.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "DaNE: A Named Entity Resource for Danish", |
|
"authors": [ |
|
{ |
|
"first": "Rasmus", |
|
"middle": [], |
|
"last": "Hvingelby", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amalie", |
|
"middle": [ |
|
"Brogaard" |
|
], |
|
"last": "Pauli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Barrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christina", |
|
"middle": [], |
|
"last": "Rosted", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lasse", |
|
"middle": [], |
|
"last": "Malm Lidegaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4597--4604", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rasmus Hvingelby, Amalie Brogaard Pauli, Maria Bar- rett, Christina Rosted, Lasse Malm Lidegaard, and Anders S\u00f8gaard. 2020. DaNE: A Named Entity Re- source for Danish. In Proceedings of The 12th Lan- guage Resources and Evaluation Conference, pages 4597-4604.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Sidsel Latsch Jespersen and Mikkel Ekenberg Thygesen. 2020. Fact Extraction and Verification in Danish", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sidsel Latsch Jespersen and Mikkel Ekenberg Thyge- sen. 2020. Fact Extraction and Verification in Dan- ish. Master's thesis, IT University of Copenhagen.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The Lacunae of Danish Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Kirkedal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalie", |
|
"middle": [], |
|
"last": "Schluter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 22nd Nordic Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "356--362", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Kirkedal, Barbara Plank, Leon Derczynski, and Natalie Schluter. 2019. The Lacunae of Dan- ish Natural Language Processing. In Proceedings of the 22nd Nordic Conference on Computational Lin- guistics, pages 356-362.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretrain- ing Approach. arXiv:1907.11692 [cs].", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Danish Fact Verification: An End-to-End Machine Learning System for Automatic Fact-Checking of Danish Textual Claims", |
|
"authors": [ |
|
{ |
|
"first": "Henri", |
|
"middle": [], |
|
"last": "Schulte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julie", |
|
"middle": [ |
|
"Christine" |
|
], |
|
"last": "Binau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Henri Schulte and Julie Christine Binau. 2020. Danish Fact Verification: An End-to-End Machine Learn- ing System for Automatic Fact-Checking of Danish Textual Claims. Master's thesis, IT University of Copenhagen.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "BERT for Evidence Retrieval and Claim Verification", |
|
"authors": [ |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Soleimani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcel", |
|
"middle": [], |
|
"last": "Worring", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Advances in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "359--366", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amir Soleimani, Christof Monz, and Marcel Worring. 2020. BERT for Evidence Retrieval and Claim Ver- ification. In Advances in Information Retrieval, pages 359-366, Cham. Springer International Pub- lishing.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The SQLite Consortium", |
|
"authors": [], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "The SQLite Consortium. 2000. SQLite. www. sqlite.org.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Adversarial attacks against Fact Extraction and VERification", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Thorne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1903.05543[cs].ArXiv:1903.05543" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Thorne and Andreas Vlachos. 2019. Adversar- ial attacks against Fact Extraction and VERification. arXiv:1903.05543 [cs]. ArXiv: 1903.05543.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "FEVER: A large-scale dataset for fact extraction and verification", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Thorne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Christodoulopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arpit", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "809--819", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018a. FEVER: A large-scale dataset for fact extraction and verification. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 809-819.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "The Fact Extraction and VERification (FEVER) Shared Task", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Thorne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oana", |
|
"middle": [], |
|
"last": "Cocarascu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Christodoulopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arpit", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-5501" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2018b. The Fact Extraction and VERification (FEVER) Shared Task. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 1-9, Brussels, Belgium. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The Second Fact Extraction and VERification (FEVER2.0) Shared Task", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Thorne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oana", |
|
"middle": [], |
|
"last": "Cocarascu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Christodoulopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arpit", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-6601" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2019. The Second Fact Extraction and VERifica- tion (FEVER2.0) Shared Task. In Proceedings of the Second Workshop on Fact Extraction and VER- ification (FEVER), pages 1-6, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Information disorder: Toward an interdisciplinary framework for research and policy making", |
|
"authors": [ |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Wardle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hossein", |
|
"middle": [], |
|
"last": "Derakhshan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claire Wardle and Hossein Derakhshan. 2017. Infor- mation disorder: Toward an interdisciplinary frame- work for research and policy making. Council of Europe report, 27.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Transformers: State-of-theart natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Julien Chaumond, Lysandre Debut, Vic- tor Sanh, Clement Delangue, Anthony Moi, Pier- ric Cistac, Morgan Funtowicz, Joe Davison, Sam Shleifer, et al. 2020. Transformers: State-of-the- art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing: System Demonstrations, pages 38-45.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Detection and resolution of rumours in social media: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Arkaitz", |
|
"middle": [], |
|
"last": "Zubiaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmet", |
|
"middle": [], |
|
"last": "Aker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalina", |
|
"middle": [], |
|
"last": "Bontcheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Liakata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Procter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACM Computing Surveys (CSUR)", |
|
"volume": "51", |
|
"issue": "2", |
|
"pages": "1--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2018. Detection and resolution of rumours in social media: A survey. ACM Computing Surveys (CSUR), 51(2):1-36.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "(b) A Refuted claim. Claim 2767: \"Lau Lauritzen har instrueret b\u00e5de stumfilmen Skruebraekkeren og vikingefilmen N\u00e5r raeven flyver.\" Lau Lauritzen directed the silent film Skruebraekkeren and the viking film N\u00e5r Raeven Flyver. Evidence Extract: \"\" Evidence Entities: wiki 833896 Verifiable: NotVerifiable Label: NotEnoughInfo (c) A NotEnoughInfo claim.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Annotated classes in ENFEVER." |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>Evidence Entities: wiki 93781</td></tr><tr><td>Verifiable: Verifiable</td></tr><tr><td>Label: Supported</td></tr><tr><td>(a) A Supported claim.</td></tr><tr><td>Claim 1306: \"Hugh Hudson er f\u00f8dt i England i 1935.\"</td></tr><tr><td>Hugh Hudson was born in England in 1935.</td></tr><tr><td>Evidence Extract: \"Hugh Hudson (f\u00f8dt 25. august</td></tr><tr><td>1936 i London, England) er en britisk filminstrukt\u00f8r.\"</td></tr><tr><td>Hugh Hudson (born 25th of August 1936 in London,</td></tr><tr><td>England) is a British film director.</td></tr><tr><td>Evidence Entities: wiki 397805</td></tr><tr><td>Verifiable: Verifiable</td></tr><tr><td>Label: Refuted</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Claim 3152: \"Udenrigsministeriet har eksisteret siden 1848.\" The Ministry of Foreign Affairs has existed since 1848. Evidence Extract: \"Dette er en liste over ministre for Udenrigsministeriet siden oprettelsen af ministeriet i 1848.\" This is a list of ministers of the Ministry of Foreign Affairs since it was founded in 1848." |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Examples of claims. English translations are in italic." |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td/><td>Median</td><td>Mean</td><td>SD</td></tr><tr><td>Claims</td><td/><td/><td/></tr><tr><td># Characters</td><td>45</td><td>50.18</td><td>22.02</td></tr><tr><td># Tokens</td><td>7</td><td>8.46</td><td>3.86</td></tr><tr><td># Evidence Entities</td><td>1</td><td>1.10</td><td>0.34</td></tr><tr><td>Evidence Extracts</td><td/><td/><td/></tr><tr><td># Characters</td><td colspan=\"3\">260 305.56 257.20</td></tr><tr><td># Tokens</td><td>47</td><td>53.75</td><td>44.64</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Annotated classes in DANFEVER." |
|
}, |
|
"TABREF8": { |
|
"content": "<table><tr><td/><td/><td>Predicted</td></tr><tr><td/><td>NEI</td><td>R</td><td>S</td></tr><tr><td>True Class</td><td colspan=\"3\">NEI 1118 R 6 1643 7 S 4 441 2679 2 507</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Model Evaluations. F1 score is weighted-mean. Params: number of parameters in model. Time: total training & evaluation time using 1 NVIDIA Tesla V100 PCIe 32 GB card; RMSProp optimizer. BS: batch size. LR: maximum learning rate in single-round, cosine schedule w/ 10% warm-up. 2 WD: weight decay. DR: dropout rate." |
|
}, |
|
"TABREF9": { |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Test-set confusion matrix of xlm-roberta-large classifier." |
|
} |
|
} |
|
} |
|
} |