---
license: apache-2.0
annotations_creators:
- machine-generated
language_creators:
- found
task_categories:
- text-classification
language:
- en
multilinguality:
- monolingual
tags:
- fake-news-detection
- fake news
- english
- nlp
task_ids:
- multi-class-classification
pretty_name: Fake News Corpus (Cleaned)
size_categories:
- 1M<n<10M
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: type
    dtype: string
  - name: domain
    dtype: string
  - name: scraped_at
    dtype: string
  - name: url
    dtype: string
  - name: authors
    dtype: string
  - name: title
    dtype: string
  - name: content
    dtype: string
---

# Dataset Card for "Fake News Corpus (Cleaned)"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
<!--
- **Paper:** Fake News Opensources
-->
- **Homepage:** [https://github.com/AndyTheFactory/FakeNewsDataset](https://github.com/AndyTheFactory/FakeNewsDataset)
- **Repository:** [https://github.com/AndyTheFactory/FakeNewsDataset](https://github.com/AndyTheFactory/FakeNewsDataset)
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)

### Dataset Summary
A consolidated and cleaned-up version of the opensources.co Fake News dataset, and a valuable resource for research and analysis in the domain of misinformation, disinformation, and fake news detection.

The original Fake News Corpus comprises 8,529,090 individual articles, categorized into 12 distinct classes: reliable, unreliable, political, bias, fake, conspiracy, rumor, clickbait, junk science, satire, hate and unknown.

The articles were systematically gathered by the original authors through web scraping conducted between the end of 2017 and the beginning of 2018. They were sourced from 647 distinct news websites and date from the years leading up to the 2016 US elections and the year after.

Documents were classified based on their source, using a curated website list provided by opensources.co, a crowdsourced platform for tracking online information sources. Because labels are assigned per source rather than per article, the class distribution is highly imbalanced. The proposed source classification method was based on six criteria:
- title and domain name analysis,
- "About Us" analysis,
- source or study mentioning,
- writing style analysis,
- aesthetic analysis,
- social media analysis.

After extensive data cleaning and duplicate removal, 5,915,569 records are retained.

### Languages

English

## Dataset Structure

### Data Instances

An example record looks as follows.

```python
{
    'id': 4059480,
    'type': 'political',
    'domain': 'dailycaller.com',
    'scraped_at': '2017-11-27',
    'url': 'http://dailycaller.com/buzz/massachusettsunited-states/page/2/',
    'authors': 'Jeff Winkler, Jonathan Strong, Ken Blackwell, Pat Mcmahon, Julia Mcclatchy, Admin, Matt Purple',
    'title': 'The Daily Caller',
    'content': 'New Hampshire is the state with the highest median income in the nation, according to the U.S. Census Bureau’s report on income, poverty and health insurance',
}
```
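
Many fake-news detection setups reduce the 12 source-level labels to a binary task, e.g. keeping only `reliable` vs `fake` records. A minimal sketch of such a filter (the helper name and default label set are our own illustration, not part of any released code):

```python
def filter_by_type(records, keep=('reliable', 'fake')):
    """Keep only records whose source-level `type` is in `keep`."""
    return [r for r in records if r['type'] in keep]
```

For instance, `filter_by_type(batch)` would drop `satire`, `unknown` and the other mixed classes before training a binary classifier.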

### Data Fields

- `id`: the unique article ID
- `type`: the label of the record (one of: reliable, unreliable, political, bias, fake, conspiracy, rumor, clickbait, junk science, satire, hate, unknown)
- `domain`: the domain of the source website
- `scraped_at`: date of the original scrape run
- `url`: original article URL
- `authors`: comma-separated list of scraped authors
- `title`: original scraped article title
- `content`: full article text

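Since labels and URLs were not manually verified, it can be worth checking records for completeness before use. A minimal sketch (the function name and required-field set are our own, derived from the field list above):

```python
# Fields every record is expected to carry (see the field list above).
REQUIRED_FIELDS = ('id', 'type', 'domain', 'scraped_at', 'url',
                   'authors', 'title', 'content')

def missing_fields(record: dict) -> list:
    """Return the names of expected fields that are absent from a record."""
    return [f for f in REQUIRED_FIELDS if f not in record]
```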
134
+ ### Data Splits
135
+
136
+
137
+ Label | Nr Records
138
+ :---| :---:
139
+ reliable | 1807323
140
+ political | 968205
141
+ bias | 769874
142
+ fake | 762178
143
+ conspiracy | 494184
144
+ rumor | 375963
145
+ unknown | 230532
146
+ clickbait | 174176
147
+ unreliable | 104537
148
+ satire | 84735
149
+ junksci | 79099
150
+ hate | 64763
151
+ |
152
+ total | 5915569
153
+
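The class imbalance mentioned in the summary can be quantified directly from this table. A small self-contained sketch (counts copied from the table above):

```python
# Per-label record counts, copied from the Data Splits table.
LABEL_COUNTS = {
    'reliable': 1_807_323, 'political': 968_205, 'bias': 769_874,
    'fake': 762_178, 'conspiracy': 494_184, 'rumor': 375_963,
    'unknown': 230_532, 'clickbait': 174_176, 'unreliable': 104_537,
    'satire': 84_735, 'junksci': 79_099, 'hate': 64_763,
}

total = sum(LABEL_COUNTS.values())  # 5,915,569 records
# Fraction of the corpus held by each label.
shares = {label: count / total for label, count in LABEL_COUNTS.items()}
```

The largest class (`reliable`) covers roughly 30.6% of the corpus, while the smallest (`hate`) covers about 1.1%, an imbalance of nearly 28x.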
## Dataset Creation

### Curation Rationale

### Source Data

News articles scraped from 647 distinct news websites.

#### Initial Data Collection and Normalization

#### Who are the source language producers?

Journalists and writers of the scraped news websites.

### Annotations

#### Annotation process

Labels were assigned at the source level, based on the curated website list provided by opensources.co.

#### Who are the annotators?

Journalists and researchers who curated the opensources.co website list.

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

The dataset was not manually filtered, so some labels may be incorrect and some URLs may point to other pages on a website rather than to the actual article. However, because the corpus is intended for training machine learning algorithms, these problems should not pose a practical issue.

Additionally, once the dataset is finalised (for now only about 80% has been cleaned and published), I do not intend to update it, so it may quickly become outdated for purposes other than content-based algorithms. However, any contributions are welcome!

## Additional Information

### Dataset Curators

### Licensing Information

This dataset is available and distributed under the Apache-2.0 license.

### Citation Information

```
tbd
```

### Contributions