---
language:
- en
pretty_name: TheGuardian.com Website Articles
license:
- apache-2.0
task_categories:
- text-scraping
- data-curation
- natural-language-processing
task_ids:
- text-classification
- information-extraction
- topic-modeling
- sentiment-analysis
multilinguality:
- monolingual
source_datasets:
- original
annotations_creators:
- no-annotation
formats:
- .csv
---

# Dataset Structure

## Data Instances

A typical data row contains:

- URL: [string] The URL of the scraped article.
- Article Category: [string] The category assigned to the article.
- Publication Date: [datetime] The date and time the article was published.
- Article Author: [string] The author of the article.
- Article Title: [string] The title of the article.
- Article Contents: [string] The full text content of the article.
- Data Quality: [enum] 'full' if all data from the article was scraped, 'partial' if some data may be missing.
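Since the field list above maps directly onto CSV columns, pandas is a quick way to inspect the data. A minimal sketch, assuming the CSV headers match the field names above (the exact header spelling in the released file is an assumption):

```python
import io
import pandas as pd

# Hypothetical two-row sample matching the documented schema; the real CSV is larger.
sample_csv = io.StringIO(
    "URL,Article Category,Publication Date,Article Author,Article Title,Article Contents,Data Quality\n"
    "https://www.theguardian.com/example,World,2023-05-01T08:30:00Z,Jane Doe,Example Title,Full article text...,full\n"
    "https://www.theguardian.com/example2,Sport,2023-05-02T10:00:00Z,John Roe,Another Title,Partial text...,partial\n"
)

# Parse the publication timestamp as a datetime rather than a plain string.
df = pd.read_csv(sample_csv, parse_dates=["Publication Date"])

# Keep only rows where every field was successfully scraped.
complete = df[df["Data Quality"] == "full"]
print(len(df), len(complete))
```

Filtering on `Data Quality == 'full'` drops the rows where the scraper could not recover every field.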

# Dataset Creation

## Curation Rationale

The dataset was curated to facilitate research and development in natural language processing tasks such as text classification and information extraction from news articles.

## Source Data

### Initial Data Collection and Normalization

Articles and similar content were scraped from the theguardian.com website.
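The card does not document the collection pipeline, but the field-extraction step of a scrape like this can be sketched with the standard library's `html.parser`. The markup below is hypothetical and the class names are assumptions; The Guardian's real page templates differ:

```python
from html.parser import HTMLParser

# Hypothetical article markup; the actual scraper and templates are not documented here.
PAGE = """
<html><body>
<h1 class="headline">Example headline</h1>
<span class="byline">Jane Doe</span>
<time datetime="2023-05-01T08:30:00Z">1 May 2023</time>
<div class="content"><p>First paragraph.</p><p>Second paragraph.</p></div>
</body></html>
"""

class ArticleParser(HTMLParser):
    """Collects headline, byline, publication date, and body paragraphs."""

    def __init__(self):
        super().__init__()
        self.fields = {"title": "", "author": "", "date": "", "paragraphs": []}
        self._target = None      # which field the current text run belongs to
        self._in_content = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        cls = attrs.get("class", "")
        if tag == "h1" and cls == "headline":
            self._target = "title"
        elif tag == "span" and cls == "byline":
            self._target = "author"
        elif tag == "time":
            self.fields["date"] = attrs.get("datetime", "")
        elif tag == "div" and cls == "content":
            self._in_content = True
        elif tag == "p" and self._in_content:
            self.fields["paragraphs"].append("")
            self._target = "paragraph"

    def handle_endtag(self, tag):
        if tag == "div":
            self._in_content = False
        self._target = None

    def handle_data(self, data):
        if self._target == "paragraph":
            self.fields["paragraphs"][-1] += data
        elif self._target in ("title", "author"):
            self.fields[self._target] += data

parser = ArticleParser()
parser.feed(PAGE)
article = parser.fields
article["contents"] = " ".join(article.pop("paragraphs"))
print(article)
```

A production scraper would also need fetching, rate limiting, and the 'full'/'partial' quality flag described above (e.g. 'partial' when any field comes back empty).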

## Who are the dataset creators?

The dataset was created by Stefan Carter (stefan171@gmail.com).

## Who are the intended users?

Researchers, data scientists, and developers interested in text analysis of news articles.

## Legal and Ethical Considerations

### Dataset Licensing

The dataset is distributed under the Apache License 2.0, which allows for a wide range of uses.

### Data Privacy

The dataset contains publicly available information from published articles. However, users should ensure they comply with The Guardian's terms of use and privacy policies when using this data.

# Considerations

## Content Warning

The dataset may contain explicit content or material that is not considered child-friendly. Users should exercise caution and review the content before using it in applications accessible to a younger audience.

## Potential Biases

The dataset reflects the content as published on theguardian.com and may inherently contain biases present in the source material. Users should be aware of these potential biases when analyzing the dataset and consider them in their research or applications.