size_categories:
- 100M<n<1B
---
# Dataset Card for HEADLINES

![headline_image_with_title.jpg](https://s3.amazonaws.com/moonup/production/uploads/61654589b5ec555e8e9c203a/fJiAy43PFD3FBa1aMSzHz.jpeg)

## Dataset Description

- **Homepage:** [Dell Research homepage](https://dell-research-harvard.github.io/)
- **Repository:** [Github repository](https://github.com/dell-research-harvard)
- **Paper:** [arXiv submission](https://arxiv.org/abs/tbd)
- **Point of Contact:** [Melissa Dell](mailto:melissadell@fas.harvard.edu)

### Dataset Summary

HEADLINES is a massive English-language semantic similarity dataset, containing 396,001,930 pairs of different headlines for the same newspaper article, taken from historical U.S. newspapers covering the period 1920-1989.

### Languages

The text in the dataset is in English.
## Dataset Structure

Each year in the dataset is stored in a distinct file (e.g. 1952_headlines.pkl), giving a total of 70 files.

The data is presented in the form of clusters, rather than pairs, to eliminate duplication of text data and minimise the storage size of the dataset. Below we give an example of how to convert the dataset into pairs.
### Dataset Instances

An example from the HEADLINES dataset looks like:

```python
{
    "headline": "FRENCH AND BRITISH BATTLESHIPS IN MEXICAN WATERS",
    "date": "May-14-1920",
    "newspaper": "cuba-daylight",
    "state": "kansas",
    "cluster": 4
}
```
### Dataset Fields

- `headline`: headline text.
- `date`: the date of publication of the newspaper article, as a string in the form YYYY-MM-DD.
- `newspaper`: name of the newspaper that published the headline.
- `state`: state of the newspaper that published the headline.
- `cluster`: a number that is shared with all other headlines for the same article. This number is unique across all year files.
## Usage

[Coming soon]
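
While the official usage example is still to come, here is a minimal sketch of how a single year file could be expanded from clusters into headline pairs. It assumes (this is an assumption, not documented above) that each `.pkl` file unpickles into a list of dictionaries with the fields described under Dataset Fields; if the files instead hold a pandas DataFrame, the same grouping logic applies via `groupby("cluster")`.

```python
import pickle
from collections import defaultdict
from itertools import combinations

# Hypothetical illustration: assumes 1952_headlines.pkl unpickles into a list of
# dicts like {"headline": ..., "date": ..., "newspaper": ..., "state": ..., "cluster": ...}.
with open("1952_headlines.pkl", "rb") as f:
    records = pickle.load(f)

# Group headlines by cluster id; all headlines in a cluster describe the same article.
clusters = defaultdict(list)
for record in records:
    clusters[record["cluster"]].append(record["headline"])

# Expand each cluster into all within-cluster pairs (positive semantic similarity pairs).
pairs = [
    (headline_a, headline_b)
    for cluster_headlines in clusters.values()
    for headline_a, headline_b in combinations(cluster_headlines, 2)
]

print(f"Built {len(pairs)} positive pairs from {len(clusters)} clusters.")
```

Because cluster numbers are unique across all year files, the same loop can be run over all 70 files and the resulting pair lists concatenated without clusters colliding.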
## Dataset Creation

### Source Data

The dataset was constructed using a large corpus of newly digitized articles from off-copyright, local U.S. newspapers.
Many of these newspapers reprint articles from newswires, such as the Associated Press, but the headlines are written locally.
The dataset comprises different headlines for the same article.

#### Initial Data Collection and Normalization
To construct HEADLINES, we digitize front pages of off-copyright newspaper page scans, localizing and OCRing individual content regions like headlines and articles. The headlines, bylines, and article texts that form full articles span multiple bounding boxes, often arranged with complex layouts, and we associate them using a model that combines layout information and language understanding. Then, we use neural methods to accurately predict which articles come from the same underlying source, in the presence of noise and abridgement.

We remove all headline pairs whose Levenshtein edit distance, divided by the length of the shorter headline in the pair, is below 0.1, with the aim of removing pairs that are exact duplicates up to OCR noise.
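
For illustration only (this is not the authors' released code), the filter described above can be written as a length-normalised Levenshtein threshold; both helper functions below are hypothetical names implemented from scratch so the sketch stays self-contained.

```python
def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming Levenshtein edit distance."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, char_a in enumerate(a, start=1):
        current = [i]
        for j, char_b in enumerate(b, start=1):
            insert_cost = current[j - 1] + 1
            delete_cost = previous[j] + 1
            substitute_cost = previous[j - 1] + (char_a != char_b)
            current.append(min(insert_cost, delete_cost, substitute_cost))
        previous = current
    return previous[-1]


def keep_pair(headline_a: str, headline_b: str, threshold: float = 0.1) -> bool:
    """Keep a pair only if its length-normalised edit distance is at least `threshold`."""
    shorter = min(len(headline_a), len(headline_b))
    if shorter == 0:
        return False  # degenerate case: treat empty headlines as duplicates
    return levenshtein(headline_a, headline_b) / shorter >= threshold
```

With the stated threshold of 0.1, a pair is dropped when its edit distance is less than 10% of the shorter headline's length, i.e. when the two headlines differ only by minor OCR-level noise.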
#### Who are the source language producers?

The text data was originally produced by journalists of local U.S. newspapers.

### Annotations

The dataset does not contain any additional annotations.
### Personal and Sensitive Information

The dataset may contain information about individuals, to the extent that this is covered in the headlines of news stories. However, we make no additional information about individuals publicly available.
### Data Description

The dataset contains 396,001,930 positive semantic similarity pairs, from 1920 to 1989.

![image (3).png](https://s3.amazonaws.com/moonup/production/uploads/61654589b5ec555e8e9c203a/vKeR-SEEfYte6ZZbdpaq3.png)

It contains headlines from all 50 states.

![map (1).png](https://s3.amazonaws.com/moonup/production/uploads/61654589b5ec555e8e9c203a/0WMdO8Fo1nfYiId4SlWaL.png)
## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to widen the range of language and topics for training semantic similarity models.

This will facilitate the study of semantic change across space and time.

Specific biases in the dataset are considered in the next section.
### Discussion of Biases

The headlines in the dataset may reflect attitudes and values from the period in which they were written, 1920-1989. This may include instances of racism, sexism and homophobia.

We also note that, since all the newspapers considered are from the U.S., the data is likely to present a Western perspective on the news stories of the day.
### Other Known Limitations

As the dataset is sourced from digitised text, it contains some OCR errors.
## Additional Information

### Licensing Information

HEADLINES is released under the Creative Commons CC-BY 2.0 license.
### Dataset Curators

This dataset was created by Emily Silcock and Melissa Dell. For more information, see [Dell Research Harvard](https://dell-research-harvard.github.io/).
### Citation Information

Citation coming soon.
|