---
language:
- de
configs:
- config_name: default
  data_files:
  - split: train
    path: "splits/HisGermaNER_v0_train.tsv"
  - split: validation
    path: "splits/HisGermaNER_v0_dev.tsv"
  - split: test
    path: "splits/HisGermaNER_v0_test.tsv"
  sep: "\t"
---

# HisGermaNER: NER Datasets for Historical German

<img src="https://huggingface.co/datasets/stefan-it/HisGermaNER/resolve/main/assets/logo.jpeg" width="500" height="500" />

In this repository, we release another NER dataset built from historical German newspapers.

## Newspaper corpus

For the first release of our dataset, we select 11 newspaper issues published between 1720 and 1840 from the Austrian National Library (ONB), resulting in 100 pages in total:

| Year | ONB ID             | Newspaper                        | URL                                                                      | Pages |
| ---- | ------------------ | -------------------------------- | ------------------------------------------------------------------------ | ----- |
| 1720 | `ONB_wrz_17200511` | Wiener Zeitung                   | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=wrz&datum=17200511) | 10    |
| 1730 | `ONB_wrz_17300603` | Wiener Zeitung                   | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=wrz&datum=17300603) | 14    |
| 1740 | `ONB_wrz_17401109` | Wiener Zeitung                   | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=wrz&datum=17401109) | 12    |
| 1770 | `ONB_rpr_17700517` | Reichspostreuter                 | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=rpr&datum=17700517) | 4     |
| 1780 | `ONB_wrz_17800701` | Wiener Zeitung                   | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=wrz&datum=17800701) | 24    |
| 1790 | `ONB_pre_17901030` | Preßburger Zeitung               | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=pre&datum=17901030) | 12    |
| 1800 | `ONB_ibs_18000322` | Intelligenzblatt von Salzburg    | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=ibs&datum=18000322) | 8     |
| 1810 | `ONB_mgs_18100508` | Morgenblatt für gebildete Stände | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=mgs&datum=18100508) | 4     |
| 1820 | `ONB_wan_18200824` | Der Wanderer                     | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=wan&datum=18200824) | 4     |
| 1830 | `ONB_ild_18300713` | Das Inland                       | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=ild&datum=18300713) | 4     |
| 1840 | `ONB_hum_18400625` | Der Humorist                     | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=hum&datum=18400625) | 4     |

## Data Workflow

In the first step, we obtain original scans from ONB for our selected newspapers. In the second step, we perform OCR using [Transkribus](https://readcoop.eu/de/transkribus/).

We use the [Transkribus print M1](https://readcoop.eu/model/transkribus-print-multi-language-dutch-german-english-finnish-french-swedish-etc/) model to perform OCR.
Note: we experimented with an existing NewsEye model, but the print M1 model is newer and led to better performance in our preliminary experiments.

Only layout hints/fixes were made in Transkribus; no OCR corrections or normalizations were performed at this stage.

<img src="https://huggingface.co/datasets/stefan-it/HisGermaNER/resolve/main/assets/transkribus_wrz_17401109.png" width="500" height="500" />

We export all newspaper pages to plain text and normalize hyphenation and the `=` character.
After normalization, we tokenize the newspaper pages using the `PreTokenizer` of the [hmBERT](https://huggingface.co/hmbert) model.
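
The following sketch illustrates both steps. The dehyphenation regex is a simplified assumption (historical German printing often uses `=` as a line-break hyphen), and we use the `BertPreTokenizer` from the `tokenizers` library as a stand-in for the hmBERT pre-tokenizer (hmBERT is a BERT-style model):

```python
import re

from tokenizers.pre_tokenizers import BertPreTokenizer


def normalize(text: str) -> str:
    # Join words that were split across line breaks with "=" or "-".
    # NOTE: illustrative rule only; the actual normalization may differ.
    return re.sub(r"[=\-]\s*\n\s*", "", text)


page = "Intelligenz=\nblatt von Salzburg."
tokens = [token for token, _ in BertPreTokenizer().pre_tokenize_str(normalize(page))]
print(tokens)  # ['Intelligenzblatt', 'von', 'Salzburg', '.']
```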

After pre-tokenization, we import the corpus into Argilla and start annotating named entities.
Note: we perform annotation at page/document level, so no prior sentence segmentation is needed or performed.
During annotation, we also manually mark sentence boundaries with a special `EOS` tag.
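
A minimal sketch of this import step, assuming the Argilla v1 client API (`TokenClassificationRecord`); the metadata fields and dataset name are hypothetical:

```python
import argilla as rg

# One record per newspaper page, since annotation happens at page/document level.
tokens = ["den", "Pöbel", "noch", "mehr", "in", "Harnisch", "."]

record = rg.TokenClassificationRecord(
    text=" ".join(tokens),
    tokens=tokens,
    metadata={"onb_id": "ONB_wrz_17800701", "page_nr": 12},  # hypothetical fields
)
rg.log(records=[record], name="hisgermaner-v0")  # hypothetical dataset name
```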

<img src="https://huggingface.co/datasets/stefan-it/HisGermaNER/resolve/main/assets/argilla_wrz_17401109.png" width="600" height="600" />

After the annotation process, the dataset is exported into a CoNLL-like format.
The `EOS` tag is removed, and the end-of-sentence information is stored in a dedicated column instead.
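
Conceptually, the conversion works like the following sketch, assuming `EOS` appears as a standalone marker token in the annotated token stream:

```python
def eos_to_misc(rows):
    """Drop standalone EOS markers and flag the preceding token instead.

    `rows` is a list of (token, ne_tag) pairs; returns (token, ne_tag, misc)
    triples matching the CoNLL-like format described below.
    """
    out = []
    for token, tag in rows:
        if token == "EOS":
            prev_token, prev_tag, _ = out[-1]
            out[-1] = (prev_token, prev_tag, "EndOfSentence")
        else:
            out.append((token, tag, "_"))
    return out


rows = [("in", "O"), ("Harnisch", "O"), (".", "O"), ("EOS", "O"), ("Sie", "O")]
for token, tag, misc in eos_to_misc(rows):
    print(f"{token}\t{tag}\t{misc}")
```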

## Annotation Guidelines

We use the same NE types (`PER`, `LOC` and `ORG`) and annotation guidelines as the awesome [Europeana NER Corpora](https://github.com/cneud/ner-corpora).

Furthermore, we introduce some additional annotation rules:

* `PER`: We include e.g. `Kaiser`, `Lord`, `Cardinal` or `Graf` in the NE, but not `Herr`, `Fräulein`, `General` or other ranks/grades.
* `LOC`: We exclude `Königreich` from the NE.

## Dataset Format

Our dataset format is inspired by the [HIPE-2022 Shared Task](https://github.com/hipe-eval/HIPE-2022-data?tab=readme-ov-file#hipe-format-and-tagging-scheme).
Here's an example of an annotated document:

```txt
TOKEN	NE-COARSE-LIT	MISC

-DOCSTART-	O	_

# onb:id = ONB_wrz_17800701
# onb:image_link = https://anno.onb.ac.at/cgi-content/anno?aid=wrz&datum=17800701&seite=12
# onb:page_nr = 12
# onb:publication_year_str = 17800701
den	O	_
Pöbel	O	_
noch	O	_
mehr	O	_
in	O	_
Harnisch	O	_
.	O	EndOfSentence
Sie	O	_
legten	O	_
sogleich	O	_
```

Note: we include a `-DOCSTART-` marker to allow, e.g., document-level features for NER as proposed in the [FLERT](https://arxiv.org/abs/2011.06993) paper.
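
A minimal reader for this format could look like the following sketch: it skips the header and the `#` metadata lines, starts a new document at each `-DOCSTART-`, and closes a sentence whenever the MISC column carries `EndOfSentence`:

```python
def read_documents(path):
    """Parse the CoNLL-like TSV into documents, each a list of
    sentences, each a list of (token, ne_tag) pairs."""
    documents, sentence = [], []
    with open(path, encoding="utf-8") as f:
        next(f)  # skip the "TOKEN  NE-COARSE-LIT  MISC" header
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):
                continue
            token, tag, misc = line.split("\t")
            if token == "-DOCSTART-":
                documents.append([])
                sentence = []
                continue
            sentence.append((token, tag))
            if "EndOfSentence" in misc:
                documents[-1].append(sentence)
                sentence = []
    if sentence:  # flush a trailing sentence without an explicit end marker
        documents[-1].append(sentence)
    return documents


docs = read_documents("splits/HisGermaNER_v0_train.tsv")
print(len(docs))  # should print 73 for the training split
```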

## Dataset Splits & Stats

For training powerful NER models on the dataset, we manually split it at document level into training, development and test splits.

The training split consists of 73 documents, the development split of 13 documents, and the test split of 14 documents.

Dehyphenation is the one and only preprocessing step we perform. The final dataset splits can be found in the `splits` folder of this dataset repository.
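
For a quick look at a split without a custom parser, pandas can read the TSV directly; this is a sketch that treats the `onb:*` metadata lines as comments (note that a token containing a literal `#` would be truncated by this):

```python
import csv

import pandas as pd

train = pd.read_csv(
    "splits/HisGermaNER_v0_train.tsv",
    sep="\t",
    comment="#",             # drop the "# onb:..." metadata lines
    quoting=csv.QUOTE_NONE,  # tokens may contain quote characters
    skip_blank_lines=True,
)
print(train["NE-COARSE-LIT"].value_counts())
```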

Some dataset statistics - instances per class:

| Class | Training | Development | Test |
| ----- | -------- | ----------- | ---- |
| `PER` | 942      | 308         | 238  |
| `LOC` | 749      | 217         | 216  |
| `ORG` | 16       |   3         | 11   |

Number of sentences (incl. document marker) per split:

|           | Training | Development | Test |
| --------- | -------- | ----------- | ---- |
| Sentences | 1,539    | 406         | 400  |

## Release Cycles

We plan to release updated versions of this dataset on a regular basis (e.g. monthly).
For now, we want to collect some feedback about the dataset first, so we use `v0` as the current version.

## Questions & Feedback

Please open a new discussion [here](https://huggingface.co/datasets/stefan-it/HisGermaNER/discussions) for questions or feedback!

## License

The dataset is (currently) licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).