Commit a1eb14f (verified): Initial commit.

Files changed:
- .gitattributes +57 -0
- README.md +160 -0
- test.parquet +3 -0
- train.parquet +3 -0
- validation.parquet +3 -0
.gitattributes
ADDED
@@ -0,0 +1,57 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.lz4 filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+# Audio files - uncompressed
+*.pcm filter=lfs diff=lfs merge=lfs -text
+*.sam filter=lfs diff=lfs merge=lfs -text
+*.raw filter=lfs diff=lfs merge=lfs -text
+# Audio files - compressed
+*.aac filter=lfs diff=lfs merge=lfs -text
+*.flac filter=lfs diff=lfs merge=lfs -text
+*.mp3 filter=lfs diff=lfs merge=lfs -text
+*.ogg filter=lfs diff=lfs merge=lfs -text
+*.wav filter=lfs diff=lfs merge=lfs -text
+# Image files - uncompressed
+*.bmp filter=lfs diff=lfs merge=lfs -text
+*.gif filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
+*.tiff filter=lfs diff=lfs merge=lfs -text
+# Image files - compressed
+*.jpg filter=lfs diff=lfs merge=lfs -text
+*.jpeg filter=lfs diff=lfs merge=lfs -text
+*.webp filter=lfs diff=lfs merge=lfs -text
+test.parquet filter=lfs diff=lfs merge=lfs -text
+train.parquet filter=lfs diff=lfs merge=lfs -text
+validation.parquet filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,160 @@
+---
+license: cc-by-4.0
+task_categories:
+- token-classification
+language:
+- hr
+tags:
+- wikidata
+- wikipedia
+- wikification
+pretty_name: WikiAnc HR
+size_categories:
+- 1M<n<10M
+---
+
+# Dataset Card for WikiAnc HR
+
+## Table of Contents
+- [Dataset Description](#dataset-description)
+  - [Dataset Summary](#dataset-summary)
+  - [Supported Tasks](#supported-tasks)
+  - [Languages](#languages)
+- [Dataset Structure](#dataset-structure)
+  - [Data Instances](#data-instances)
+  - [Data Fields](#data-fields)
+  - [Data Splits](#data-splits)
+- [Additional Information](#additional-information)
+  - [Licensing Information](#licensing-information)
+
+## Dataset Description
+
+- **Repository:** [WikiAnc repository](https://github.com/cyanic-selkie/wikianc)
+
+### Dataset Summary
+
+The WikiAnc HR dataset is automatically generated from the Croatian Wikipedia and Wikidata dumps (March 1, 2023).
+
+The code for generating the dataset can be found [here](https://github.com/cyanic-selkie/wikianc).
+
+### Supported Tasks
+
+- `wikification`: The dataset can be used to train a model for wikification.
+
+### Languages
+
+The text in the dataset is in Croatian. The associated BCP-47 code is `hr`.
+
+You can find the English version [here](https://huggingface.co/datasets/cyanic-selkie/wikianc-en).
+
+## Dataset Structure
+
+### Data Instances
+
+A typical data point represents a paragraph in a Wikipedia article.
+
+The `paragraph_text` field contains the original text as an NFC normalized, UTF-8 encoded string.
+
+The `paragraph_anchors` field contains a list of anchors, each represented by a struct with an inclusive starting code point offset (`start`), an exclusive ending code point offset (`end`), a nullable `qid` field, a nullable `pageid` field, and an NFC normalized, UTF-8 encoded Wikipedia `title` field.
+
+Additionally, each paragraph has `article_title`, `article_pageid`, and (nullable) `article_qid` fields referring to the article the paragraph came from.
+
+There is also a nullable, NFC normalized, UTF-8 encoded `section_heading` field and an integer `section_level` field, referring to the heading (if one exists) of the article section the paragraph came from and to that section's level in the section hierarchy.
+
+The `qid` fields refer to Wikidata's QID identifiers, while the `pageid` and `title` fields refer to Wikipedia's pageID and title identifiers (there is a one-to-one mapping between pageIDs and titles).
+
+**NOTE:** An anchor will always have a `title`, but it won't necessarily have a `pageid`, because Wikipedia allows anchors that point to nonexistent articles.
+
+An example from the WikiAnc HR test set looks as follows:
+
+```
+{
+  "uuid": "8a9569ea-a398-4d14-8bce-76c263a8c0ac",
+  "article_title": "Špiro_Dmitrović",
+  "article_pageid": 70957,
+  "article_qid": 16116278,
+  "section_heading": null,
+  "section_level": 0,
+  "paragraph_text": "Špiro Dmitrović (Benkovac, 1803. – Zagreb, 6. veljače 1868.) hrvatski časnik i politički borac u doba ilirizma.",
+  "paragraph_anchors": [
+    {
+      "start": 17,
+      "end": 25,
+      "qid": 397443,
+      "pageid": 14426,
+      "title": "Benkovac"
+    },
+    {
+      "start": 27,
+      "end": 32,
+      "qid": 6887,
+      "pageid": 1876,
+      "title": "1803."
+    },
+    {
+      "start": 35,
+      "end": 41,
+      "qid": 1435,
+      "pageid": 5903,
+      "title": "Zagreb"
+    },
+    {
+      "start": 43,
+      "end": 53,
+      "qid": 2320,
+      "pageid": 496,
+      "title": "6._veljače"
+    },
+    {
+      "start": 54,
+      "end": 59,
+      "qid": 7717,
+      "pageid": 1811,
+      "title": "1868."
+    },
+    {
+      "start": 102,
+      "end": 110,
+      "qid": 680821,
+      "pageid": 54622,
+      "title": "Ilirizam"
+    }
+  ]
+}
+```
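Since `start` and `end` are code point offsets and Python strings also index by code points, each anchor's surface form can be recovered with a plain slice. A minimal sketch over the example record above, abbreviated to two anchors:

```python
# Minimal sketch: recover anchor surface forms from the example record above.
# `start`/`end` are code point offsets, which match Python string indexing.
record = {
    "paragraph_text": (
        "Špiro Dmitrović (Benkovac, 1803. – Zagreb, 6. veljače 1868.) "
        "hrvatski časnik i politički borac u doba ilirizma."
    ),
    "paragraph_anchors": [
        {"start": 17, "end": 25, "qid": 397443, "title": "Benkovac"},
        {"start": 35, "end": 41, "qid": 1435, "title": "Zagreb"},
    ],
}

for anchor in record["paragraph_anchors"]:
    mention = record["paragraph_text"][anchor["start"] : anchor["end"]]
    print(f"{mention!r} -> {anchor['title']} (QID {anchor['qid']})")
# 'Benkovac' -> Benkovac (QID 397443)
# 'Zagreb' -> Zagreb (QID 1435)
```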
+
+### Data Fields
+
+- `uuid`: a UTF-8 encoded string representing a v4 UUID that uniquely identifies the example
+- `article_title`: an NFC normalized, UTF-8 encoded Wikipedia title of the article; spaces are replaced with underscores
+- `article_pageid`: an integer representing the Wikipedia pageID of the article
+- `article_qid`: an integer representing the Wikidata QID this article refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset
+- `section_heading`: a nullable, NFC normalized, UTF-8 encoded string representing the section heading
+- `section_level`: an integer representing the level of the section in the section hierarchy
+- `paragraph_text`: an NFC normalized, UTF-8 encoded string representing the paragraph
+- `paragraph_anchors`: a list of structs representing anchors; each anchor has:
+  - `start`: an integer representing the inclusive starting code point offset of the anchor
+  - `end`: an integer representing the exclusive ending code point offset of the anchor
+  - `qid`: a nullable integer representing the Wikidata QID this anchor refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset
+  - `pageid`: a nullable integer representing the Wikipedia pageID of the anchor; it can be null if the article didn't exist in Wikipedia at the time of the creation of the original dataset
+  - `title`: an NFC normalized, UTF-8 encoded string representing the Wikipedia title of the anchor; spaces are replaced with underscores; it can refer to a nonexistent Wikipedia article (see the identifier sketch after this list)
+
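Because `title` maps one-to-one to `pageid` when the page exists, and `qid` is stored as a bare integer, the usual Wikipedia and Wikidata URLs can be rebuilt directly from these fields. An illustrative sketch; the URL patterns below are the standard Wikipedia/Wikidata conventions, not something the dataset itself defines:

```python
# Illustrative helpers: map anchor fields to URLs. The URL patterns are the
# standard Wikipedia/Wikidata ones, not something the dataset defines.
def wikipedia_url(title: str) -> str:
    # Titles already use underscores in place of spaces.
    return f"https://hr.wikipedia.org/wiki/{title}"

def wikidata_url(qid: int) -> str:
    # QIDs are stored as bare integers, e.g. 1435 for Q1435.
    return f"https://www.wikidata.org/wiki/Q{qid}"

anchor = {"start": 35, "end": 41, "qid": 1435, "pageid": 5903, "title": "Zagreb"}
print(wikipedia_url(anchor["title"]))   # https://hr.wikipedia.org/wiki/Zagreb
if anchor["qid"] is not None:           # remember: qid is nullable
    print(wikidata_url(anchor["qid"]))  # https://www.wikidata.org/wiki/Q1435
```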
+### Data Splits
+
+The data is split into training, validation, and test sets; paragraphs belonging to the same article aren't necessarily in the same split. The final split sizes are as follows:
+
+|                                   |   Train   | Validation |   Test    |
+| :-------------------------------- | :-------: | :--------: | :-------: |
+| WikiAnc HR - articles             |  192,653  |  116,375   |  116,638  |
+| WikiAnc HR - paragraphs           | 2,346,651 |  292,590   |  293,557  |
+| WikiAnc HR - anchors              | 8,368,928 | 1,039,851  | 1,044,828 |
+| WikiAnc HR - anchors with QIDs    | 7,160,367 |  891,959   |  896,414  |
+| WikiAnc HR - anchors with pageIDs | 7,179,116 |  894,313   |  898,692  |
+
+**NOTE:** The article counts above give, for each split, the number of articles with at least one paragraph in that split.
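The splits ship as standalone Parquet files, so they load directly with the `datasets` library. A minimal sketch; note that the Hub repository id `cyanic-selkie/wikianc-hr` is inferred from the English version linked above, so verify it before relying on it:

```python
# Minimal loading sketch. The repo id "cyanic-selkie/wikianc-hr" is inferred
# from the English counterpart linked above; verify it before relying on it.
from datasets import load_dataset

# From the Hub (Git LFS resolution is handled transparently):
ds = load_dataset("cyanic-selkie/wikianc-hr")

# Or from the local Parquet files added in this commit:
ds_local = load_dataset(
    "parquet",
    data_files={
        "train": "train.parquet",
        "validation": "validation.parquet",
        "test": "test.parquet",
    },
)

print(ds)  # DatasetDict with train/validation/test, schema as in Data Fields
```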
+
+## Additional Information
+
+### Licensing Information
+
+The WikiAnc HR dataset is released under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) license.
test.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b621cda507c4b3dfff2d545d81bed541b0a5be9899f34440a55c213e1bf3bb40
+size 48895361
train.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:20e58b58f8080b4ce460ec04d2eb0128db9001a3b5ede2fffcc3b559f9adffc3
+size 519591982
validation.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84d2edeb69e35d69e9ef88ee5b523f46fdfce9d79d8c5d9425e98b70187c5638
+size 48714578