add sizes of all data sets (manual upload)
README.md CHANGED
@@ -7,10 +7,10 @@ size_categories: n>1K
 source_datasets: extended
 task_categories:
 - text-classification
-pretty_name:
+pretty_name: GOVDATA dataset titles labelled
 ---
 
-# Dataset Card for
+# Dataset Card for GOVDATA dataset titles labelled
 
 ## Dataset Description
 
@@ -20,7 +20,7 @@ The dataset is an annotated corpus of 1258 records from 'gov data'. The annotati
 
 ### Languages
 
-
+The language of the data is German.
 
 ## Dataset Structure
 
@@ -30,9 +30,13 @@ The dataset is an annotated corpus of 1258 records from 'gov data'. The annotati
 
 ### Data Fields
 
-
-
-
+| dataset | size |
+|-----|-----|
+| small/train | 19.96 KB |
+| small/test | 4.85 KB |
+| large/train | 451.55 KB |
+| large/test | 109.47 KB |
+
 
 An example from the 'train' split looks as follows:
 
@@ -56,7 +60,11 @@ The data fields are the same among all splits:
 
 ### Data Splits
 
-
+| dataset_name | dataset_splits | train_size | test_size |
+|-----|-----|-----|-----|
+| dataset_large | train, test | 1009 | 249 |
+| dataset_small | train, test | 37 | 13 |
+
 
 ## Dataset Creation
 
@@ -71,8 +79,9 @@ Several sources were used for the annotation process. A sample was collected fro
 #### Annotation process
 
 The data was annotated in four rounds and one additional test round. In each round, a percentage of the data was allocated to all annotators to calculate the inter-annotator agreement using Cohen's Kappa.
-
 The following table shows the results of the annotations:
+
+
 | | **Cohen's Kappa** | **Number of Annotators** | **Number of Documents** |
 |----------------|:----------------:|--------------------------|-------------------------|
 | **Test Round** | .77 | 6 | 50 |
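The annotation process above reports inter-annotator agreement as Cohen's Kappa. A minimal sketch of how that statistic is computed for one pair of annotators (the label names and values below are illustrative, not taken from the dataset):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's Kappa for two annotators labelling the same documents."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of documents with identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    # Kappa rescales observed agreement relative to chance agreement.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical annotations for six documents; the two annotators
# disagree on one document.
a = ["relevant", "relevant", "other", "other", "relevant", "other"]
b = ["relevant", "relevant", "other", "relevant", "relevant", "other"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

On the commonly used Landis–Koch scale, the test-round value of .77 reported in the table falls in the "substantial agreement" band (0.61–0.80).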