---
language:
- nl
license: eupl-1.1
size_categories:
- 10K<n<100K
tags:
- documents
- fine-tuning
dataset_info:
  features:
  - name: prompt_id
    dtype: int64
  - name: message
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 9011452
    num_examples: 9900
  - name: test
    num_bytes: 998068
    num_examples: 1100
  - name: val
    num_bytes: 1000675
    num_examples: 1100
  - name: discard
    num_bytes: 7897005
    num_examples: 8718
  download_size: 5846654
  dataset_size: 18907200
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: val
    path: data/val-*
  - split: discard
    path: data/discard-*
---

This dataset is a modified version of the [AmsterdamDocClassificationDataset](https://huggingface.co/datasets/FemkeBakker/AmsterdamDocClassificationDataset).
The original dataset consists of Dutch Raadsinformatie documents from the Municipality of Amsterdam, published in accordance with the Open Government Act (Woo).
In this modified version, each document is truncated to its first 200 tokens.
The dataset is used to fine-tune large language models (LLMs) for the Assessing LLMs for Document Classification project.
Each document is formatted into a zero-shot prompt and turned into a conversation, where the ideal model response is the predicted class in JSON format.
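
As a minimal sketch, the dataset can be loaded and one conversation inspected with the `datasets` library. The repository id below is a hypothetical placeholder; the field names `prompt_id` and `message` come from the features listed in the card metadata above.

```python
from datasets import load_dataset

# Hypothetical repository id; replace with this dataset's actual id on the Hub.
ds = load_dataset("org-name/amsterdam-doc-classification-200tokens", split="train")

example = ds[0]
print(example["prompt_id"])            # integer identifier of the prompt
for turn in example["message"]:        # list of {"role", "content"} turns
    print(f'{turn["role"]}: {turn["content"][:80]}')
```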

Specifics:
- Truncation: the first 200 tokens of each document; documents are tokenized with the Llama tokenizer (see the sketch after this list).
- Data split:
  - test set: the first 100 documents of each class (1,100 documents in total)
  - train pool: the remaining documents, with a maximum of 1,500 per class (11,000 documents)
    - train set: 90% of the train pool, used for fine-tuning the model (9,900 documents)
    - val set: the remaining 10% of the train pool, used to evaluate the loss during training (1,100 documents)
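
A minimal sketch of how the truncation step could be reproduced, assuming a Llama tokenizer from `transformers`; the card does not specify the exact checkpoint, so the one below is an assumption:

```python
from transformers import AutoTokenizer

# Assumed checkpoint; the card only says "the Llama tokenizer".
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def truncate_to_first_tokens(text: str, max_tokens: int = 200) -> str:
    """Keep only the text corresponding to the first `max_tokens` tokens."""
    token_ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    return tokenizer.decode(token_ids[:max_tokens])
```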

Data sources:
- https://amsterdam.raadsinformatie.nl/
- https://openresearch.amsterdam/
- https://open.amsterdam/

This dataset is part of [insert thesis info] in collaboration with Amsterdam Intelligence for the City of Amsterdam.