---
license: eupl-1.1
language:
- nl
tags:
- documents
- fine-tuning
size_categories:
- 10K<n<100K
---

This dataset is used in the Woo-document classification project of the Municipality of Amsterdam.
The goal of this dataset is to fine-tune LLMs. Each document is formatted into a zero-shot prompt and then turned into a conversation,
where the ideal model response is the predicted class formatted as JSON.

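As a minimal sketch of the conversation format described above: the zero-shot classification prompt becomes the user turn, and the gold class, serialized as JSON, becomes the assistant turn. The prompt wording, the `to_conversation` helper, and the example class name are illustrative assumptions, not the project's actual code.

```python
import json

def to_conversation(document_text: str, label: str) -> dict:
    # Hypothetical zero-shot prompt; the real wording is project-specific.
    prompt = (
        "Classify the following Woo document into one of the known "
        "document classes. Answer with JSON only.\n\n"
        f"Document:\n{document_text}"
    )
    # The ideal assistant response is the class label formatted as JSON.
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": json.dumps({"class": label})},
        ]
    }

example = to_conversation("Besluit op Woo-verzoek ...", "besluit")
print(example["messages"][1]["content"])  # {"class": "besluit"}
```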
Specifics:
- Truncation: the first 200 tokens of each document. Docs are tokenized using the Llama tokenizer.
- Data split:
  - test set: first 100 docs of each class (1100 docs in total)
  - train set: remaining docs, with a maximum of 1500 docs per class (11000 docs)
  - train set: 90% of the train set is used for fine-tuning the model (9900 docs)
  - val set: 10% of the train set is used for evaluating the loss during training (1100 docs)
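The truncation step above can be sketched as keeping only the first 200 token IDs and decoding them back to text. The project uses the Llama tokenizer; to keep this sketch self-contained and runnable, a trivial whitespace tokenizer stands in for it (an assumption — the real pipeline would call the Llama tokenizer's encode/decode instead).

```python
def truncate_to_tokens(text: str, encode, decode, max_tokens: int = 200) -> str:
    # Keep only the first `max_tokens` tokens, then decode back to text.
    tokens = encode(text)[:max_tokens]
    return decode(tokens)

# Stand-in whitespace tokenizer; the real project uses the Llama tokenizer.
encode = str.split
decode = " ".join

doc = " ".join(f"word{i}" for i in range(500))
truncated = truncate_to_tokens(doc, encode, decode)
print(len(truncated.split()))  # 200
```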