AmsterdamDocClassificationMistral200T3Epochs

As part of the Assessing Large Language Models for Document Classification project by the Municipality of Amsterdam, we fine-tune Mistral, Llama, and GEITje for document classification. The fine-tuning is performed using the AmsterdamBalancedFirst200Tokens dataset, which consists of documents truncated to the first 200 tokens. In our research, we evaluate the fine-tuning of these LLMs across one, two, and three epochs. This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 and has been fine-tuned for three epochs.

It achieves the following results on the evaluation set:

  • Loss: 0.6716
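
Below is a minimal sketch of loading this model for inference with the transformers library. The truncation step mirrors the 200-token setup described above; the prompt wording and expected label format are assumptions here, since the project's exact prompt template is defined in its training code (see the GitHub repository referenced below).

```python
# Sketch: load the fine-tuned model and classify a document.
# The instruction text below is hypothetical; the project's actual
# prompt template and label set live in its training code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FemkeBakker/AmsterdamDocClassificationMistral200T3Epochs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Truncate the document to its first 200 tokens, matching the
# AmsterdamBalancedFirst200Tokens training data.
document = "..."  # full document text goes here
token_ids = tokenizer(document, truncation=True, max_length=200)["input_ids"]
truncated = tokenizer.decode(token_ids, skip_special_tokens=True)

messages = [{"role": "user", "content": f"Classify this document:\n{truncated}"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```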

Training and evaluation data

  • The training data consists of 9900 documents and their labels formatted into conversations.
  • The evaluation data consists of 1100 documents and their labels formatted into conversations.
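
For illustration, a single document/label pair formatted as a conversation might look like the sketch below. The roles, instruction wording, and label placeholder are assumptions; the real formatting is defined in the project's training code.

```python
# Hypothetical shape of one formatted training example.
example = {
    "messages": [
        {"role": "user",
         "content": "Classify this document:\n<first 200 tokens of the document>"},
        {"role": "assistant", "content": "<label>"},
    ]
}
```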

Training procedure

See the GitHub repository for specifics about the training procedure and the code.
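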

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3
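
As a sketch, these settings map onto transformers TrainingArguments roughly as follows. The output_dir is hypothetical, bf16 is inferred from the model's stored tensor type, and the total train batch size of 16 follows from train_batch_size (2) × gradient_accumulation_steps (8) on a single device.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-doc-classification",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,   # effective batch size: 2 x 8 = 16
    optim="adamw_torch",             # Adam with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
    bf16=True,                       # weights are stored in bfloat16
)
```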

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9863        | 0.1988 | 123  | 0.8790          |
| 0.7918        | 0.3976 | 246  | 0.8324          |
| 0.5133        | 0.5964 | 369  | 0.7915          |
| 0.5702        | 0.7952 | 492  | 0.7591          |
| 0.7897        | 0.9939 | 615  | 0.6976          |
| 0.5872        | 1.1927 | 738  | 0.6768          |
| 0.4242        | 1.3915 | 861  | 0.6649          |
| 0.5222        | 1.5903 | 984  | 0.6609          |
| 0.2609        | 1.7891 | 1107 | 0.6599          |
| 0.4834        | 1.9879 | 1230 | 0.6601          |
| 0.5540        | 2.1891 | 1353 | 0.6769          |
| 0.2486        | 2.3879 | 1476 | 0.6720          |
| 0.3030        | 2.5867 | 1599 | 0.6709          |
| 0.4830        | 2.7855 | 1722 | 0.6716          |
| 0.6027        | 2.9842 | 1845 | 0.6716          |

Training time: fine-tuning the model for three epochs took 2 hours and 12 minutes in total.

Framework versions

  • Transformers 4.41.1
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Acknowledgements

This model was trained as part of [insert thesis info] in collaboration with Amsterdam Intelligence for the City of Amsterdam.
