---
license: cc-by-nc-4.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- geitje
- fingeitje
- dutch
- nl
- finance
base_model: BramVanroy/GEITje-7B-ultra
datasets:
- snoels/FinGEITje-sft
model-index:
- name: snoels/FinGEITje-7B-sft
  results: []
language:
- nl
pipeline_tag: text-generation
inference: false
---

<p align="center" style="margin:0;padding:0">
<img src="https://huggingface.co/snoels/FinGEITje-7B-sft/resolve/main/fingeitje-banner.png" alt="FinGEITje Banner" width="1000"/>
</p>

<div style="margin:auto; text-align:center">
<h1 style="margin-bottom: 0">🐐 FinGEITje 7B</h1>
<em>A large open Dutch financial language model.</em>
</div>

This model is a fine-tuned version of [BramVanroy/GEITje-7B-ultra](https://huggingface.co/BramVanroy/GEITje-7B-ultra) on the [snoels/FinGEITje-sft](https://huggingface.co/datasets/snoels/FinGEITje-sft) dataset.

It achieves the following results on the evaluation set:
- **Loss**: 0.3928

## Model Description

FinGEITje 7B is a large open Dutch financial language model with 7 billion parameters, based on Mistral 7B. It has been further trained on Dutch financial texts, enhancing its proficiency in the Dutch language and its knowledge of financial topics. As a result, FinGEITje provides more accurate and relevant responses in the domain of finance, including areas such as banking, investment, insurance, and economic policy.

FinGEITje builds upon the foundation of GEITje models, which are Dutch language models based on Mistral 7B and have been further pretrained on extensive Dutch corpora.

## Intended Uses & Limitations

### Intended Use

- **Educational Purposes**: Researching Dutch financial language patterns, generating financial texts for study, or aiding in financial education.
- **Content Generation**: Assisting in drafting Dutch financial reports, summaries, articles, or other finance-related content.
- **Financial Chatbots**: Integrating into chatbots to provide responses related to Dutch financial matters, customer service, or financial advice simulations.
- **Financial Analysis**: Supporting analysis by generating insights or summarizing financial data in Dutch.

### Limitations

- **Not for Real-time Financial Decisions**: The model does not have access to real-time data and may not reflect the most current financial regulations or market conditions. It should not be used for making actual financial decisions.
- **Accuracy**: While trained on financial data, the model may still produce incorrect or nonsensical answers, especially for prompts outside its training scope.
- **Bias**: The model may reflect biases present in its training data, potentially affecting the neutrality of its responses.
- **Ethical Use**: Users should ensure that the model's outputs comply with ethical standards and do not promote misinformation or harmful content.

### Ethical Considerations

- **License Restrictions**: FinGEITje is released under a non-commercial license (CC BY-NC 4.0). Commercial use is prohibited without obtaining proper permissions.
- **Misinformation**: Users should verify the information generated by the model, particularly for critical applications.
- **Data Privacy**: The model should not be used to generate or infer personal identifiable information or confidential data.

## Training and Evaluation Data

### Training Data

FinGEITje 7B was fine-tuned on the [snoels/FinGEITje-sft](https://huggingface.co/datasets/snoels/FinGEITje-sft) dataset, which consists of translated and processed Dutch financial texts. This dataset includes a wide range of financial topics and instruction tuning data, ensuring that the model gains a comprehensive understanding of the financial domain in Dutch.

#### Data Processing Steps

1. **Translation**: Original instruction tuning datasets were translated into Dutch using a specialized translation service to maintain the integrity of financial terminology.
2. **Post-processing**: The translated data underwent post-processing to correct any translation inconsistencies and to format it according to the original dataset structure.
3. **Formatting**: The data was formatted to match the style and requirements of instruction tuning datasets, ensuring compatibility with the fine-tuning process.
4. **Filtering**: A Dutch language check and predefined validation checks were applied to filter out low-quality or irrelevant data, improving overall dataset quality; a minimal sketch of such a check follows this list.
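
The exact validation checks are not published with this card; the following is a minimal sketch of what the language filter could look like, assuming the `langdetect` package and a simple length heuristic (both are assumptions, not the actual pipeline).

```python
# Hypothetical Dutch-language filter -- the actual checks used to build
# FinGEITje-sft are not published here. Assumes the `langdetect` package.
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def keep_sample(text: str, min_chars: int = 20) -> bool:
    """Keep a sample only if it is long enough and detected as Dutch."""
    if len(text.strip()) < min_chars:
        return False
    try:
        return detect(text) == "nl"
    except LangDetectException:
        # Detection fails on very short or non-linguistic strings.
        return False

samples = [
    "De rente op spaarrekeningen is licht gestegen.",     # kept: Dutch
    "Interest rates on savings accounts rose slightly.",  # dropped: English
    "€€€",                                                # dropped: too short
]
filtered = [s for s in samples if keep_sample(s)]
```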

### Evaluation Data

The model was evaluated using:

- **[snoels/FinDutchBench](https://huggingface.co/datasets/snoels/FinDutchBench)**: A Dutch financial benchmark dataset designed to assess the model's performance on a range of financial tasks, including its understanding of financial concepts, its ability to generate financial texts, and its comprehension of Dutch financial regulations. A loading sketch is shown below.
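
As a starting point, the benchmark can be pulled from the Hub with `datasets`; configuration and split names are not documented on this card, so inspect the loaded object first.

```python
# Sketch: loading the benchmark from the Hub. Configuration and split names
# are not documented here; print the object to inspect its structure.
from datasets import load_dataset

bench = load_dataset("snoels/FinDutchBench")  # may require a config name
print(bench)
```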

## Training Procedure

FinGEITje was trained following the supervised fine-tuning (SFT) methodology described in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook), which provides a structured training pipeline.

### Training Configuration

- The training configuration is based on the recipe outlined in the alignment handbook and can be found in the `config_qlora.yml` file.
- The model was further trained using **QLoRA** (Quantized LoRA) for memory-efficient fine-tuning; see the sketch after this list.
- Training was conducted on a multi-GPU setup to handle the large model size and extensive dataset.
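
For illustration, a typical QLoRA setup with `transformers` and `peft` looks roughly like the following. This is a hedged sketch, not the contents of `config_qlora.yml`; the LoRA rank, alpha, dropout, and target modules are assumed values.

```python
# Illustrative QLoRA setup -- a sketch, not the actual config_qlora.yml.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# Quantize the frozen base model to 4-bit (NF4) to cut memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Train small low-rank adapters on top of the quantized base weights.
lora_config = LoraConfig(
    r=16,                  # assumed rank
    lora_alpha=32,         # assumed scaling
    lora_dropout=0.05,     # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```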

### Training Hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- **Learning Rate**: 0.0002
- **Train Batch Size**: 4
- **Evaluation Batch Size**: 8
- **Seed**: 42
- **Distributed Type**: Multi-GPU
- **Gradient Accumulation Steps**: 2
- **Total Train Batch Size**: 8
- **Optimizer**: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- **LR Scheduler Type**: Cosine
- **Warmup Ratio**: 0.1
- **Number of Epochs**: 1
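
For reference, these values map onto Hugging Face `TrainingArguments` roughly as follows; this is a sketch, and the authoritative values live in the alignment-handbook recipe (`config_qlora.yml`).

```python
# Rough mapping of the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fingeitje-7b-sft",    # placeholder path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,    # 4 x 2 = total train batch size of 8
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the optimizer defaults.
)
```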

### Training Results

| Training Loss | Epoch | Step | Validation Loss |
|---------------|-------|------|-----------------|
|     0.406     |  1.0  | 3922 |      0.3928     |

### Evaluation Package

The evaluation package defines a set of metrics per task, grouped per dataset, to evaluate the model's performance across the different financial domains. The following evaluation notebooks are available:

- **[Evaluation in Dutch](https://github.com/your-repo/evaluation_nl.ipynb)**: Assesses the model's performance on the Dutch financial benchmark dataset.
- **[Evaluation in English](https://github.com/your-repo/evaluation_en.ipynb)**: Evaluates the model's performance on English financial benchmarks for comparison purposes.

### Framework Versions

- **PEFT**: 0.7.1
- **Transformers**: 4.39.0.dev0
- **PyTorch**: 2.1.2
- **Datasets**: 2.14.6
- **Tokenizers**: 0.15.2

## How to Use

FinGEITje 7B can be used with the Hugging Face Transformers library together with PEFT, which loads the LoRA adapters on top of the base model.

### Installation

Ensure you have the necessary libraries installed:

```bash
pip install torch transformers peft accelerate
```

### Loading the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("BramVanroy/GEITje-7B-ultra", use_fast=False)

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained("BramVanroy/GEITje-7B-ultra", device_map='auto')

# Load the FinGEITje model with PEFT adapters
model = PeftModel.from_pretrained(base_model, "snoels/FinGEITje-7B-sft", device_map='auto')
```
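
Optionally, the adapters can be merged into the base weights, which returns a plain `transformers` model and removes the adapter overhead at inference time:

```python
# Optional: fold the LoRA adapters into the base weights for faster inference.
model = model.merge_and_unload()
```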

### Generating Text

```python
# Prepare the input
input_text = "Wat zijn de laatste trends in de Nederlandse banksector?"
input_ids = tokenizer.encode(input_text, return_tensors='pt').to(model.device)

# Generate a response
outputs = model.generate(input_ids, max_new_tokens=200)  # max_new_tokens counts only generated tokens
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(response)
```
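
Because the model is instruction-tuned, prompting through the tokenizer's chat template (assuming the base tokenizer ships one) generally works better than raw text:

```python
# Prompt via the chat template (assumes the tokenizer defines one).
messages = [
    {"role": "user", "content": "Wat zijn de laatste trends in de Nederlandse banksector?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```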

### Interactive Conversation

```python
from transformers import pipeline, Conversation

# Initialize the conversational pipeline
chatbot = pipeline(
    "conversational",
    model=model,
    tokenizer=tokenizer,
    device_map='auto'
)

# Start a conversation
start_messages = [
    {"role": "system", "content": "Je bent een behulpzame financiële assistent."},
    {"role": "user", "content": "Kun je me iets vertellen over de huidige rentevoeten voor hypotheekleningen in Nederland?"}
]

conversation = Conversation(start_messages)  # Conversation accepts a list of message dicts

# Get the assistant's response
conversation = chatbot(conversation)
print(conversation.messages[-1]["content"])  # the assistant's latest reply
```

## Limitations and Future Work

While FinGEITje 7B demonstrates significant improvements in understanding and generating Dutch financial content, certain limitations exist:

- **Data Cutoff**: The model's knowledge is limited to the data it was trained on and may not include the most recent developments in the financial sector.
- **Accuracy Concerns**: The model may generate incorrect or outdated information. Users should verify critical information with reliable sources.
- **Biases**: Potential biases in the training data may affect the neutrality and fairness of the model's responses.
- **Language Scope**: Primarily designed for Dutch; performance in other languages is not optimized.

### Future Work

- **Data Updates**: Incorporate more recent and diverse financial datasets to keep the model up-to-date.
- **Bias Mitigation**: Implement techniques to identify and reduce biases in the model's outputs.
- **Performance Enhancement**: Fine-tune on more specialized financial topics and complex financial tasks.
- **Multilingual Expansion**: Extend support to other languages relevant to the financial sector in the Netherlands and Europe.

## Acknowledgements

We would like to thank:

- **Rijgersberg** ([GitHub](https://github.com/Rijgersberg)) for creating [GEITje](https://github.com/Rijgersberg/GEITje), one of the first Dutch foundation models, and for contributing significantly to the development of Dutch language models.
- **Bram Vanroy** ([GitHub](https://github.com/BramVanroy)) for creating [GEITje-7B-ultra](https://huggingface.co/BramVanroy/GEITje-7B-ultra), an open-source Dutch chat model, and for sharing training, translation, and evaluation resources.
- **Contributors of the Alignment Handbook** for providing valuable resources that guided the development and training process of FinGEITje.

## Citation

If you use FinGEITje in your work, please cite:

```bibtex
@article{FinGEITje2024,
  title={A Dutch Financial Large Language Model},
  author={Noels, Sander and De Blaere, Jorne and De Bie, Tijl},
  journal={arXiv preprint arXiv:xxxx.xxxxx},
  year={2024},
  url={https://arxiv.org/abs/xxxx.xxxxx}
}
```

## License

This model is licensed under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) license.

### Summary

- **Attribution**: You must give appropriate credit, provide a link to the license, and indicate if changes were made.
- **NonCommercial**: You may not use the material for commercial purposes.

For the full license text, please refer to the [license document](https://creativecommons.org/licenses/by-nc/4.0/legalcode).

## Contact

For any inquiries or questions, please contact [Sander Noels](mailto:sander.noels@ugent.be).